Sunday, December 15, 2013

Better


I'm now controlling the layer collision with a raycast, and the results are better but not perfect. I cast from about knee level down to a bit past foot level, and if there's a hit there I turn the branch collisions on, otherwise they're off. This neatly solves the problem of the player banging into the sides of colliders on the downward arc of a jump, but it introduces another one: some of the branches are close enough together vertically that when the player jumps, his head hits the branch above before the ray leaves the branch below, and so the overhead branch never gets turned off. I've tried tweaking the length of the ray, but it has to be at least a certain length, otherwise certain slanted platforms have trouble registering and the player falls through.

The best thing to do here is probably to raise the problem branches so that they don't cause as much head-bumpage. It's an interesting lesson in that everything has to react to everything else: the movement mechanisms and the environment design are totally interdependent. Well, it was about time for an art push on this level anyway, so I guess it's not too much trouble to move some tree branches around. Here's the final version of the raycast script:


using UnityEngine;
using System.Collections;

public class footShooter : MonoBehaviour {

    RaycastHit info;
    CharacterController controller;
    CapsuleCollider cap;
    Vector3 center;
    Vector3 lowCenter;
    float floorDist;

    void Start ()
    {
        cap = gameObject.GetComponent<CapsuleCollider>();
        controller = gameObject.GetComponent<CharacterController>();
        floorDist = controller.height / 3;
    }

    void Update ()
    {
        // World-space center of the controller, then a point about a third of the
        // capsule's height below that -- roughly knee level.
        center = transform.position + controller.center;
        lowCenter = new Vector3(center.x, center.y - (controller.bounds.size.y / 3), center.z);
        Ray footRay = new Ray(lowCenter, Vector3.down);

        if ((controller != null) && (cap != null))
        {
            if (!controller.isGrounded)
            {
                Debug.Log("centre = " + center);
                if (Physics.Raycast(footRay, out info, floorDist))
                {
                    // Something solid under the feet: collide with the branch layer.
                    Debug.Log("red ray");
                    Debug.DrawRay(footRay.origin, footRay.direction * floorDist, Color.red);
                    Debug.Log("hitting branches");
                    Physics.IgnoreLayerCollision(10, 13, false);
                }
                else
                {
                    // Nothing below: let the player pass through the branches.
                    Debug.Log("blue ray");
                    Debug.DrawRay(footRay.origin, footRay.direction * floorDist, Color.blue);
                    Debug.Log("ignoring branches");
                    Physics.IgnoreLayerCollision(10, 13, true);
                }
            }
        }
        else
        {
            Debug.Log("controller is NULL");
        }
    }
}


Sunday, December 8, 2013

The Cult Of Ray


Raycasting is a common, essential technique in modern game making that I had until now more or less avoided. I'd need it in order to make my one-way platforms work as intended, but I was having trouble getting it going in the context of the platformer, so I decided to back up and start with a blank project; to kill two birds with one stone, it seemed like a good moment to set up something I've had in my notes for a while: a template for experimenting with FPS gameplay in Unity.

As laid out by this tutorial, a basic FPS setup is no further away than a plane, a camera, and a First Person Controller dragged out of the Standard Assets folder and into the scene view. Before I could get to the raycasting, though, I needed to scratch a different itch: modern PC FPS titles have spoiled me into expecting gamepad support as a seamless alternative to mouselook.

Before you invite me to turn in my PC gamer badge and graphics card, consider a game like the excellent Metro: Last Light, which I'm currently playing through thanks to the fall Steam sale. I've experimented with both control schemes, and while, yes, you can't beat a mouse for combat turning and aiming speed, when you're involved in an "adventure FPS" with a lot of different player verbs, it just feels really natural to have something like "wipe condensation from gas mask" on a shoulder button. Maybe this goes back to my lack of skill as a typist, but unless we're talking about WASD I'm never quite certain of my keyboard execution in a twitch situation. The controller is also just more interesting to me for whatever reason. 

So I had to have my 360 controller, and I did it by replacing the script MouseLook (part of that Standard Assets character package) with StickAndMouseLook, which goes something like this:

using UnityEngine;
using System.Collections;

[AddComponentMenu("Camera-Control/Stick and Mouse Look")]
public class StickAndMouseLook : MonoBehaviour {

    public float sensitivityX = 15F;
    public float sensitivityY = 15F;

    public float minimumX = -360F;
    public float maximumX = 360F;

    public float minimumY = -60F;
    public float maximumY = 60F;

    float rotationY = 0F;
    string[] jNames;

    bool isGamepad;

    void Update ()
    {
        // If any joystick is plugged in, prefer it over the mouse.
        jNames = Input.GetJoystickNames();
        if (jNames.Length > 0)
        {
            //Debug.Log(jNames[0]);
            isGamepad = true;
        }
        else
        {
            //Debug.Log("no gamepad");
            isGamepad = false;
        }

        if (isGamepad)
        {
            float rotationX = transform.localEulerAngles.y + Input.GetAxis("rightStickX") * sensitivityX;

            rotationY += Input.GetAxis("rightStickY") * sensitivityY;
            rotationY = Mathf.Clamp(rotationY, minimumY, maximumY);

            transform.localEulerAngles = new Vector3(-rotationY, rotationX, 0);
        }
        else
        {
            float rotationX = transform.localEulerAngles.y + Input.GetAxis("Mouse X") * sensitivityX;

            rotationY += Input.GetAxis("Mouse Y") * sensitivityY;
            rotationY = Mathf.Clamp(rotationY, minimumY, maximumY);

            transform.localEulerAngles = new Vector3(-rotationY, rotationX, 0);
        }
    }

    void Start ()
    {
        // Make the rigid body not change rotation
        if (rigidbody)
            rigidbody.freezeRotation = true;
    }
}

Of course you need those axes to actually exist, so they have to be defined in the project's Input settings, named to match what the script reads ("rightStickX" and "rightStickY") and mapped to the gamepad's right-stick joystick axes.

Now you can use the mouse, plug in a gamepad and play that way, then unplug it and you're back to mouselook. Movement gets handled automatically by the character motor because it uses the horizontal / vertical input axes, which the sticks already feed as well.

Interesting to note that the triggers are read as axes themselves rather than buttons, requiring some fiddling elsewhere to get semi-auto behavior (a sketch of that is below), and that the Invert checkbox is required to get "move up to look up" behavior (you know, Normal Person style) out of the Y stick. Either I set something up backwards or one of the coders behind this is one of those sickos who thinks Inverted Y controls should be the status quo. I'll give them the benefit of the doubt and assume the mistake is mine.
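For what it's worth, here's a rough sketch of the kind of trigger fiddling I mean, since it isn't shown above: treating the analog trigger axis as a one-shot button by watching for the frame it crosses a threshold. The axis name "triggers", the 0.5f threshold, and the class itself are illustrative assumptions, not code from the project.

using UnityEngine;

public class TriggerAsButton : MonoBehaviour {

    bool triggerWasDown;

    void Update ()
    {
        // Both triggers often share a single axis, so take the absolute value.
        bool triggerIsDown = Mathf.Abs(Input.GetAxis("triggers")) > 0.5f;

        // Only fire on the frame the trigger goes from "up" to "down".
        if (triggerIsDown && !triggerWasDown)
        {
            Debug.Log("bang");
        }

        triggerWasDown = triggerIsDown;
    }
}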

So, raycasts. The vexing stumbling block for me was always this: rays are invisible. Physics.Raycast takes as parameters an origin, a direction, and a distance. Debug.DrawRay can be used to draw rays, but its parameters are just an origin and a direction, no distance. The reason behind this involves some of that spooky math stuff, wherein a single vector can contain both a direction and a distance. There's some normalization math that could be involved here, but of course once I saw that I ran in the opposite direction.
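In practice it's less spooky than it sounds. A toy example (not from the project, assumed to live in some MonoBehaviour's Update): a unit-length direction scaled by a float gives a vector whose length is that distance, which is exactly what Debug.DrawRay wants for its second argument.

// Toy example: the direction vector's length doubles as the ray's length.
Vector3 dir = Vector3.down;   // unit-length direction
float dist = 2.5f;            // how far the drawn ray should reach
Debug.DrawRay(transform.position, dir * dist, Color.green);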

As it turns out, if you have the ray you have all the info you need to draw it. Take this ray:

Ray shotRay = Camera.main.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0.0f));

Since that's just specifying a point on the main camera, from which the ray will travel until it hits something, it wasn't initially clear to me how to get the position and direction/distance information out of it. Luckily, the Ray object itself carries some of this: shotRay.origin can be used as the originating point. The second argument can be obtained by subtracting the shooter's position from the point the shot hit. So, assuming this script lives on the shooter, you can do this:

Ray shotRay = Camera.main.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0.0f));
RaycastHit info;
if (Physics.Raycast(shotRay, out info))
{
    Debug.DrawRay(shotRay.origin, info.point - transform.position, Color.blue, 3.0f);
}
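An equivalent way to draw the same segment, for what it's worth: the RaycastHit also stores the distance from the ray's origin to the impact point, and the ray's direction comes back normalized, so the vector can be built without referencing transform.position at all.

// Same blue ray: unit direction scaled by the recorded hit distance.
Debug.DrawRay(shotRay.origin, shotRay.direction * info.distance, Color.blue, 3.0f);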

And for three seconds you'll get a visual image of your raycast in the scene view. I was emboldened enough by this to wonder if I could send hit particles out at an angle relative to the incoming angle, and sure enough there's a Vector3.Reflect. The RaycastHit info variable above comes back into play, as you can get the incoming hit direction like this:

Vector3 inDir = info.point - player.transform.position;

and that same handy info variable contains the normal I need for the reflection, that is to say a line pointing straight out of the surface we hit. Since I don't care where this line ends, I was able to cheat in a magnitude for the debug line by multiplying it by an int, which is a little crazy but I'm just gonna go with it:

Vector3 reflectionDirection = Vector3.Reflect(inDir, info.normal);
Debug.DrawRay(info.point, reflectionDirection*100, Color.cyan, 3);
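Actually spitting the particles out along that direction could then look something like this. This is a hypothetical sketch, not code from the project: "hitParticlesPrefab" is an assumed public GameObject field on the same script, assigned to some particle prefab in the inspector.

// Hypothetical: spawn an assumed particle prefab at the hit point,
// oriented along the reflected direction.
if (hitParticlesPrefab != null)
{
    Instantiate(hitParticlesPrefab, info.point, Quaternion.LookRotation(reflectionDirection));
}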


This gives us the rays depicted at the top of this post. I'm thinking (hoping) that armed with this I can turn back to the platformer and easily get the collision behavior I want. As soon as I'm done with this other idea I have about making these big blocks destructible. Shouldn't take but a few minutes...