Sunday, December 15, 2013

Better


I'm now controlling the layer collision by use of a raycast, and the results are better but not perfect. I cast from about knee level down to a bit past foot level, and if there's a collision there I turn the branches on, otherwise they are off. This does neatly solve the problem of the player banging into the sides of colliders on the down arc of a jump, but it introduces another problem: some of the branches are close enough together vertically that when the player jumps, his head hits the branch above before the ray leaves the branch below, and thus the overhead branch doesn't get turned off. I've tried tweaking the length of the ray but it has to be at least a certain length, otherwise certain slanted platforms have trouble registering and the player falls through.

The best thing to do here is probably to raise the problem branches so that they don't cause as much head-bumpage. It's an interesting lesson in that everything has to react to everything else, the movement mechanisms and the environment design are totally interdependent. Well, it was about time for an art push on this level anyway, so I guess it's not too much trouble to move some tree branches around. Here's the final version of the raycast script:


using UnityEngine;
using System.Collections;

public class footShooter : MonoBehaviour {
 RaycastHit info;
 CharacterController controller;
 Vector3 center;
 float floorDist;
 CapsuleCollider cap;

 Vector3 lowCenter;
 void Start () 
 { 
  cap = gameObject.GetComponent<CapsuleCollider>();
  controller = gameObject.GetComponent<CharacterController>();
  floorDist = controller.height/3;
 }

 void Update () {
  center = transform.position + controller.center; //world-space center of the controller
  lowCenter = new Vector3(center.x, center.y - (controller.bounds.size.y/3), center.z);
  Ray footRay = new Ray(lowCenter, Vector3.down);

  if ((controller != null) && (cap != null))
  {
   if (!controller.isGrounded)
   { 
    Debug.Log("centre = " + center);
    if(Physics.Raycast(footRay, out info, floorDist))
    {
     Debug.Log("red ray");
     Debug.DrawRay(footRay.origin, footRay.direction*floorDist, Color.red);
     Debug.Log("hitting branches");
     Physics.IgnoreLayerCollision(10, 13, false);
    }
    else
    {
     Debug.Log("blue ray");
     Debug.DrawRay(footRay.origin, footRay.direction*floorDist, Color.blue);
     Debug.Log("ignoring branches");
     Physics.IgnoreLayerCollision(10, 13, true);
    }
   }
  }
  else
  {
   Debug.Log("controller is NULL");
  }
  
 }
}


Sunday, December 8, 2013

The Cult Of Ray


Raycasting is a common, essential technique in modern game making that I had until now more or less avoided. I'd need it in order to make my one-way platforms work as intended, but I was having trouble in the context of the platformer so I decided to back up and go to a blank project, and to kill two birds it seemed like a good moment to set up something I've had in my notes for awhile: a template for experimenting with FPS gameplay in Unity.

As laid out by this tutorial, a basic FPS setup is no further away than a plane, a camera, and a First Person Controller dragged out of the Standard Assets folder into the scene view. Before I could get to the raycasting, though, I needed to scratch a different itch: I had become spoiled by modern PC FPS titles into expecting gamepad support as a seamless alternative to mouselook. 

Before you invite me to turn in my PC gamer badge and graphics card, consider a game like the excellent Metro: Last Light, which I'm currently playing through thanks to the fall Steam sale. I've experimented with both control schemes, and while, yes, you can't beat a mouse for combat turning and aiming speed, when you're involved in an "adventure FPS" with a lot of different player verbs, it just feels really natural to have something like "wipe condensation from gas mask" on a shoulder button. Maybe this goes back to my lack of skill as a typist, but unless we're talking about WASD I'm never quite certain of my keyboard execution in a twitch situation. The controller is also just more interesting to me for whatever reason. 

So I had to have my 360 controller, and I did it by replacing the script MouseLook (part of that Standard Assets character package) with StickAndMouseLook, which goes something like this:

using UnityEngine;
using System.Collections;

[AddComponentMenu("Camera-Control/Stick and Mouse Look")]
public class StickAndMouseLook : MonoBehaviour {

 public float sensitivityX = 15F;
 public float sensitivityY = 15F;

 public float minimumX = -360F;
 public float maximumX = 360F;

 public float minimumY = -60F;
 public float maximumY = 60F;

 float rotationY = 0F;
 string[] jNames;
 
 bool isGamepad;
 
 void Update ()
 {
  jNames = Input.GetJoystickNames();
  if (jNames.Length > 0)
  {
   //Debug.Log(jNames[0]);
   isGamepad = true;
  }
  else
  {
   //Debug.Log("no gamepad");
   isGamepad = false;
  }
  
  if (isGamepad)
  { 
   float rotationX = transform.localEulerAngles.y + Input.GetAxis("rightStickX") * sensitivityX;
    
   rotationY += Input.GetAxis("rightStickY") * sensitivityY;
   rotationY = Mathf.Clamp (rotationY, minimumY, maximumY);
    
   transform.localEulerAngles = new Vector3(-rotationY, rotationX, 0);
  }
  else
  {
   float rotationX = transform.localEulerAngles.y + Input.GetAxis("Mouse X") * sensitivityX;
    
   rotationY += Input.GetAxis("Mouse Y") * sensitivityY;
   rotationY = Mathf.Clamp (rotationY, minimumY, maximumY);
    
   transform.localEulerAngles = new Vector3(-rotationY, rotationX, 0);   
  }
 }
 
 void Start ()
 {
  // Make the rigid body not change rotation
  if (rigidbody)
   rigidbody.freezeRotation = true;
 }
}

Of course you need those axes to actually exist, so you set them up in Unity's Input settings (Edit > Project Settings > Input).

Now you can use the mouse, plug in a gamepad and play that way, then unplug it and you're back to mouselook. Movement gets handled automatically by the character motor because it uses the horizontal/vertical input axes, which the sticks already drive as well. It's interesting to note that the triggers are read as axes themselves rather than buttons, requiring some fiddling elsewhere to get semi-auto behavior, and that the Invert checkbox is required to get "move up to look up" behavior (you know, Normal Person style) out of the Y stick. Either I set something up backwards or one of the coders behind this is one of those sickos who thinks Inverted Y controls should be the status quo. I'll give them the benefit of the doubt and assume the mistake is mine.

So, raycasts. The vexing stumbling block for me was always this: rays are invisible. Physics.Raycast takes as parameters an origin, a direction, and a distance. Debug.DrawRay can be used to draw rays, but its parameters include only an origin and a direction, no distance. The reason behind this involves some of that spooky math stuff, wherein a single vector can contain both a direction and a distance. There's some normalization math that could be involved here, but of course once I saw that I ran in the opposite direction.
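For the record, the normalization math is less spooky than it looks: divide a vector by its own magnitude and you get a direction of length 1; multiply that by a distance and you have one vector that encodes both. Here's a standalone sketch of just the arithmetic (plain C#, no Unity, invented names):

```csharp
using System;

static class RayMath
{
    // Scale a direction vector to a given length: normalize, then multiply.
    // This is why a single "direction" argument can carry both a heading
    // and a distance at once.
    public static (double x, double y, double z) WithLength(
        double x, double y, double z, double dist)
    {
        double mag = Math.Sqrt(x * x + y * y + z * z);
        return (x / mag * dist, y / mag * dist, z / mag * dist);
    }

    public static double Magnitude((double x, double y, double z) v) =>
        Math.Sqrt(v.x * v.x + v.y * v.y + v.z * v.z);

    static void Main()
    {
        var r = WithLength(3, 4, 0, 10);    // direction (3,4,0) has length 5
        Console.WriteLine(Magnitude(r));    // the scaled copy has length 10
    }
}
```

This is also why multiplying a ray's direction by a distance, as in `footRay.direction*floorDist` earlier, produces a debug line that points the same way as the ray and is exactly that long.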

As it turns out, if you have the ray you have all the info you need to draw it. Take this ray:

Ray shotRay = Camera.main.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0.0f));

Since that's just specifying a point on the main camera, from which the ray travels until it hits something, it was far from clear initially how I would get the position and direction/distance information. Luckily, the Ray object itself carries some of it: shotRay.origin can be used as the originating point. The second argument can be obtained by subtracting the Vector3 from which the shot came from the Vector3 at which the shot hit. So, assuming this script lives on the shooter, you can do this:

Ray shotRay = Camera.main.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0.0f));
RaycastHit info;
if (Physics.Raycast(shotRay, out info))
{
 Debug.DrawRay(shotRay.origin, info.point - transform.position, Color.blue, 3.0f);
}

And for three seconds you'll get a visual image of your raycast in scene view. I was emboldened enough by this to wonder if I could send hit particles out at an angle relative to the incoming angle, and sure enough there's a Vector3.Reflect. The RaycastHit info variable above comes back into play, as you can get the incoming hit direction like this:

inDir = info.point - player.transform.position;

and that same handy info variable contains the normal I need for the reflection, that is to say a line straight out of the plane we hit. Since I don't care where this line ends, I was able to cheat in a magnitude for the debug line by multiplying it by an int, which is a little crazy but I'm just gonna go with it:

Vector3 reflectionDirection = Vector3.Reflect(inDir, info.normal);
Debug.DrawRay(info.point, reflectionDirection*100, Color.cyan, 3);


This gives us the rays depicted at the top of this post. I'm thinking (hoping) that armed with this I can turn back to the platformer and easily get the collision behavior I want. As soon as I'm done with this other idea I have about making these big blocks destructible. Shouldn't take but a few minutes...

Friday, November 29, 2013

In The Trees, In The Weeds


I've entered some impossible place where every task takes twice as long as the previous task. The goal was and is still three playable levels, with a few neat unlocks and a couple more enemies. Level 2 is supposed to represent Seattle's Queen Anne neighborhood, with its imposing hill spreading up the right side of the screen. There's not much platformer action to be had on a single steep hill, so what to make platforms out of?

Pioneer Square had the easy visual metaphor of that big iron lattice pergola (yeah I had to look up what the hell that thing's called), but nothing so obvious exists in QA. Trees were an obvious solution, I remember planning to have a few in one area or another, and they are about as classic a platform-disguise trope as you can get. One problem that arose quickly is that I can't draw a convincing tree.

The method I ended up with is far from ideal but somehow the appearance was able to barely pass my internal editor. After all, I had to come up with something. I drew out the trunk and each branch, as well as four squiggly bare "sub-branches". I flipped each of the sub-branches for a total of 8, then took a green 32x32 "leaf" and used it as a stamp across all of those, as shown here:



Then each individual branch of the tree gets stamped with a selection of these sub-branches and attached to the trunk. I made each branch its own gameObject because I had this idea of scripting them all to sway gently in the breeze, which I may still do but whenever I picture it now it looks like it would be unsettling, which is maybe a point for and not against. I've thought also about doing a lighter green semi-transparent globe of leaves behind each tree to give them a more traditional tree-like appearance, but I'm not sure I could really get it right.

The tree branches also presented an opportunity for varied routes through the level, something more interesting than flat planes, so I rigged up the hinged box colliders shown at the top. Now, to do this kind of platforming right, you need the player to be able to pass through these colliders when jumping from beneath. You can't make them go to the end of each branch and then try to reach the one above; it wouldn't work. There are many ways to do this.

One important point is that each face of a collider only works in one direction. So theoretically the easiest solution would be to use, instead of box colliders, mesh colliders with a single quad sprite mesh, facing up. Player passes through from the bottom, collides on the way down.

I didn't go with this, because some early experiences and research led me to understand that single-plane collision with a 3D object in a 3D environment is inherently just a bad idea. You're subject to "tunneling", where the surface area that registers a hit is so thin that your colliding object can pass through it between the high-speed snapshots your physics simulation takes of the game state. This Unity Answers user sums it up. 

Another solution, likely quite easy for many, would be to take the standard Unity box collider, import into Blender or what have you, remove the bottom, and export back to Unity. Custom collision mesh, very nice. Problem is I don't have any skill points in 3D modeling.

"It's free, there are tutorials, you could learn to do that much in an afternoon!" Yes but I have a series of goals here and I can't be endlessly sidetracked. Maybe my next game will involve custom 3D models but this one doesn't and I need to finish it with the tools at hand. That's the theory anyway: when everything is taking twice as long as you thought, don't go looking for hours of work to put between you and the next thing, just find a better way to solve the immediate problem. You spec the talent tree you're working with the XP you have and you play your class, you don't try to be All Things At Once.

So it turns out Unity has a feature called Layer-Based Collision Detection that's actually more or less perfect for this. The tree branches go on one layer, the player goes on another, then we ask a question, is the player moving up?
 
void CheckForPassPlatform()
{
 if (!controller.isGrounded)
 {
  jumpThisPos = transform.position;
 
  if (jumpThisPos.y > jumpLastPos.y)
  {
   Debug.Log("ignoring branches");
   Physics.IgnoreLayerCollision(10, 13, true);
  }
  else
  {
   Debug.Log("hitting branches");
   Physics.IgnoreLayerCollision(10, 13, false);
  }  
  jumpLastPos = jumpThisPos;
 }
}

I probably don't need that isGrounded, really. Like I said it's more or less perfect, but the "less" happens because of my hinged collider platforms. There are occasional cases where the player is coming down from the apex of a jump that took them halfway through the platform above. The player won't hit the thing they're already in the middle of, but if they're also moving sideways they'll bump into the next piece of collider at the hinge. The player is heading down, so the collision layers are reading each other, which we would want if the player were above the collider, but since the player is to the side, we don't want this.

Well, I know there's an answer here, and I know it involves raycasting, so I've been investigating that, but I find raycasting difficult to use. The ray is invisible, so you have to draw it using Debug.DrawRay, but Debug.DrawRay takes a different set of arguments than Physics.Raycast, so how do you know your debug line is really the same as your raycast line?

I guess when the player is coming down from a jump, we raycast from the player's center to a point just beneath the player's feet. If there is a collision there, we allow the layers to collide, otherwise we don't. This should allow the player to pass through side collisions on the way down.

I really hope I don't get any cool ideas like this for level 3.

Tuesday, November 5, 2013

Detours


I knew I should get some work done over the weekend, but I couldn't bring myself to fire up the old platformer so, still slightly hungover from a marathon Path of Exile session, I set myself this question: what could I do in Unity to cobble together the basics of a 3/4-top-downish, click to move-or-kill-or-loot type game? I've been daydreaming of various lo-fi recombinations of Zelda DNA for a long time, and a cold grey Sunday seemed like the proper time to take a swing at it.

Predictably, I didn't get far enough to call the result a game, but it's certainly something that can be clicked around in, which you can do here if you so desire. A few notes follow on how it went down, what I learned and what might come after.

I flipped my tile grid from the platformer on its side, and was able to neatly arrange some colored quads as crude terrain. The tile map code will allow me to paint big maps easily, if that becomes something I'd want to do. A third person follow camera was as easy as creating a camera, positioning the view where I wanted it, and making that gameObject a child of the player. Snap!

The Diablo cursor that travels all over a 3d map in response to a 2d mouse position also turned out to be a lot easier than I anticipated, and doesn't even require a bunch of funky juggling of world and screen coords. All it takes is a raycast through the player follow camera. The conveniently provided Ignore Raycast Layer also saved me some trouble. The entire thing is two lines:

screenPos = Input.mousePosition;
mouseRay = this.camera.ScreenPointToRay(screenPos);
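
Fleshed out a touch, the whole cursor can look something like this (a sketch along the lines described above, not my exact script; cursorMarker is a stand-in name for whatever object you use as the visible cursor):

```csharp
using UnityEngine;

// Lives on the follow camera. Raycast through the camera at the mouse
// position and park a marker object wherever the ray lands.
// The marker itself goes on the Ignore Raycast layer so it can't occlude hits.
public class WorldCursor : MonoBehaviour
{
    public Transform cursorMarker;  // assign the cursor object in the inspector

    void Update()
    {
        Ray mouseRay = this.camera.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        if (Physics.Raycast(mouseRay, out hit))
            cursorMarker.position = hit.point;
    }
}
```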

transform.LookAt... seriously? You can do this? I'm going to try to pretend all that time I wasted on older projects doing Quaternion math just never happened.

Vector3.MoveTowards is also pleasingly simple. You don't, unfortunately, get any easing this way, and poking around in the math library always gives me a headache, so I eventually hacked in something I could live with: if the player is within 1 or 2 units of the destination, their speed is halved. This almost looks like a lerp if you kind of squint at it. 
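The hack is easier to show than describe. Here's a one-dimensional standalone sketch of the idea (invented names and numbers, not the project code; MoveTowards here mimics what Vector3.MoveTowards does along a single axis):

```csharp
using System;

static class MoveDemo
{
    // 1-D stand-in for Vector3.MoveTowards: step toward target, never overshoot.
    public static double MoveTowards(double current, double target, double maxDelta)
    {
        double diff = target - current;
        if (Math.Abs(diff) <= maxDelta) return target;
        return current + Math.Sign(diff) * maxDelta;
    }

    // The easing hack: halve the step size once we're within slowRadius of the goal.
    public static double Step(double pos, double target, double speed, double slowRadius)
    {
        double effective = Math.Abs(target - pos) < slowRadius ? speed * 0.5 : speed;
        return MoveTowards(pos, target, effective);
    }

    static void Main()
    {
        double pos = 0.0, target = 5.0;
        while (pos != target)
            pos = Step(pos, target, 0.5, 2.0);  // full speed far out, half speed near
        Console.WriteLine(pos);
    }
}
```

Squint and the slowdown over the last couple of units reads a lot like an ease-out.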

Some weapon experiments quickly led me to the rigidbody system. The idea of a Zel-diablo-like with real physics projectiles is intriguing, though I'm not sure it will ultimately be worth doing. My first attempts featured arrows that tumbled like poorly winged footballs, which is kind of interesting on its own... what about a game where you start out as a completely inept archer? 

The main idea I wanted to bring to life here was a mental image of the player confronted by something like a rampaging bear, which would continue to approach as the player peppered it with arrows, but the arrows would remain sticking out of the bear. The end result looks more like a purple refrigerator, and the arrows tend not to get "captured" until they've passed entirely through the target, but some of the idea made it through. When an arrow hits the purple thing, its OnCollisionEnter function grabs the arrow's rigidbody and sets it to kinematic, which disables forces, then sets the arrow's transform as a child of the enemy. Instant porcupine effect!
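
That capture step is small enough to sketch from the description above (a reconstruction, not the actual script; the "Arrow" tag check is my own assumption about how the arrows are identified):

```csharp
using UnityEngine;

// Lives on the target. When an arrow connects, freeze its physics and
// parent it to the target: the instant porcupine effect.
public class ArrowCatcher : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        // Assumed: arrow prefabs are tagged "Arrow" and carry a Rigidbody.
        if (collision.gameObject.CompareTag("Arrow"))
        {
            collision.rigidbody.isKinematic = true;   // kinematic bodies ignore forces
            collision.transform.parent = transform;   // ride along with the target
        }
    }
}
```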

There are unlimited possibilities here, beyond "bear-baiting simulator", and I hope to eventually make this into something more impressive, but I think it stands as a good example of what you can do by stumbling around in Unity for a few hours with a vague idea or two. Time to shelve it and return, slightly refreshed, to the primary project.  

Sunday, October 13, 2013

The Hard Truths of Hardware


Here, on this rocky outcropping, where starving lizard-birds circle above, we shall call an end to the mobile development branch of this project. The controls work well enough for what they are, and you can beat the first level, although the experience leaves a lot to be desired.

Merging the touch controls into the platformer project was educational in a bunch of ways.  First I learned about preprocessor directives, which allowed me to maintain a single version of the project with the same scripts for all platforms. Unity has some cool custom options here, so I was able to create #if MOBILE to enable touch controls for any arbitrary device.

I also found some ways to improve the existing code, for example when calculating movement I was using Input.GetAxis(“Horizontal”) in every expression, so it was getting called half a dozen times every update. I changed it to call that once at the top of the update, store it in a float, and use that float to do calculations for the rest of the tick. I’m willing to bet that on a modern machine this makes exactly zero difference, but it’s the principle of the thing.
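
In code terms the change is tiny; something like this (an illustrative sketch, not the project's actual movement script, with made-up speed values):

```csharp
using UnityEngine;

public class CachedAxisExample : MonoBehaviour
{
    public float walkSpeed = 4f;

    void Update()
    {
        // Before: Input.GetAxis("Horizontal") sprinkled through every expression.
        // After: read it once per frame and reuse the float for the whole tick.
        float haxis = Input.GetAxis("Horizontal");

        float move = haxis * walkSpeed;
        if (haxis != 0f)
            transform.Translate(move * Time.deltaTime, 0f, 0f);
    }
}
```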

I did find that I had made a few fundamentally good decisions. In the character control script I get input in one place, then immediately abstract it into values like float “velocity” and bool “attackInProgress” and use those to actually do stuff. This made adding a new input scheme really easy because I only had to touch one area of the code to get those initial values right.

The real education, though, was getting under the hood of what it really means to have two buttons in combination with a d-pad style arrow option. The first problem I had was that running (hold green button and arrow) wouldn't work, and I eventually saw that since I had configured both buttons as tap inputs, the game didn't know or care if the green button was being held down. The WebPlayer version reads keyboard keys with both GetButton and GetButtonDown, and has different responses to each, so I needed to expand the touch controls example script to handle both taps and holds for each button. I’ll paste the results at the bottom of this entry.  It took a lot of trial and error and still has some big gaps in functionality. This is apparently a harder problem than I originally thought.

The problem is bigger than just input logic. For example, the run/jump: I found in testing that regardless of how well it worked, the actual gesture the run/jump requires (depressing both green and yellow simultaneously) is awkward at best, I found myself bringing my hand around the device to use forefinger and middle finger, which made my other hand harder to use and just felt wrong.

The run/jump came about because, when messing with basic character movement in the WebPlayer version, I unconsciously knew that there’s nothing more natural on a keyboard than augmenting a letter key with a simultaneous shift key. We sometimes do it Multiple Times in a Sentence. Transpose that motion to a touch screen, and it’s suddenly far less natural. I finally understand how the original NES controller, as clunky as it may look now, was an engineering marvel, affording a wonderfully tactile responsiveness between tap and hold combinations on those two nubs of cheap plastic. Multiple multi-million-dollar franchises owe their existence to the joy that developers were able to wring from that dance of tap-hold, discrete-continuous, now this and that, now that and not this, now both at once, many times a second, and that’s just the player’s right thumb! But I digress.

“Transpose” is an apt word, “translate” also has some truth to it. A flat surface is not a controller, and any control scheme change involves reaching across a conceptual gap, trying to say something that’s not there to say, forgetting how many stairs there are and coming down hard on the landing.

The decision of how far to try and take this was made for me, thank goodness, by a hardware limitation. My pokey old smartphone can only handle two simultaneous touch inputs. That means that if you are running, you can’t jump. This breaks the run/jump, but like I said, executing it isn't much fun in the first place. When I hit that wall, I stepped back for a better look at the project. What was I going for here? This was originally and essentially still is a portfolio piece, playable on a web site, and adding an optional downloadable .apk was just an additional bit of frosting. You’d still need a 3rd party .apk installer to use it, and I don't really expect anyone to do so. It was more a proof of concept to say that I could do that work as well, and I think I've delivered that. I have a little more knowledge and a few more ideas for new projects. I’m very interested in the idea of mobile indie games, and now that world doesn't seem quite so foreign or daunting. I’m happy I took the time, and happier still to be shutting that tangent down in order to wrap the main game up once and for all.
using UnityEngine;
using System.Collections;

public class touchControls : MonoBehaviour {
#if MOBILE
 
 //assign 128px textures in inspector
 public Texture leftArrowTex;
 public Texture rightArrowTex;
 public Texture greenButtonTex;
 public Texture yellowButtonTex;
 
 //consumable button state values
 public bool leftFlag
 {
  get{return left;}
 }
 public bool rightFlag
 {
  get{return right;}
 }
 public bool greenTapFlag
 {
  get{return greenTap;}
 }
 public bool greenHoldFlag
 {
  get{return greenHold;}
 }
 public bool yellowTapFlag
 {
  get{return yellowTap;}
 }
 public bool yellowHoldFlag
 {
  get{return yellowHold;}
 }
 private bool left;
 private bool right;
 private bool greenTap;
 private bool greenHold;
 private bool yellowTap;
 private bool yellowHold;
 
 private Rect[] Arrows;
 private Rect[] Buttons;
 private Rect leftArrow;
 private Rect rightArrow;
 private Rect greenButton;
 private Rect yellowButton;

 private float sw; //screen width
 private float sh; //screen height
 private float bu; //boxUnit, default box measurement
 private float au; //arrowUnit, default arrow measurement 
 private Vector3 touchPos; //touch input gives us this
 private Vector2 screenPos; //gui rects need this
 
 private string debugStr;
 private WafhPlayer playscr;
 private int tCount;
 
 void Start () {
  
  sw = Screen.width;
  sh = Screen.height;
  bu = 256;
  au = 128; 

  leftArrow = new Rect(0, sh-au, au, au);
  rightArrow = new Rect(au, sh-au, au, au);
  greenButton = new Rect(sw-(au*2), sh-au, au, au);  
  yellowButton = new Rect(sw-au, sh-au, au, au);
  
  Arrows = new Rect[]{leftArrow, rightArrow};
  Buttons = new Rect[]{greenButton, yellowButton};
    
  playscr = GameObject.FindWithTag("Player").GetComponent<WafhPlayer>();
  
  debugStr = "LEFT =" + leftFlag + "\nRIGHT =" + rightFlag
   + "\nGREENTAP = " + greenTapFlag + "\nGREENHOLD = " + greenHoldFlag 
    + "\nYELLOWTAP =" + yellowTapFlag + "\nYELLOWHOLD = " + yellowHoldFlag
    +"\nhaxis = " + playscr.pHaxis + "\nisRunning = " + playscr.isRunning
    + "\nTCOUNT = " + tCount;

 }

 void Update () {
  tCount = Input.touchCount;
  HandleArrows();
  HandleButtons();
  debugStr =" LEFT =" + leftFlag + "\nRIGHT =" + rightFlag
   + "\nGREENTAP = " + greenTapFlag + "\nGREENHOLD = " + greenHoldFlag 
    + "\nYELLOWTAP =" + yellowTapFlag + "\nYELLOWHOLD = " + yellowHoldFlag
    +"\nhaxis = " + playscr.pHaxis + "\nisRunning = " + playscr.isRunning
    + "\nTCOUNT = " + tCount;
 }
 
 void OnGUI () {
  GUI.Box(new Rect(sw/2-(bu/2), 0, bu, bu), debugStr); 
  GUI.Box (leftArrow, leftArrowTex);
  GUI.Box (rightArrow, rightArrowTex);
  GUI.Box(greenButton, greenButtonTex);
  GUI.Box(yellowButton, yellowButtonTex);  
 }
 
 void HandleArrows()
 {
  if (Input.touchCount > 0)
  {
   foreach (Rect rect in Arrows)
   {
    foreach (Touch touch in Input.touches)
    {
     touchPos = touch.position;
     screenPos = new Vector2(touchPos.x, sh-touchPos.y);
     if(rect.Contains(screenPos))
     {
      if (rect == leftArrow)
      {
       if (touch.phase == TouchPhase.Ended)
       {
        left = false;
       }
       else
       {
        left = true;
       }
      }
      if (rect == rightArrow)
      {
       if (touch.phase == TouchPhase.Ended)
       {
        right = false;
       }
       else
       {
        right = true;
       }       
      }
     }
     else
     {
      if (ButtonlessTouch(screenPos))
      {
       if (rect == leftArrow)
       {
        left = false;
       } 
       if (rect == rightArrow)
       {
        right = false;
       } 
      }
     }
    }
   }
  }
  else //no touches recorded this update
  {
   left = false;
   right = false;
  }
 }
 void HandleButtons()
 {
  if (Input.touchCount > 0)
  {
   foreach (Rect rect in Buttons)
   {
    foreach (Touch touch in Input.touches)
    {
     touchPos = touch.position;
     screenPos = new Vector2(touchPos.x, sh-touchPos.y);
     if(rect.Contains(screenPos))
     {
      if (rect == greenButton)
      {
       if (touch.phase == TouchPhase.Ended)
       {
        greenHold = false;
       }
       else
       {
             
        greenHold = true;
        if (touch.phase != TouchPhase.Began)
        {
         greenTap = false;
        }
        else
        {
         greenTap = true;
        }
       }
       
      }
      if (rect == yellowButton)
      {
       if (touch.phase == TouchPhase.Ended)
       {
        yellowHold = false;
       }
       else
       {
        yellowHold = true;
        if (touch.phase == TouchPhase.Began)
        {
         yellowTap = true;
        }
        else
        {
         yellowTap = false;
        }
       }       
      }      
     }
     else
     {
      if (ArrowlessTouch(screenPos))
      {
       if (rect == greenButton)
       {
        greenTap = false;
        greenHold = false;
       }
      
       if (rect == yellowButton)
       {
        yellowTap = false;
        yellowHold = false;
       }
      }
     }
    }
   }
  }
  else //no touches recorded this update
  {
   greenTap = false;
   greenHold = false;
   yellowTap = false;
   yellowHold = false;
  }
 }
 
 bool ButtonlessTouch(Vector2 screenPos)
 {
  bool buttonless = false;
  if (!greenButton.Contains(screenPos) &&
      !yellowButton.Contains(screenPos))
  {
   buttonless = true;
  }
  return buttonless;
 }
 
 bool ArrowlessTouch(Vector2 screenPos)
 {
  bool arrowless = false;
  if (!leftArrow.Contains(screenPos) &&
      !rightArrow.Contains(screenPos))
  {
   arrowless = true;
  }
  return arrowless;
 }
 
 
 
#endif
}




Saturday, October 5, 2013

Not So Fast Buddy

After posting my touch controls script I was fooling around with it and of course found a huge bug. If you're holding down one of the arrows and then tap one of the colored buttons, the cube will stop moving. The culprit was the code that set the arrow flags false on touches received outside those arrows' rects. That code was required to support the fairly standard use case of dragging a finger from one arrow to the other; the first arrow needs to flip off in that case.

The solution I went with was to do another layer of checking on any touch outside the rect we're currently looking at: if that other touch is on a button, we can ignore it. This way we'll turn an arrow off both if you drag a finger into the other arrow and if you drag that finger out of the arrow area entirely. The only downside is that a random tap, near the top of the screen for instance, will make both arrows false. But, I might reasonably ask, what were you doing tapping up there anyway? That's not where the controls are... the menu UI elements are already responding OK to touch, so hopefully this won't break those. I guess we'll find out. Anyway, the extra code is here:

bool OrphanTouch(Vector2 screenPos)
{
 bool isOrphan = false;
 //a touch is an orphan if it isn't on either colored button
 if (!greenButton.Contains(screenPos) &&
     !yellowButton.Contains(screenPos))
 {
  isOrphan = true;
 }
 return isOrphan;
}

Wednesday, October 2, 2013

A Touching Story...


I eventually had to throw away my feverish rant about Cookie Clicker, as I felt it contained spoilers for both the video game industry and life Itself, so instead here's what I've been up to Unity-wise these past weeks.

With a few desultory mouse-clicks, I had ported my web-targeted 2d platformer to Android. Well, the appearance of it anyhow. It was still only set up to take keyboard input. No worries though, since the first step was such a breeze, implementing a simple set of onscreen controls should be a similar walk in the park, yes?

Well the first problem, when the touch input examples I copied didn't work, was that I was staring at a phone, one that wasn't responding to input, and I had no access to a debug console. That had to change first thing. 

I spent a good many hours trying to rig up live debugging with various IDEs, SDK versions, phone drivers, and even a third-party Android phone Windows driver updater, which informed me that my phone's OS was "not compatible with Windows 8 x64". Since that's what they had at Fry's, even after I asked the guy to look in the back for any Windows 7 machines he might have overlooked, that's what we're rolling with.

After a while Eclipse deigned to recognize my HTC Incredible, which I guess isn't all that incredible anymore, and I started hacking away at it. How could I set up a control scheme that would slot easily into the existing movement controls of the game I was already making? A quick check showed I was moving the player with Input.GetAxis("Horizontal") multiplied by some arbitrary speed value, so I was looking to replicate, with these two touchable arrows, a continuum between -1 and 1, which is what GetAxis("Horizontal") gives you.

So I pictured a finely grained number line clamped at -1 and 1, and set about adding or subtracting tiny numbers to and from it, with lots of little special cases like snapping it to zero when you switch directions, and letting the value drift back toward zero when not being touched. I eventually had to start learning about touch phases, but that took a while to sink in.
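That clamped number line can be distilled into a small plain-C# sketch, independent of any touch code (the `VirtualAxis` name and the step value here are mine, not the project's):

```csharp
// A sketch of the "clamped number line" idea: a virtual axis that
// ramps toward -1 or 1 while a direction is held, snaps to zero on
// a direction switch, and decays back to zero when released.
// (Names and constants are illustrative, not from the project.)
public class VirtualAxis
{
    const float Step = 0.1f;   // how far the value moves per update
    public float Value { get; private set; }

    public void Update(bool left, bool right)
    {
        if (left)
        {
            if (Value > 0) Value = 0;   // snap to zero on direction switch
            Value -= Step;
        }
        else if (right)
        {
            if (Value < 0) Value = 0;
            Value += Step;
        }
        else
        {
            // drift back toward zero when nothing is held
            if (Value > 0) Value -= Step;
            if (Value < 0) Value += Step;
        }
        // clamp at the ends, and snap near-zero float noise to zero
        if (Value > 1f) Value = 1f;
        if (Value < -1f) Value = -1f;
        if (Value > -0.001f && Value < 0.001f) Value = 0f;
    }
}
```

The resulting `Value` can then stand in anywhere the game previously consumed `Input.GetAxis("Horizontal")`.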

My first pass was full of very long functions full of if trees calling for specific global variables not mentioned elsewhere in the function, just a god awful mess, pure flailing.

Second pass was attempting to extend the Rect class as a custom UI object (lol why I dunno). This led me to learn what Sealed means, and got me a slightly better understanding of "the Unity Way", which involves hanging scripts and components on GameObjects like baubles on a Xmas tree, so I was going about the whole thing backwards. The problem with this was that the Rects in which I was drawing the textures only existed as concepts, created by a few GUI.Box calls inside OnGUI, nothing you could click on in the Inspector. The Xmas tree only gets put up at runtime, so anything you want to hang on it also has to be defined and described in script.

Eventually I had painted myself into a corner. I'm finding as this goes on that one of the important skills to master (and boy can this ever be generalized) is understanding when you're at the end of a first draft and can, not set aside or abandon that work, but tape it up to the wall or throw it onto another monitor and refer to it when building your second draft, trying to incorporate its strengths while paring out its flaws. Iteration.

I started over and thought in bigger-picture terms about what the thing was trying to do. I remembered one of the lessons from a video I watched: when you're trying to perform a complicated operation, if you spend the right amount of time and effort figuring out what your individual functions should each be and do, the actual guts of any of them turn out to be not so complicated. Organization.

Unfortunately the third attempt quickly accumulated a bunch of variables and calculations that rightly should belong elsewhere, so eventually it had to be scrapped as well. I approached the fourth draft with a flinty-eyed stare. There would be four public bools, readable but not writeable by any and every class who cared to use them. Each bool would correspond to one UI button, and the bool would indicate whether right now, at this moment, that button is receiving something that we want to interpret as a touch. All this class does is sort through all the touches by phase and turn those bools on and off as needed. Here it is:


using UnityEngine;
using System.Collections;

public class touchControls : MonoBehaviour {
 
 //assign 128px textures in inspector
 public Texture leftArrowTex;
 public Texture rightArrowTex;
 public Texture greenButtonTex;
 public Texture yellowButtonTex;
 
 //consumable button state values
 public bool leftFlag
 {
  get{return left;}
 }
 public bool rightFlag
 {
  get{return right;}
 }
 public bool greenFlag
 {
  get{return green;}
 }
 public bool yellowFlag
 {
  get{return yellow;}
 }
 private bool left;
 private bool right;
 private bool green;
 private bool yellow;
 
 private Rect[] Arrows;
 private Rect[] Buttons;
 private Rect leftArrow;
 private Rect rightArrow;
 private Rect greenButton;
 private Rect yellowButton;

 private float sw; //screen width
 private float sh; //screen height
 private float bu; //boxUnit, default box measurement
 private float au; //arrowUnit, default arrow measurement 
 private Vector3 touchPos; //touch input gives us this
 private Vector2 screenPos; //gui rects need this
 
 private string debugStr;

 void Start () {
  
  sw = Screen.width;
  sh = Screen.height;
  bu = 256;
  au = 128; 

  leftArrow = new Rect(0, sh-au, au, au);
  rightArrow = new Rect(au, sh-au, au, au);
  greenButton = new Rect(sw-(au*2), sh-au, au, au);  
  yellowButton = new Rect(sw-au, sh-au, au, au);
  
  Arrows = new Rect[]{leftArrow, rightArrow};
  Buttons = new Rect[]{greenButton, yellowButton};
   
  debugStr = "LEFT =" + leftFlag + "\nRIGHT =" + rightFlag
   + "\nGREEN =" + greenFlag + "\nYELLOW =" + yellowFlag;

 }

 void Update () {
  HandleArrows();
  HandleButtons();
  debugStr = "LEFT =" + leftFlag + "\nRIGHT =" + rightFlag
   + "\nGREEN =" + greenFlag + "\nYELLOW =" + yellowFlag;
 }
 void OnGUI () {
  GUI.Box(new Rect(sw/2-(bu/2), sh/2-(bu/2), bu, bu), debugStr); 
  GUI.Box (leftArrow, leftArrowTex);
  GUI.Box (rightArrow, rightArrowTex);
  GUI.Box(greenButton, greenButtonTex);
  GUI.Box(yellowButton, yellowButtonTex);  
 }
 void HandleArrows()
 {
  if (Input.touchCount > 0)
  {
   foreach (Rect rect in Arrows)
   {
    foreach (Touch touch in Input.touches)
    {
     touchPos = touch.position;
     screenPos = new Vector2(touchPos.x, sh-touchPos.y);
     if(rect.Contains(screenPos))
     {
      if (rect == leftArrow)
      {
       if (touch.phase == TouchPhase.Ended)
       {
        left = false;
       }
       else
       {
        left = true;
       }
      }
      if (rect == rightArrow)
      {
       if (touch.phase == TouchPhase.Ended)
       {
        right = false;
       }
       else
       {
        right = true;
       }       
      }
     }
     else //touches recorded outside this rect turn it off
     {
      if (rect == leftArrow)
      {
       left = false;
      }
      if (rect == rightArrow)
      {
       right = false;
      }
      
     }
    }
   }
  }
 }
 void HandleButtons()
 {
  if (Input.touchCount > 0)
  {
   foreach (Rect rect in Buttons)
   {
    foreach (Touch touch in Input.touches)
    {
     touchPos = touch.position;
     screenPos = new Vector2(touchPos.x, sh-touchPos.y);
     if(rect.Contains(screenPos))
     {
      if (rect == greenButton)
      {
       if (touch.phase == TouchPhase.Began)
       {
        green = true;
       }
       else
       {
        green = false;
       }
      }
      if (rect == yellowButton)
      {
       if (touch.phase == TouchPhase.Began)
       {
        yellow = true;
       }
       else
       {
        yellow = false;
       }       
      }      
     } 
    }
   }
  }  
 }
}


The concept behind the two mostly mirrored functions is that we want to record touches on the arrows if we are in any phase except TouchPhase.Ended, and we want to record touches on the buttons only if we are in TouchPhase.Began. This gives us the desired behavior where you can press and hold the arrows for continuous input, but the buttons must be tapped.

Here's the script on the cube in the middle of the screen that moves and shoots smaller cubes, as an example of how the flags are used. In both cases of course you'd have to set up your assets in the inspector.


using UnityEngine;
using System.Collections;

public class moveCube : MonoBehaviour {
 
 private touchControls hud;
 private Vector3 cubeUpdate;
 private int vroom;
 public GameObject zapFab;
 private GameObject zapClone;
 private float nextCubex;
 private float recoilTimer;
 private float recoilCap;
 
 private float haxis;  //horizontal axis bucket
 private float haxunit; //unit for altering the bucket
 private bool justShotFlag;
 
 
 // Use this for initialization
 void Start () {
  hud = GameObject.Find("Main Camera").GetComponent<touchControls>();
  vroom = 50;
  //justShotFlag = false;
  haxunit = 0.1f;
 }
 
 // Update is called once per frame
 void Update () {
  haxis = Round(haxis);

  if (hud.greenFlag){
    OnGreen();
  }
  
  if (hud.leftFlag)
  {
    if (haxis > 0){ haxis = 0; }
    haxis -= haxunit;   
  }
  if (hud.rightFlag)
  {
    if (haxis < 0){ haxis = 0; }
    haxis += haxunit;   
  }
  if (!hud.rightFlag && ! hud.leftFlag){
    if (haxis > 0) {haxis -= (haxunit);}
    if (haxis < 0) {haxis += (haxunit);}
  }
  //clamp haxis at ends
  if (haxis > 1.0f){haxis = 1.0f;}
  if (haxis < -1.0f){haxis = -1.0f;}
  //clamp haxis to zero at very close values to avoid stutters
  if (haxis > 0 && haxis < 0.001f){ haxis = 0;}
  if (haxis < 0 && haxis > -0.001f){ haxis = 0;}  
  //move cube
  nextCubex = transform.position.x + haxis*vroom;
  if (nextCubex > 486.0f){ nextCubex = 486.0f;}
  if (nextCubex < -486.0f){ nextCubex = -486.0f;}
  cubeUpdate = new Vector3(nextCubex, transform.position.y, transform.position.z);
  transform.position = cubeUpdate;  
 }
 void OnGreen()
 {
  zapClone = Instantiate(zapFab, transform.position, transform.rotation) as GameObject;
 }
 //helpers
 float Round(float num)
 {
  float rum;
  rum = Mathf.Round(num * 1000)/1000;
  return rum;
 }
}




OK, now we have a crude sort of Zaxxon clone, with no enemies, but we've demonstrated the touch input setup, so we're ready to integrate this with our platformer, right? We can set this demo aside, correct? Damn, I forgot how fun Zaxxon is... even without any bad guys it's kind of awesome... I mean how long would it take, really, to plug in a few columns that explode when you shoot them, maybe an enemy or two... in fact, I think I may have some innovative ideas that could advance the whole Zaxxon-like genre... I could always jump back to the platformer any time... This is like that part in the novel writing process where you start getting all these amazing ideas for other novels. Discipline, they say, is its own reward. We'll see. 

Monday, September 2, 2013

Scaly


Not much to report lately, I've been down in the weeds working on making all the UI elements scale and stretch properly for all possible aspect ratios, and it's been an utter pain, as I expected it would be. Nothing against Unity here, this is just one of those less-than-fun tasks that didn't seem necessary a while ago. Now that I plan to "support" mobile platforms, at least in a really basic way, this is just part of that process.

The work is not brain-busting so much as tedious. I pondered various methods and here's what I came up with: pick one square UI element (the player head on the left side). Calculate what the length of one side of it would be in relation to the height of the screen in your standard aspect ratio, the one you already like the look of. In my case this would be a 64 pixel square on a 600 pixel high screen, giving me a basic value for "headLength" of screenHeight * 0.106. I then use that headLength variable to lay out all the rest of the elements. The health bars are maybe a quarter of a headLength wide, separated by a gap of 1/16 headLength, etc. For the bus passes I start at the value of screenWidth and walk backwards across the screen to the left by various fractions of a headLength to get things placed and spaced properly. This way, when the game is switched to a 5:4 or 16:10 aspect ratio, the game gets the current screen width, calculates how long our new solomonic foot of headLength will be, then draws everything in proportion onscreen. You end up with versions of the UI that look a little different each time, but are close enough that the basic layout works across platforms. I have no idea whether this is a standard kind of method or a wacky backwards solution, and most of the Unity UI guides I have read are somewhat beyond my grasp so I just went with something that worked for this project, as I tend to do.
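The arithmetic in that scheme can be sketched as a few plain functions, assuming the 64px-on-600px baseline described above (the class and method names are mine, and the one-headLength bus pass spacing is a simplification of the "various fractions" the post describes):

```csharp
// Sketch of proportional UI layout driven by one base unit.
// Baseline: a 64px square element on a 600px-high screen, so
// headLength = screenHeight * (64f / 600f), i.e. roughly 0.106.
// Every other measurement is a fraction of headLength.
public static class HudLayout
{
    public static float HeadLength(float screenHeight)
    {
        return screenHeight * (64f / 600f);
    }

    // health bars: a quarter of a headLength wide,
    // separated by a gap of 1/16 headLength (values from the post)
    public static float HealthBarWidth(float screenHeight)
    {
        return HeadLength(screenHeight) * 0.25f;
    }

    public static float HealthBarGap(float screenHeight)
    {
        return HeadLength(screenHeight) / 16f;
    }

    // bus passes: walk left from the right screen edge; 'slot' is the
    // pass index counted from the right (spacing here is illustrative)
    public static float BusPassX(float screenWidth, float screenHeight, int slot)
    {
        return screenWidth - (slot + 1) * HeadLength(screenHeight);
    }
}
```

At a different aspect ratio only the screen dimensions change; every element is recomputed from the new headLength, so the layout stays proportional.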

It is, if nothing else, a step up from the previous solution that I thought was so elegant: arrange the UI objects in the scene view, far away from everything else, then create a UI-only camera that views them and layers that view on top of the game. This is still a cool strategy (I saw a talk by the Snuggletruck guys where they described something like this) but I'll be damned if I can get it to scale the way I need it to. In my version, the UI elements don't exist at all in scene view, they are only a collection of textures in a folder that are assembled and brought to life every tick with an OnGUI call.

That "every tick" is a big warning flag of course, and I've heard various rumblings about OnGUI being "expensive", in the sense that it's going to murder my framerate. It's true already that my mobile version is significantly slower than my webplayer version, but at this point I am at least savvy enough to beware that specter on the battlements pointing a bony finger at me and whispering "optimize prematurely!" That guy can go soak his head, he's not my real dad.

Tuesday, August 20, 2013

Square 1.2


As my old PC lay dying, I had the foresight to upload my Unity project files to a server where I could retrieve them later. A month or two passed.

I got a new PC, downloaded and installed Unity, downloaded my game project from the server where I'd stored it, and fired the whole thing up. I had lost all references.

All references. Materials no longer referred to textures. Prefabs for players and props no longer referred to the scripts that gave them motion and logic ("empty monobehavior" said the inspector). Core game objects no longer displayed UI, or tracked time, or responded to events. Everything was bright pink. Not one stone remained atop another.

If there was a way to protect against this, I still don't know what it was. The good news is that all my animations and calculations and pixel-nudges, all that information was still there. Not a single asset was lost. All the transfer did was sever every strand of the delicate spiderweb that held it all together.

Thoughts I had, working on Humpty Dumpty:

"I wonder if my increased ability to resolve common errors is less true learning and more memories of a series of brute force tricks used against the problem previously. It's not really like I understand computers a whole lot better than I used to."

"Once again I have lost work due to making changes while the game is running, and forgetting that when the game stops running all those changes go away. You would think I would remember that after a while. Unity's greatest prototyping asset is my Achilles heel."

"It's continuously weird how computer processes can echo the natural world, at least in the way things decay. Somewhere in this process something went wrong with some sprite sheets (import settings? Still don't know) and now there are blotches of discoloration on the player sprites, looking for all the world as though I'd moved them in cardboard boxes which got rained on."

"I wonder if there's a difference between true learning and memories of a series of brute force tricks used against the problem previously."

"I wonder (and this is uncharitable and tinfoil-hatty I admit) if all these broken references are an unnecessary, deliberate feature, supported in order to drive users into the arms of Unity's for-pay version control system. Like the performance monitor, it's one of those things you won't notice isn't free until you're invested to a point where it's painful to live without." (Or maybe, says a contrary voice, you lack the skills to control the integrity of your own file systems! This is the curse of the semi-educated techie: we never know for sure whether it's all our own fault, but by some definition it almost always is).

"I should just chuck this and go make some simple games that use squares and circles or whatever and then I wouldn't have these kinds of problems."

"Perhaps this pain will finally make real to me those abstract concepts of OOP hygiene like encapsulation and abstraction. The player holds a reference to the hud for updating the score. Pickups hold a reference to the scene manager object so they can flip the scene state when enough are collected. I know that somehow this is all wrong, and the way I know is that I can't get anything back online in isolation. It's like a beach-ball sized knot of Christmas tree lights with a few bad bulbs."

"Huh, the Unity folks released their Android support features for free? That shit was four hundred dollars! I wonder what it would take to put this thing on my phone?"

Not much, as it turns out. Sure, there are no onscreen controls so nothing actually works, but just the fact of this game I've been fooling around with suddenly being available in my hand is enough to give this project a much-needed adrenaline boost. This is actually kind of cool, and I applaud Unity for making it so easy. Still a little touchy about the broken references, but let's just move past it.

Sunday, June 30, 2013

Crash

The great borrowed warhorse desktop PC first slowed, then froze, then constantly needed a restart, then failed to start at all. Could be anything from a bad C drive to the CPU finally failing under the first real heat of the year.

I'm back on my trusty laptop, and Unity project progress will have to hang until I can get a decent replacement. I was in the middle of saving for something else so I'm not inclined to rush down to Fry's and take care of the problem immediately.

I'll continue to post here about other projects in the meantime, I'm sure I'll keep something going.

Sunday, June 9, 2013

One Thing Leads To Another


I've managed to completely avoid my backlog for a couple of weeks, due to some combination of
  • 60 hr work weeks 
  • Dark Souls
  • Arrested Development
  • making Counter-Strike maps
  • working on music
  • teaching myself to use Twine
  • helping a friend with a screenplay
  • "social life" 
and whatever sleep I could fit in there. Today I made myself sit down and crank out some tasks, and I ended up wrapping the sprint, plus something interesting happened.

I had a long running note on my list that the player was too robust, too hard to kill. One easy improvement was to drop the number of lives from 4 to 3, but that didn't really do it. I had set the player up with a sort of health bar system, and no enemy was likely to bump into a player more than once, so you had to kind of deliberately let enemies hit you in order to even test the health system. It looked like one of those tasks I might have to shrug at, and compensate by making the game harder in later levels.

At the same time I was finally getting around to drawing three different bullet types for the three playable characters. The bullet, as I may have mentioned, was a total freebie, all I did was draw a very small texture on it and then I scale the object up as it moves, bounce it off things, and it looks moderately OK, at least in context with the other graphics.

When I had plugged these in and was messing around in-game to see how they looked (I eventually ended up darkening the background tile spritesheet so they would stand out more), I was brought uneasily back to another design problem from earlier in the project: I had this sort of neat feature of sound-wave like bullets bouncing off things and each other, but apart from the fun visual you get when you spam a bunch of these, there's not really any point to it, and the player can just wander around the map spewing these cones of death that obliterate any enemy in sight. I had turned over the idea of limiting the player's shot frequency or introducing some kind of "energy" mechanic that shots would deplete, but that all felt pretty half-assed and I was loath to go there.

Now, these two problems came together in my mind and I impulsively made two changes: I added the player bullet to the list of things that would damage the player, and I made it so that the player would lose their weapon when damaged, rather than when killed.

Suddenly, I had a different game. The character could still greet every threat with a massive fusillade of sound waves, but when those waves bounced back the player could get hit, which would remove their weapon. The player can pick up another weapon, but that means finding the right platform and making it there without being touched by an enemy, and doing so will eat up precious time ... suddenly there is some gameplay where there wasn't any before. It's not the most compelling experience in the world, sure, but now there's something to do, there are risks, there are potential strategies even.

There are two things that I intend to take away from this:
  • You don't have to solve every problem right now. If I had forced my way to a solution to the "player too robust" issue earlier, I might not have had the opportunity to try out this new mechanic. Because I trusted I'd be able to improve it eventually, and kept my mind open, I was able to recognize a solution when it presented itself.
  • You do have to keep all the various aspects of your project in mind, as much of the time as you can. This "problems solving each other" thing only happens when you are simultaneously down in the weeds and looking around at the bigger picture. It's easy to put blinders on and just chip away at a single feature until it's finished, but it might make more sense, and produce better results, to see all features as co-dependent, and allow changes in one to suggest possible refinements to any or all of the others. 


Monday, May 27, 2013

Lerps of Faith


I've been stuck in a test level for weeks - weeks! I finally got something together that I'm willing to live with, but I still don't like it much. The simple-sounding problem was that I needed, in various places, to move an object from point A to point B, in some cases speeding up as it leaves A, in other cases slowing down as it approaches B. Thank Christ I didn't need both at the same time or I'd still be working on it.

Several game development environments support this sort of thing natively with easing functions, but Unity is apparently not among them. Seems like an odd thing to leave out, but the Unity philosophy seems to be that if people want something badly enough, some user will eventually code it up and offer it to everyone else, possibly for a profit, and for this use case one need look no further than iTween, or if one's pockets are empty one could conceivably go to the community and base a solution on something like MathFx.

Granted, there's also the option of hooking my objects up to Unity's animation system, but that felt like using a bazooka on a mosquito, and I'm not sure the kind of easing I want is easy to get to. I had a hunch that I could probably get close enough to what I needed without anything so complex. Turns out I was sort of right, the Lerp function took care of me on the slowing down side, but the speeding up side took a little more head-scratching. Below is what I came up with, you'd attach it to a cube or whatever and then flip the bool to switch between modes.

This was one of those weird little bottleneck problems that is pretty unimportant in the grand scheme of things, but somehow had the power to bring my project to a grinding halt because it felt like it ought to be easy, and not being able to solve it had cascading negative effects on my confidence and motivation. This solution still looks a little weird, but no weirder than anything else in my project, so I'm calling it good.

using UnityEngine;
using System.Collections;

public class test_smoothlerp : MonoBehaviour {

 GameObject cube;
 Vector3 beginPoint;
 Vector3 endPoint;
 float startTime;
 float tripDist;
 float acc;
 bool speedUp = true;

 void Start () {
  cube = GameObject.Find("Cube");
  print ("cube = " + cube);

  tripDist = 150.0f;

  beginPoint = cube.transform.position;
  print ("beginPoint = " + beginPoint);
  endPoint = new Vector3(cube.transform.position.x + tripDist, cube.transform.position.y,
   cube.transform.position.z);
  print ("endPoint = " + endPoint);

  acc = 0.01f;
 }

 void Update () {
  if (speedUp)
  {
   //speed up
   if (acc < 1.0f)
   {
    acc += 0.01f;
   }
   Vector3 nextPos = new Vector3(cube.transform.position.x + (tripDist*acc), 0.0f, 0.0f);
   print (nextPos);
   if (nextPos.x < endPoint.x)
   {
    cube.transform.position = Vector3.Lerp(beginPoint, nextPos, Time.time);
   }
   else
   {
    print ("done");
   }
  }
  else
  {
   //slow down
   cube.transform.position = Vector3.Lerp(cube.transform.position, endPoint, Time.time);
  }
 }
}
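For what it's worth, the textbook route to both behaviors is to ease the Lerp parameter itself: square a 0-to-1 time fraction for ease-in (speeding up), or flip-and-square it for ease-out (slowing down). A generic sketch of that idea, not the project's code:

```csharp
// Easing by remapping the interpolation parameter t in [0,1].
// In() starts slow and speeds up; Out() starts fast and slows down.
public static class Ease
{
    public static float In(float t)  { return t * t; }
    public static float Out(float t) { return 1f - (1f - t) * (1f - t); }

    // Plain linear interpolation from a to b by t.
    public static float Lerp(float a, float b, float t)
    {
        return a + (b - a) * t;
    }
}
```

An object travelling from `a` to `b` over `duration` seconds would then move with `Ease.Lerp(a, b, Ease.In(elapsed / duration))`, which speeds up as it leaves `a`; swapping in `Ease.Out` slows it into `b` instead.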

Sunday, May 5, 2013

Float Downstream



Fear - fear gripped my heart in its clammy fist. After making a new WebPlayer build to replace the somewhat outdated one on my webpage, I fired it up and found that it didn't work. I realized I hadn't tried playing the game in an external build for quite some time. The game worked when played in the Unity Editor's play mode, and that should just be exactly the same as a built version, right? RIGHT??

The game's main menu and auxiliary screens seemed to work fine, but when starting a new game, the player would be stuck at the character input screen, as no amount of clicking would actually select a character and launch the first level. This worked perfectly fine in the editor. The setup is, a UI script that lives on the Main Camera in this scene Instantiates all three of the player prefabs at positions based on the screen size, and hangs a script called UI_dummyPlayer on each of them. That script just has an OnMouseDown event that fires whenever someone clicks on the GameObject that the script lives on, loading the next level with the appropriate character as the player.

A platform inconsistency is by nature a painful problem. I started looking for diagnostic tools. The first was the fact that you can right-click on your game in the WebPlayer and bring up a crude development console, pictured above, which in my case read

MethodAccessException: Attempt to access a private/protected method failed.

and helpfully pointed me to the very function causing the problem. The two potential culprits in there are the DontDestroyOnLoad call and my XML Serialization stuff... I started to get a sinking feeling. I quickly narrowed the problem down to these two lines:

seattle = City.Load(Path.Combine(Application.dataPath, "City.xml"));
seattle.Save(Path.Combine(Application.dataPath, "seattle.xml"));


and with some searching online I started to piece things together. Unity's WebPlayer has some built in security features that stop you from reaching into code that is stored in certain ways. I did a quick test on Path.Combine alone, as I hadn't used that before either, but the WebPlayer was fine with it. It was balking at my City.Load function, which I talked about in a previous post. It's another case of using some code I don't understand well, but I understand well enough that the WebPlayer is blocking me from access to the game's underlying file structure, which would most likely disallow things like

using (var stream = new FileStream(path, FileMode.Open))

and there are plenty of threads online about this and they all die out pretty quickly when a senior user steps in to say No, you can't do this in WebPlayer, it would in fact be a glaringly dangerous security flaw if you could.

One post suggested a potential solution: if I stored my xml files in the same directory as my game on the website, I could use Unity's WWW functions to read that xml into a Unity object without touching the internal filesystem. The effect would be the same. Unfortunately, the example code for this employed XmlReader, where I was using XmlSerializer. Also, the process outlined involved junking all of my Xml code, and one of the other posts I read seemed pretty confident that I could still use the XmlSerializer class from within WebPlayer. If the FileStream was indeed the problem I might be able to keep most of my code.

The breakthrough finally came as a result of this post, where the suggestion was to use XmlSerializer in connection with Resources.Load and something called a TextAsset to bypass use of the prohibited FileStream class. Other exotic tools in play include MemoryStream and Encoding.UTF8 ... we're somehow simulating a streaming file operation within the game's runtime memory, which I find baffling and kind of magical. We have replaced this:

var serializer = new XmlSerializer(typeof(City));
using (var stream = new FileStream(path, FileMode.Open))
{
  return serializer.Deserialize(stream) as City;
  stream.Close();
}

with this:

City loadedCity = null;
TextAsset myAsset = (TextAsset)Resources.Load(fileName, typeof(TextAsset));
byte[] bytes = Encoding.UTF8.GetBytes(myAsset.text);
using (MemoryStream stream = new MemoryStream(bytes))
{
  loadedCity = (City)(new XmlSerializer(typeof(City))).Deserialize(stream);
}
return loadedCity;


and lo and behold, the version built for the webplayer runs without error, characters are selectable, xml is deserializable, the sun is shining, and all is right with the world. I'm taking the rest of the day off.

Thursday, May 2, 2013

Orange You Happy Now



Everything was going fine until I hit the color orange.

A little background here. In order to complete a level, the player needs to pick up five bus passes. Each bus pass has associated with it two particle effects, one a rising stream of particles for the at-rest state, as pictured above, and one a sort of blooming effect on the player when the thing is picked up. Both of these are the same color as the bus pass.

The particle systems are attached to GameObjects which are then stored as prefabs. The bus pass itself is placed in the scene view with a pickup script on it, and I toggle which color this particular pass will be via the Inspector, as the list of bus pass colors is an enum within that script. When the game starts and the bus pass script runs, it looks at that variable to find out what color it ought to be, then switches to the appropriate frame of its RagePixel sprite. Each bus pass uses the same sprite, which just has five frames of the same image in different colors.
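That enum-to-frame lookup amounts to something like the following sketch (the enum members, their order, and the SpriteFrameFor helper are all illustrative stand-ins; the real script hands the resulting index to its RagePixel sprite):

```csharp
// Sketch of the pickup color scheme: an enum exposed in the
// Inspector, mapped at startup to a frame index in a sprite sheet
// whose five frames are the same image in five different colors.
// (Enum values and frame order are hypothetical, not the project's.)
public enum BusPassColor { Green, Yellow, Orange, Purple, Blue }

public static class BusPassSprites
{
    // frame index in the shared five-frame sprite sheet
    public static int SpriteFrameFor(BusPassColor color)
    {
        return (int)color;
    }
}
```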

I made the ambient particles first, and I just set the color of each directly in the particle system before making the prefab, so I ended up with five different ones like "prefab_particle_buspass_orange" and "prefab_particle_buspass_green". Each prefab was assigned to its corresponding bus pass through an inspector variable, and each pass Instantiated that prefab during its Start function. Doing it that way once was, I felt, an acceptable level of laziness, but when it came time to do the bloom effect, I decided I had better just make one, "prefab_particle_buspass_pickup_any", and then programmatically change the color. After all, each bus pass knew its own color, so it should be easy enough to use a blank white material and just throw RGB colors at it at runtime.

My first "huh" moment was getting the message "UnityEngine.Color does not contain a definition for 'purple'." I must be spoiled by Unity's easy access to so many MSDN libraries, but I figured a robust suite of crayola colors in the API would be more or less de rigueur. Not the case this time, but no big deal. Since I'm using C# and Color is a Struct like Vector3, I have to remember to replace

Color purple(128.0f, 0.0f, 128.0f, 1.0f);

with

Color purple = new Color(128.0f, 0.0f, 128.0f, 1.0f);

but again, all is well, and my purple looks purple in-game! Well, a little pink but whatever. Let's try a classic Cadmium Orange:

Color orange = new Color(255.0f, 97.0f, 3.0f, 1.0f);

This comes out yellow.
Undeniably, unsquint-at-ably, absolutely damned yellow.
I thought for a while, and I spent maybe a bit too much time at Wikipedia's unusually fascinating page for the color orange, and I thought some more, and eventually I did the right thing, which was to resort to googling ever more plaintive rephrasings of one's problem and clicking on Unity Answers links until something gives.

...No, I'm kidding, the right thing to do is check the documentation, which in my case would have easily shown this, for example:

Yellow. RGBA is (1, 0.92, 0.016, 1), but the color is nice to look at!

Unity doesn't use RGBA values of 0-255, it uses normalized values of 0-1. Maybe that's a Scandinavian thing? I do like the consistency of it, Alpha is usually going to be 0-1 anyway... I tried to learn a bit more about this but I soon wandered into a forbidding land of shadows and had to turn back. Not trying to be incurious here, but your basic safety orange should not require post-graduate math.

Unfortunately my "what if this works" first stab of just dividing each of the values by 255.0f gives us roughly

Color orange = new Color(1.0f, 0.3803f, 0.5019f, 1.0f);

Which renders onscreen as a sort of ill salmon color. Then suddenly, in the next tab, a light shone upon me, for the good people of Unity had, by their own divine providence, thought to include a built-in type that does this exact conversion.

Moments later I had

Color orange  = new Color32(255, 97, 3, 255);

Which pops out blazing orange. I believe it was Ben Franklin who said, "I only try to understand something complicated until I figure out something simpler that will allow me to stop trying to understand the more complicated thing." As for the mystery of why my totally wrong purple was somehow sort of purple, I guess if you overflow the bounds of whatever kind of structure Color is, it just clamps at the max value, like a cup of coffee being filled by someone who has fallen asleep. (128, 0, 128, 1) is to Unity the same as (1, 0, 1, 1), which is Magenta, which looks sort of like purple, in a certain light.
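To put the whole saga in one place, here's a minimal sketch of the two constructors side by side. The ParticleSystem bit assumes this script sits on the pickup particle object, and it uses the current main-module API (which differs from the API of this era), so treat it as illustrative rather than my actual script:

```csharp
using UnityEngine;

public class PickupColor : MonoBehaviour
{
    void Start()
    {
        // Color wants normalized 0-1 floats...
        Color orangeFloat = new Color(1.0f, 97f / 255f, 3f / 255f, 1.0f);

        // ...while Color32 takes the familiar 0-255 bytes and converts
        // implicitly, so it can go anywhere a Color is expected.
        Color orangeByte = new Color32(255, 97, 3, 255);

        // Throw the RGB at the blank white particle material at runtime.
        // (Assumes a ParticleSystem component on this same object.)
        var main = GetComponent<ParticleSystem>().main;
        main.startColor = orangeByte;
    }
}
```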

Sunday, April 14, 2013

T



I wanted to get a little deeper into some methods for storing and retrieving information. With some research, I found that in CS terms I was thinking about data structures, so I hopped on my favorite internet book ordering behemoth and in a few days I had a copy of Ron Penton's Data Structures for Game Programmers. It comes highly recommended by folks at work, some of whom are even thanked in the front pages, so I know I'm on the right track with this one.

One tiny concern was that the book's examples are in C++, while I'm working with Unity in C#. Well, how big a deal could it be? I'll just re-write the example code and Bob's your uncle. It took me about a dozen pages to get in trouble.

Ron wants you to be sure and understand a few things up front. The first is big-O algorithm complexity analysis (which I'll tackle in another post ... someday) and the other is templates. These are functionally equivalent to C#'s generics, right? No problem. The first example I tried to port over was a C++ function for adding either floats or ints together, depending on which was passed in. Well, it turns out in C#, things like this

public T Sum<T>(T p1, T p2)
{
    T sum;
    sum = p1 + p2; // compile error: operator '+' cannot be applied to operands of type 'T'
    return sum;
}

just straight up don't work. Why not? Further research led me to an interview with lead C# architect Anders Hejlsberg, whom I will quote on this very topic:
"...in C# generics, we guarantee that any operation you do on a type parameter will succeed. C++ is the opposite. In C++, you can do anything you damn well please on a variable of a type parameter type. But then once you instantiate it, it may not work, and you'll get some cryptic error messages. For example, if you have a type parameter T, and variables x and y of type T, and you say x + y, well you had better have an operator+ defined for + of two Ts, or you'll get some cryptic error message. So in a sense, C++ templates are actually untyped, or loosely typed. Whereas C# generics are strongly typed."
Undaunted (the book is big and wasn't cheap), I dove further online, turning up a set of C# files called the Miscellaneous Utility Library, put together by a Google engineer named Jon Skeet. He and Marc Gravell (one of the Stack Overflow guys) put together this solution for doing what they call "maths" on their side of the pond in the context of generic classes in C#. A small download and a using statement later, I was stuck again, as the Unity compiler didn't want to recognize the construction Operator<T>, for reasons that still hover slightly beyond my understanding.

The MiscUtil pages reference an article by one Rüdiger Klaehn, a "freelance developer in the space industry", which sounds like the coolest job ever. He articulates the problem succinctly:
"To constrain type parameters in C#/.NET, you specify interfaces that the type has to implement. The problem is that interfaces may not contain any static methods, and operator methods are static methods."
Rüdiger presents two solutions, one of which I sort of understand and the other, which is far more performant, I don't understand at all. Fortunately I doubt I'll need to use enough numerical generics per frame to cause a slowdown in Unity (how many would that take, I wonder) so I felt confident going with the first option.

I was mildly dumbstruck by the idea that you could just declare a function without a body like this:

public abstract T Add(T a, T b);

as long as it's something abstract that you plan to override. I was even more nonplussed by the idea that I could write

struct Int32Calculator : ICalculator<int>
{
    public int Add(int a, int b) { return a + b; }
}

to teach generic code how to do arithmetic on plain ints, bolting my own little calculator onto the side of the type system as it were. That's really cool.

public class CalcMethods<T> where T: new()
{
//a generic adding method + whatever else you need
}


I now gather that the where T : new() part is a generic type constraint, a promise that T has a parameterless constructor so the class can create instances of it. I'd half-recognized it as one of those lambdas, code tucked into a definition, but it's a different beast, and it's still blowing my mind.

In any event, I now have a function that takes a List of either floats or ints, and adds them together, and it's easily extensible to whatever other kinds of numbers might come up, so I guess I'm ready to continue. At this rate my side-quest through this data structures book will probably take about a decade. Better brew another pot.
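For what it's worth, here's roughly where I ended up, following Rüdiger's first option. The names (ICalculator, IntCalculator, CalcMethods) are mine, not his, so treat this as a sketch of the pattern rather than his exact code:

```csharp
using System.Collections.Generic;

// Interfaces can't contain static methods, so the operator gets
// wrapped in an instance method on a tiny "calculator" struct.
public interface ICalculator<T>
{
    T Add(T a, T b);
}

public struct IntCalculator : ICalculator<int>
{
    public int Add(int a, int b) { return a + b; }
}

public struct FloatCalculator : ICalculator<float>
{
    public float Add(float a, float b) { return a + b; }
}

// The new() constraint is what lets this class conjure its own
// calculator instance without knowing the concrete type up front.
public static class CalcMethods<T, TCalc> where TCalc : ICalculator<T>, new()
{
    public static T Sum(List<T> values, T zero)
    {
        TCalc calc = new TCalc();
        T total = zero;
        foreach (T v in values)
            total = calc.Add(total, v);
        return total;
    }
}
```

Usage looks like CalcMethods<int, IntCalculator>.Sum(myInts, 0) or CalcMethods<float, FloatCalculator>.Sum(myFloats, 0f).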



Friday, April 12, 2013

Eat Your Serial



Starting in on the process of making Level Two is a big wrenching moment, kind of like seeing my 2D game in perspective view. It becomes obvious how much is assumed, hardcoded, made of magic numbers. Wires must be pulled out. Questions as simple as "where does the player start" might have different numerical answers for each level. This is one of the most common questions on public game dev forums: "I have a bunch of information I need to put in the game, like a long list of magic spells with all their damage and cooldown numbers, how do I represent it?" In my case it's a fairly small list of things like the scene's loadable path, where the bus should pause to pick the player up, etc. but the principle is the same. The actual process I still don't really get; it's mostly cribbed from a few different tutorials, which have a few implementation bits in common but diverge otherwise.

I used an XML file and the System.Xml.Serialization namespace. One of the funny things about Unity is that as often as not when I'm leveraging its powers to do something I'm actually just leveraging the standard MSDN libraries, which hey, System.Xml has more things you can do with an XML file than I'll ever want or need, so why reinvent the wheel? Like I said, the tutorials I checked all have their own take. There's always an XmlSerializer and a FileStream, but sometimes there's an XmlTextReader (which I didn't end up needing), sometimes an XmlNodeList is employed (didn't use that either), but the end result is what matters, and what I got works. Also kind of cool that I have to use stream.Close() because I'm in one of the few corners of C# where cleanup isn't fully automatic. It's like, retro vintage, man! The only annoying bit is that Unity's own types don't appear to play nicely with XmlSerializer, so if you need to pull something like a Vector3 you're stuck hacking in translation functions that will grab three float nodes and smoosh them together, or something equally ugly.
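The shape of what I ended up with, reduced to a sketch. The field names here are made up for illustration; the real file has the loadable path, the bus pause point, and friends:

```csharp
using System.IO;
using System.Xml.Serialization;
using UnityEngine;

// XmlSerializer wants a public parameterless constructor and public
// members, so this stays a plain bag of fields.
public class LevelData
{
    public string scenePath;
    public float busStopX;
    public float busStopY;
    public float busStopZ;

    // The ugly translation function: smoosh three float nodes
    // back together into the Vector3 that Unity actually wants.
    public Vector3 BusStop()
    {
        return new Vector3(busStopX, busStopY, busStopZ);
    }
}

public static class LevelLoader
{
    public static LevelData Load(string path)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(LevelData));
        FileStream stream = new FileStream(path, FileMode.Open);
        LevelData data = (LevelData)serializer.Deserialize(stream);
        stream.Close(); // one of the few corners where you clean up yourself
        return data;
    }
}
```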

I'm really grappling with the attribute syntax as well (the square-bracket metadata you hang above classes and fields), and I'm getting a mental picture of attributes as sort of magnets that allow you to attach properties in your program to external data that may change outside the scope of the program in ways you don't want to worry about. Probably not exact, but a good enough image for now.

The sudden possibility of storing and using large amounts of easily tweaked data is of course leading me to all sorts of other ideas, but no, for now we will stick to the plan and the schedule. Last sprint wrapped, a few things got punted. Inevitable.

I am starting to understand enough coding to sometimes sense that I am making a poor decision, although not enough to understand what the preferable course might be. My options / death / audio menu is a flying carpet made out of GUILayout.BeginArea and transform.position.y, where I'm doing a three-way switch in the Update() to decide which of three sets of (crudely laid out) text buttons to show. I've always had trouble with "layout languages" like CSS, or basically anytime I have to think in interconnected scaling squares with various interdependent attributes ... makes my head hurt. I've hacked it up enough to work, but I don't know, I might return to it.
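For the curious, the flying-carpet menu boils down to something like this. Panel names and layout numbers are stand-ins, not my actual values:

```csharp
using UnityEngine;

public class MenuCarpet : MonoBehaviour
{
    enum Panel { Options, Death, Audio }
    Panel current = Panel.Options;

    void OnGUI()
    {
        // The real script also slides transform.position.y around to
        // fly the menu in and out; omitted here for brevity.
        GUILayout.BeginArea(new Rect(20, 20, 200, 300));

        // The three-way switch deciding which button set to draw.
        switch (current)
        {
            case Panel.Options:
                if (GUILayout.Button("Audio")) current = Panel.Audio;
                if (GUILayout.Button("Quit")) Application.Quit();
                break;
            case Panel.Death:
                if (GUILayout.Button("Restart")) Debug.Log("restart level");
                break;
            case Panel.Audio:
                if (GUILayout.Button("Back")) current = Panel.Options;
                break;
        }

        GUILayout.EndArea();
    }
}
```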

Mostly though it's been sweeping up stuff like the transitions from one level phase to another - making sure we don't let the player move and shoot when the level end animation is playing, stopping some bugs where bullets got huge and lived forever, trying slightly to improve the art, various little tweaks.

It's going to be that way for a while, I'm looking down the list and it's a lot of menu this and the skeleton dies wrong that, and move the camera when you do the other thing. It's the long bleak desert of the real, Neo, it's game development.

No, once I get the structure set up to slip between levels I can plow through the art and design for the other neighborhoods and that will be a lot of fun. I just need to work a little faster...

Monday, March 25, 2013

Back in the Triple Digits



Just truckin through my backlog, on my way to full vertical slice, and not that far off either. The shooting and jumping aren't exactly ... inspiring, but at least they work. I spent a lot of time fooling with gravity when I was making the floor and platform tiles, trying to find that right scale for the "running jump" mechanic. The way I wanted it to work, you could complete the level just walking, like if you didn't ever realize holding the fire button makes you go faster (I do change the animation, I tried to make it obvious), you should be able to get all five bus transfers just by jumping normally, although you'd have to take some circuitous routes and it would be a pain in the ass. At the same time the running jump couldn't be too powerful, or since the platforms are so close together you get this queasy thing where the player just jumps toward the side of a platform, slides up, and ends up standing on top, which isn't very jump-like. I tried adding a Rigidbody component for physics jumps in the hopes of getting something springier, but the intersection of the Rigidbody and character controller was causing some Exorcist-like behavior so I bailed on that one. I'm not thrilled with where it's at now, but I can live with it a while longer.

Anything the player can touch is a primitive 32x32 cube displaying a frame (via a scaled and offset material) of one of two 256x256 tile sheets. The buildings and clouds are big textures (the clouds are 4096 wide) painted on scaled up quads, and scripted to move in parallax to the player. Most of the collision volumes are cubes the size of the tiles, but platforms have scaled down collision boxes, as do some of the objects. I'm still using the old pickups but I'll probably make them tiles too eventually. The frame rate hangs reliably in the triple digits. This concludes, knock on wood, the technical implementation of the guts of the game. The rest is window dressing, oh and design.
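The scaled-and-offset material trick, in sketch form. This assumes 32-pixel frames on a 256-pixel sheet counted from the bottom-left; the script name and fields are mine:

```csharp
using UnityEngine;

public class TileFrame : MonoBehaviour
{
    public int tileX; // column on the sheet, 0-7
    public int tileY; // row on the sheet, 0-7

    void Start()
    {
        const float frac = 32f / 256f; // one frame is 1/8 of the sheet

        Renderer rend = GetComponent<Renderer>();
        // Shrink the UVs down to a single frame, then slide them over
        // to the chosen cell of the tile sheet.
        rend.material.mainTextureScale = new Vector2(frac, frac);
        rend.material.mainTextureOffset = new Vector2(tileX * frac, tileY * frac);
    }
}
```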

I re-learned some things about instantiating and destroying prefabs. I spent a little too much time re-learning that a switch/case runs on break; statements and yes, they mean it: break only exits the switch, but if a case ends in a return instead, any code you had in that function below the switch/case is not getting run, which is probably why it is not as effective as you thought it would be. At least, I think that's what happened ... regardless, it works now. I re-learned that your first level design instincts are often too big. This level is actually pretty small for a platformer now, but only because I wanted everything to feel closer together. In a sense all the objects really are kind of tetris-jammed together, but it's really the same set of elements from the larger draft of the level, just brought in tighter, I think I only deleted like two objects. It's odd that I got a lot of stuff right the first time but I got the scale totally wrong.
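The break; lesson, distilled into a sketch (not my actual pickup code): break only exits the switch, while a return inside a case bails out of the whole function, skipping whatever came after.

```csharp
using UnityEngine;

public class SwitchLesson : MonoBehaviour
{
    void OnPickup(int passCount)
    {
        switch (passCount)
        {
            case 5:
                Debug.Log("all bus passes collected");
                break;  // exits only the switch; the line below still runs
            case 0:
                Debug.Log("none yet");
                return; // exits the whole function; the line below is skipped
            default:
                Debug.Log("keep looking");
                break;
        }

        Debug.Log("this runs after any case that ended in break");
    }
}
```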

The actual design of the level, given the constraints imposed by the grid, the jump tuning, and the five bus passes (plus a bus stop), was dirt simple. Figure out how to make a bus pass hard to get to, then do four variations. One is on a catwalk, another is tucked behind a roving skeleton, one is in a corner. None require run jumps but all are made easier by run jumps, run jump is frankly a little OP right now but it feels better than it did. I even started to polish a little by adding particle effects to the bus passes, but of course now I'll need particle effects for everything...


Next is the level-end bus-pickup "experience", the fail/restart, maybe some options geegaws, and another say three levels, all built with similarly small tilesets, maybe one extra enemy per level and that's about it. With my schedule the end is still months away, but I can see it, coming ever more clearly into focus. Gotta get this puppy out the door!

Tuesday, March 12, 2013

Performance Anxiety



What little rationale there was behind building a 2D tile-based platformer in Unity went something like this: learn the tools and workflow on something small enough that you won't have to worry about optimizing for performance. This was naive.

The tile map implementation I described previously involves painting tiles in the scene view, each tile consisting of a textured mesh. When I picked my head up from my second layout of the opening level (wonder what number that will get up to) and pressed Play, Unity thought for a while. Twenty-one seconds, to be exact. My gameplay and scenery grids combined were holding 8087 tiles, each of which contained a single quad.

Many things were vexing about this. For one, the stats window showed 384 verts comprising 192 triangles, which didn't seem like all that many to me. For another there was the fact that after that twenty second startup, Unity seemed pleased enough to hum along at a triple-digit FPS while the playfield was traversed; there was no in-game performance hit. Without a deeper understanding, or the CPU/memory profiler Unity offers for a mere fifteen hundred bucks (along with a Pro license), there was little I could do but conclude that I was just Loading Too Many Things.

Idea #1: "Well, how many is too many?"


My Test mind kicked in. I wanted to repro the issue, isolate it from its context. I made the scene shown above, with a total of 10180 red cubes, boasting 104,800 vertices making 52,400 triangles. The whole thing loads in less than three seconds. OK, interesting. Apparently 8087 tiles is not too many. What else is going on?
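The repro scene was nothing fancy, roughly this (counts and spacing are approximate, tuned to land near that cube total):

```csharp
using UnityEngine;

// Fill the scene with a wall of cubes at startup to find out how
// many objects is "too many". No prefabs involved, which turns out
// to be the important difference.
public class CubeStressTest : MonoBehaviour
{
    public int columns = 101;
    public int rows = 101;

    void Start()
    {
        for (int x = 0; x < columns; x++)
        {
            for (int y = 0; y < rows; y++)
            {
                GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
                cube.transform.position = new Vector3(x * 1.5f, y * 1.5f, 0f);
            }
        }
    }
}
```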

Idea #2: "It's the materials."


I was doing what I thought was a really swell thing by confining my entire city scene to one 256 x 256 texture made of tiles, which were arranged for display, uv-wise, by a material unique to each tile. Maybe that's bad? Really grasping here, and that's not even mentioning the stuff I thought of before the red cubes, like stitching all the meshes together at runtime (possible, but goodbye texture data, and anyway I tested it and the stitching would happen in script after my startup problem had already come and gone, so who cares?).


Anyway the point is I was desperate for a simple answer. I decoupled all the materials from all the prefabs, turning everything in the level pink. My start time somehow went up to 24 seconds.

Idea #3: "It's a script"


Since this was a one-time perf hit at startup, perhaps one of the scripts was looping over the tiles at startup in some poorly thought out way. I feel like I would have noticed that earlier than I did? Whatever though. Let's comment everything out. 24 seconds.

Idea #4: "Uhh maybe ... the tile editor's Instantiation method, isn't that deprecated?"


Ugh fine whatever. Let's rebuild the level from scratch, with the same tools, and figure out where and when the perf hit starts to happen. OK looks like when we get over about a thousand of these we start to bog down. With the new version of Instantiate, and without materials. These objects are piling up at around 1000, and the cubes were loading like butter into the quintuple digits.

Idea #5: "Something about a mesh as opposed to a cube?"


Turned off shadows. Tried a cube. Tried a sphere. 


PrefabUtility.InstantiatePrefab and its deprecated Editor cousin allow you to do things like paint in scene view. You're painting with prefab instances, so it's hard for me to conceptualize why you would need to Instantiate all those again at runtime, because I already did that, I'm looking at them, they're right there; but I guess what I'm looking at could be a preview of some kind. It would make sense that it would take some time and resources to bring all those cubes into being, and that's really the only thing left that separates that tower of cubes from my level: the cubes aren't prefab instances.

That points me toward really dreadful things like asynchronous level loading (another pro feature), and eventually I arrive at

Idea #6: "I guess we'll have to find a way to render a level without loading thousands of tiles"


I guess you could, you know, use a hundred or so tiles for sidewalks and platforms, then just draw some backgrounds and put them on big quads. Rather than, you know, making a skyscraper out of hundreds of individual meshes, each uv mapped to part of a single small texture, a "solution" that somehow joins the worst aspects of all possible approaches. I was so close, though! Throw that texture in and just paint the level, so convenient! Wait, hold on, if prefabs are the problem, why not

Idea #7: Alter the editor window script to stop instantiating prefabs and instead make meshes from scratch with the appropriate materials on them!


Oh lord. All right, worth a shot. Redid the tileMap code to make a primitive again, like the demo script, instead of instantiating. Used cubes, as Unity has no Quad primitive type. Rotated the cubes. Updated the custom editor to paint materials instead of prefabs. Applied a variety of textures to simulate what I'm actually doing in the level.
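The heart of the reworked painter looks something like this. It's a sketch: the editor-window plumbing that calls it is omitted, and the names are mine:

```csharp
using UnityEngine;

public static class TilePainter
{
    // Build a tile as a primitive with a shared material, instead of
    // instantiating a prefab. A rotated cube stands in for the Quad
    // primitive Unity doesn't have.
    public static GameObject PaintTile(Vector3 gridPos, Material tileMaterial)
    {
        GameObject tile = GameObject.CreatePrimitive(PrimitiveType.Cube);
        tile.transform.position = gridPos;
        tile.transform.rotation = Quaternion.identity;

        // sharedMaterial, so thousands of tiles don't each clone a material
        tile.GetComponent<Renderer>().sharedMaterial = tileMaterial;
        return tile;
    }
}
```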


Bam. 7956 cubes. Feels like between three and four seconds. Now, though, even that length of load feels kind of intolerable. Eh, I think if I do the buildings as scaled-up single quads and leave the tiles for the floor and platforms I'll be OK, shouldn't be more than a second. The lesson, as far as I can tell, is: prefabs are slow. That is, unless after my third buildout of the level I find out the problem is actually something else...