
A* can be a fantastic foundation for smart enemies - it's about more than just finding the shortest path! This first tutorial gives an introduction to A*, shows a way of creating a suitable NavMesh, then uses it to implement the algorithm.

Motivation

You should read this article if:

  • You want smarter enemies (and not just ones that follow the shortest path!)
  • You want to understand an A* pathfinding algorithm implementation in Unity
  • You'd like to learn about building a navmesh
  • You'd like to see LINQ and threading in action

Resources

Project Video

A quick introduction to the project associated with this article.


Fundamentally Smart

A* is the basic way to make your enemies smart - because it's not just about finding the shortest path between two points.

With A* you can:

  • Find the path between two points
  • Based on the current threat level:
    • Find the shortest path
    • Find the path that avoids pinch points
    • Find the path that keeps in cover
    • Find the path that avoids where the player is looking
    • Find the best retreat from a fight
    • Find the best path to draw the player into crossfire
    • Find the best path to draw the player away from an asset
  • Find the path that offers the most cover from a current attacker

Kinda cool!  In fact A* can do pretty much anything you can think of, presuming you have the nous to write an algorithm that weights a path position for a given outcome.  In part #2 we will look at examples of this kind of intelligence - but first you need to understand what A* is and how to implement it.

A*

Ok, it's got a silly name, I admit it - and it can also be quite intimidating.  You can find some great implementations of A*, including Aron Granberg's excellent A* Pathfinding Project - but there's nothing quite like understanding it for yourself, even if you end up using someone else's implementation.

A* uses a "best-first" approach to finding a path between the start and end points - in other words, it tries the seemingly shortest way first, then backs up and tries the next-best options when a route turns out to be blocked.

This is how A* works:

  • Create an open set for nodes that have been discovered during pathfinding but not yet considered
  • Create a closed set for nodes that have already been considered
  • Add the starting position to the open set

For each node we need to keep two scores.  The first is the lowest known cost of getting to the node.  Later on we will use cost as something other than distance, but for now imagine it is the number of world units travelled to reach this node (I'll show you how to calculate it next).  We call this the g score.

The second score is our best guess at the total distance from the start to the end position via this node.  How do we guess?  We take the known cost of getting to this node, estimate the remaining distance to the goal (in our case using the Manhattan distance), and add them together.  We call this the f score.

Manhattan distance is the distance you would have to travel in a city made up of blocks, where at each intersection you can only move North, South, East or West.  In such a system there are no diagonals - no short cuts, in other words.  The benefit of Manhattan distance is that it involves no square root - we just add up the absolute differences in the x and y coordinates - yet it remains a reasonable estimate of the distance.
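As a concrete sketch of those two scores (in Python for brevity - the project itself is C#):

```python
def manhattan(a, b):
    # No square root needed: just sum the absolute axis differences
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def f_score(g_score, node, goal):
    # f = known cost so far (g) plus the estimated remaining distance (h)
    return g_score + manhattan(node, goal)
```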

Ok, so that's the setup - next, the actual algorithm:

Take the best node (lowest f score) in the open set and check whether it's the goal position; if it is, we're done.  Otherwise, find all of its walkable neighbours that haven't already been processed and, for each one, calculate the cost of reaching that neighbour via the current node.  If we don't already know a better (less costly) way of reaching the neighbour, set up an open set node for it with the cost of getting there and our estimated cost to the end position.
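That loop can be sketched end-to-end (again in Python for brevity - the project's C# version appears later; here a simple list of lists, with True marking walkable cells, stands in for the NavMesh we build below):

```python
def astar(grid, start, goal):
    # grid[y][x] is True where walkable; start and goal are (x, y) tuples
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan estimate
    g = {start: 0}                 # lowest known cost of reaching each node
    f = {start: h(start)}          # g score + estimated distance to the goal
    came_from = {}                 # best predecessor of each node
    open_set, closed = {start}, set()
    while open_set:
        best = min(open_set, key=f.get)       # node with the lowest f score
        if best == goal:                      # reached the target: rebuild path
            path = [best]
            while best in came_from:
                best = came_from[best]
                path.insert(0, best)          # walk backwards to the start
            return path
        open_set.remove(best)
        closed.add(best)
        x, y = best
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if n in closed:
                continue                      # already processed
            if not (0 <= n[1] < len(grid) and 0 <= n[0] < len(grid[0])):
                continue                      # off the map
            if not grid[n[1]][n[0]]:
                continue                      # unwalkable
            tentative_g = g[best] + 1         # each step costs one cell
            if tentative_g < g.get(n, float("inf")):
                g[n] = tentative_g            # found a better route to n
                f[n] = tentative_g + h(n)
                came_from[n] = best
                open_set.add(n)
    return None                               # open set exhausted: no path
```

The `tentative_g` comparison is the "do we already know a better way of reaching the neighbour" test from the description above.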

An example may help:

Here we have a starting position (in green) and a goal (in yellow).  On the first step of the algorithm we find all of the walkable neighbours and calculate a g and an f score for each of them.

The node with the lowest f score is the one indicated by the green arrow, so we consider that one next.  On this step some candidates will have a g score via the current node that is higher than the g score from the first test - so they won't be updated (in the graphic below they have a tg which is higher than the cell's g):

On the next step we again take the lowest-scoring open set node (the node in red is the only one currently in the closed set):

After processing that - again we take the lowest f scoring node:

After that, in this case it's just a straight run for the finish:


Here's an example of the A* algorithm seeking a target. Red to green indicates the distance from the target - open blue nodes are the ones in the open set, the coloured ones are in the closed set.


NavMesh

So in order for that algorithm to work we need something to represent the walkable and unwalkable positions in a terrain - we call such a thing a NavMesh, and the simplest kind is a grid.  We can create a grid map that overlays the terrain and provides a simplified picture of it for our algorithm.

In this picture we take an example scene and overlay a grid showing, in green, the walkable areas.

In Unity creating one of these is surprisingly easy!  All we need is a cube to define the bounds of our map; then we raycast each cell to see whether it is walkable or not.

First we need a way of representing a cell in the map - for now we just care about walkable/unwalkable and the height of that cell in the world:

	//Representation of a grid cell
	public class GridCell
	{
		public bool walkable;
		public float height;
	}

Now we need the map itself - clearly that should be a 2D array of these cells:

	//Width and height of the map in cells
	int width, height;
	//Active map of the world
	public GridCell[,] cells;

And to convert from world coordinates to grid coordinates we need a size for each cell:

	//Size in world units of a cell
	public float cellSize = 0.5f;

So we will now build a cube in the world, attach our script to it and use the bounds of its renderer to map where we want to put our navmesh (obviously we'd normally use a semi-transparent material!).

We need to work out the size of the map from our cube's bounds and the cell size, and then use a couple of nested for loops to find the terrain or other colliders.  We also need a layer mask to indicate which colliders are walkable and which are not.

		//Scan the bounds of the model and create a grid
		bounds = renderer.bounds;
		//Work out the top left corner
		topLeftCorner = bounds.center - bounds.extents + new Vector3(0, bounds.size.y, 0);
		//Calculate the dimensions of the grid map
		width = Mathf.RoundToInt(bounds.size.x / cellSize);
		height = Mathf.RoundToInt(bounds.size.z / cellSize);
		//Create the grid map
		cells = new GridCell[width, height];
		//Scan for walkable terrain in each cell
		for(var x = 0; x < width; x ++)
		{
			for(var y = 0; y < height; y++)
			{
				//Get the position for a ray
				var currentPosition = topLeftCorner + new Vector3(x * cellSize, 0, y * cellSize);
				RaycastHit hit;
				//Create a cell for the grid
				var cell = new GridCell();
				cells[x, y] = cell;
				//Cast the ray
				if(Physics.Raycast(currentPosition, -Vector3.up, out hit, bounds.size.y))
				{
					//The height of the highest item in the cell
					cell.height = hit.point.y;
					//Test if the thing we hit was walkable
					if(((1 << hit.collider.gameObject.layer) & walkableLayer) != 0)
					{
						//Flag the cell as walkable
						cell.walkable = true;
					}
				}

			}
		}

When that's finished we want to run some "post processing", because at the moment the grid has no concept of the width of a character - we can bake standard character widths into the mesh.  In the example project for this tutorial we have built a plug-in system that calls specially decorated classes - passing them the grid we make and letting them modify it.

Using that we can define a Radius Modifier that makes unwalkable areas a bit larger:

using UnityEngine;
using System.Collections;
using System.Linq;
using System.Collections.Generic;

[ProcessingPriority(1)]
public class RadiusModifier : BuildGraph.IProcessGrid {
	#region IProcessGrid implementation
	//Post process the grid
	public void ProcessGrid (BuildGraph builder)
	{
		//Make a list of nodes to flag as unwalkable
		//we can't immediately update them or the
		//whole thing would go horribly wrong as it
		//scanned its own output!
		var unwalkable = new List<BuildGraph.GridPosition>();
		//Run through the grid
		for(var x = 0; x < builder.width; x ++)
		{
			for(var y = 0; y < builder.height; y++)
			{
				//Get a current position
				var currentPosition = new BuildGraph.GridPosition { x = x, y = y };
				//Get all of the neighbours within 2 grid units and see if any
				//of them are not walkable
				if(builder.GetNeighbours(currentPosition, 2).Select(cell=>builder.GetCell(cell)).Any(gc=>!gc.walkable))
				{
					//If so add this cell to the unwalkable
					//list
					unwalkable.Add(currentPosition);
				}
			}
		}
		//Update the map
		foreach(var unwalk in unwalkable)
		{
			builder.GetCell(unwalk).walkable = false;
		}

	}
	#endregion

}

Seeking a Path

Having made a reasonable implementation of a grid navmesh, all we need now is some pathfinding!

We create a seeker script that is attached to the thing that wants to follow a path:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class Seeker : MonoBehaviour {

	//Target object
	public Transform target;
	//Cache to know what our
	//last path was for
	Vector3 lastDestination;

	//Route to follow
	public List<Vector3> route = new List<Vector3>();
	//Position on the route
	public int routePos;

	//Something that knows how to move
	//our character
	Movement movement;

	void Awake()
	{
		//Something that can be told where to go
		movement = GetComponent<Movement>();
	}

	// Update is called once per frame
	void Update () {
		//Check for a target
		if(target == null)
			return;

		//Get the position from the target
		var targetPosition = target.position;
		//Has it moved
		if(targetPosition != lastDestination)
		{
			//Update the cache
			lastDestination = targetPosition;
			//Ask the grid to build a path
			BuildGraph.instance.SeekPath(transform.position, targetPosition, (path)=>{
				//This is called when we get a result
				if(path == null)
					return;
				//Update the route
				route.Clear();
				route.AddRange(path);
				routePos = 0;

			});
		}

		//Are we at the end of the route?
		if(routePos < route.Count)
		{
			//Set the target to the current route position
			movement.targetPosition = route[routePos];
			//Check for the next route position
			var distance = Vector3.Distance(route[routePos], transform.position);	
			if(distance < movement.speed/2)
				routePos++;
		}

	}
}

This will track a scene object and try to reach it - every time the target moves it calls our SeekPath function, which is where the magic happens.  Note that we use a callback to receive the path so that the search can run on another thread.  This script interacts with a movement script which simply moves towards a target position - the seeker updates that target as the enemy follows the path.
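The Loom utility used here runs work on a background thread and marshals the callback back onto the main thread (Unity's API may only be touched from the main thread).  A minimal language-agnostic sketch of that pattern, in Python for brevity - the names `run_async` and `pump` are illustrative, not part of any library:

```python
import threading
import queue

# Callbacks waiting to be executed on the main thread
main_thread_queue = queue.Queue()

def run_async(work, on_complete):
    # Run `work` on a background thread; deliver its result on the main thread
    def worker():
        result = work()
        main_thread_queue.put(lambda: on_complete(result))
    threading.Thread(target=worker, daemon=True).start()

def pump():
    # Called once per frame on the main thread (like Loom's Update)
    while not main_thread_queue.empty():
        main_thread_queue.get()()
```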

A* Implementation

Ok so it's time to look at the A* SeekPath function itself:

	//Find a path between two positions
	public void SeekPath(Vector3 startPosition, Vector3 endPosition, Action<IEnumerable<Vector3>> onComplete)
	{
		//Scan if we don't have a map yet
		if(cells == null)
			Scan();

		onComplete = onComplete ?? delegate {};

		//Start and end in grid coordinates
		var start = GetGridPosition(startPosition);
		var end = GetGridPosition(endPosition);

		//Run multithreaded
		Loom.RunAsync(()=>{
			try
			{
				//Set of considered nodes
				var closedSet = new Dictionary<GridPosition, Node>();
				//Set of all nodes processed (so we can rebuild the path)
				var map = new Dictionary<GridPosition, Node>();
				//Set of nodes yet to be considered
				var openSet = new Dictionary<GridPosition, Node>();

				//Create a node for the start
				var startNode = new Node { f_score = end.Distance(start) };
				//No cost
				startNode.g_score = 0;
				//Add to the map and opensets 
				map[start] = startNode;
				openSet[start] = startNode;
				//Get the time so we can abort if it takes too long
				var currentTime = DateTime.Now;
				//While we have nodes
				while(openSet.Count > 0 )
				{
					//Check if we should abort
					if((DateTime.Now - currentTime).TotalMilliseconds > maxSeekTime)
					{
						break;
					}
					//Get the best possible node
					var best = openSet.Aggregate((c,n)=>c.Value.f_score < n.Value.f_score ? c : n);
					//Remove it from the open set and add it to
					//the closed set
					openSet.Remove(best.Key);
					closedSet[best.Key] = best.Value;
					//Have we reached the target?
					if(best.Key == end)
					{
						//Recreate the path
						var path = new List<Vector3>();
						var scan = best.Value;
						//Add the actual end position
						path.Add(endPosition);
						//Scan backwards from the end of the path
						//until scan.cameFrom is GridPosition.zero
						while(scan != null && scan.cameFrom != GridPosition.zero)
						{
							//Add the current node to the START of the path
							//thereby reversing the direction of the list
							path.Insert(0, GetWorldPosition(scan.cameFrom));
							//Get the next node
							scan = map[scan.cameFrom];
						}
						//Update the caller
						Loom.QueueOnMainThread(()=>{
							onComplete(path);
						});
						return;

					}

					//Get all of the neighbours of the current cell
					//that are walkable
					foreach(var cell in GetNeighbours(best.Key).Where(c=>GetCell(c).walkable))
					{
						//Have we processed this already?
						if(closedSet.ContainsKey(cell))
						{
							continue;
						}
						//Work out the cost to the neighbour via the current node
						var tentativeGScore = best.Value.g_score + GetWeightOfMovingBetween(best.Key, cell);
						Node currentNode;
						//Is the neighbour already open?
						if(!openSet.TryGetValue(cell, out currentNode))
						{
							//If not then create a node for it
							//this will have a maximum g_score
							currentNode = new Node();
							//Add it to the map and the open set
							map[cell] = currentNode;
							openSet[cell] = currentNode;
						}
						//Is the new g_score lower than the
						//current one?
						if(currentNode.g_score > tentativeGScore)
						{
							//Update the openset node with this
							//new better way of getting there
							currentNode.g_score = tentativeGScore;
							currentNode.cameFrom = best.Key;
							currentNode.f_score = tentativeGScore + cell.Distance(end);
						}

					}

				}
				//Indicate failure
				Loom.QueueOnMainThread(()=>{
					onComplete(null);
				});
			}
			catch(Exception e)
			{
				Debug.Log(e.ToString());
			}
		});

	}

As you can see we have a special struct representing grid positions, called GridPosition, and we call a number of helper functions to manipulate them - I'll leave you to dig into the code if you want to learn more about those.  Hopefully you can see how the algorithm processes the nodes and the sets.  We have chosen Dictionaries to store the data because of their O(1) lookups - so node access is very fast - but our "find the node with the best f_score" step runs in O(n) time, so if the routine has to back up often it can start getting expensive.  Still, this seems the best trade-off between clarity of code and performance in this case.
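If that O(n) scan ever does become a bottleneck, the usual fix (not what this project does - it keeps the clearer linear scan) is to hold the open set in a binary min-heap so the lowest f score pops out in O(log n).  A sketch of just that piece using Python's heapq, with hypothetical `push`/`pop_best` helpers; since most heaps can't re-prioritise an entry in place, a better score simply pushes a second entry and stale ones are skipped on pop:

```python
import heapq

open_heap = []   # entries are (f_score, node) pairs, smallest f first
best_f = {}      # current best f score for each open node

def push(node, f):
    # Only push if this is an improvement; the old entry becomes stale
    if f < best_f.get(node, float("inf")):
        best_f[node] = f
        heapq.heappush(open_heap, (f, node))

def pop_best():
    while open_heap:
        f, node = heapq.heappop(open_heap)
        if best_f.get(node) == f:     # ignore superseded (stale) entries
            del best_f[node]
            return node
    return None                        # open set is empty
```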

Conclusion

So, a lot of theory in #1 - in #2 we will get on to the actual job of making enemies smarter, looking at ways of detecting pinch points and cover, then applying that to smarter pathfinding.

Tutorial Project

You can download the project from here.  It includes some great free assets from the Asset Store :)

In the project there is an enemy that follows a sphere - the scene view draws a line showing the path, and in the game view you can click on the ground to move the enemy there and control the camera angle with the scroll wheel and the right button.


49 Responses to "A* #1: All Journeys Start with a Single Step"

  • Joseph
    November 27, 2012 - 4:31 pm Reply

    I’ve downloaded the AStar.unitypackage (twice) and every time I try to install the AStar package, I get an error message. It decompresses okay, but when it starts to import, 2/3 of the way in, it gives me this error message:

    Fatal error!
    type == kMetaAssetType & pathName.find(“library/metadata”) != 0

    …and then unity shuts down.

    How can I solve this problem. I’m using Unity version 3.5.5. Thank you!

    • whydoidoit
      November 27, 2012 - 5:57 pm Reply

      I am reuploading the package (which is exported from 4) hopefully that will work for you (but it will take a while to upload!)

      • Fraser
        February 5, 2013 - 1:10 pm Reply

        Hey, I’ve just downloaded the file, and it is doing what Joseph described above – except I am using Unity 4.

        Any chance of both versions being put up?

        Great article :)

        • HB
          March 22, 2013 - 6:13 am Reply

          Hi where’s the unity 4 upload for A*

      • deltamish
        April 8, 2013 - 3:46 pm Reply

        Really Nice Tutorial Loved it Can you please make a tutorial on how to mae an NPC think tatically(just take the path that has a lot of cover)

  • whydoidoit
    November 28, 2012 - 2:30 am Reply

    A new Unity 3 project has now been uploaded and verified working.

  • Joseph
    November 28, 2012 - 12:05 pm Reply

    It fixed the problem. Thank you for the U3 upload. As a matter of practice, I usually do not upgrade to new versions immediately because of the experience of knowing that new versions usually have bugs and I don’t like to “break” projects I’ve created thus far. I usually wait till x.1 versions appear after all bugs have been reported by users. This is just from experience with many products I’ve used over the years. I hope this is not an inconvenience to you and I hope that you’ll continue to compile under the 3.5.X version of Unity for now. Thank you for your great contributions!!

    I do have two questions:

    1) Can you (or will you) create tutorials on Influence Maps? I’ve not seen anyone create any IMs using Unity and getting information on implementing them on Unity has been a struggle for me. The combination of IMs and A* navigation/waypoints is powerful if used together, but I don’t know where or how to get started on that.

    2) Will you be doing any tutorials on dynamic A*? I’ve not seen any D* implementation as well. The unity NavMesh doesn’t seem to allow for real-time changes of nav planning after an static object has been moved (such as damaged bridges or road blocks). This too I would like to use with Influence Maps.

    Again, Thank you so very much!

  • whydoidoit
    November 28, 2012 - 12:09 pm Reply

    Funny – I was just fiddling with influence maps – the pattern is to allow “plug-ins” to overlay their data on the underlying grid and have the Seeker provide a series of intelligence options – so the first of these is simple stuff like hide in cover, avoid pinch points etc.

    The next step is to use an overlay where the seekers write themselves into an overlaid grid and probably use some kind of vector driven influence in the cells which then push the other seekers out of the way.

    Unfortunately I’m a bit stuck with using U4 for work, but I will now maintain a U3 version for this series.

  • Joseph
    November 28, 2012 - 12:58 pm Reply

    I read an article about Influence maps where there can be multiple layers of maps used to help characters make decisions about their topological and game-state analysis and then navigate based on:

    1) My Influence – All Influence coming from my units,buildings etc
    2) Opponent Influence – All influence coming from opposing units,buildings etc
    3) Influence map – Calculated as (My Influence – Opponent Influence)
    4) Tension map – Calculated as (My Influence + OpponentInfluence)
    5) Vulnerability Map – Calculated as (Tension map – Abs(Influence map))
    6) Block map – Information about tiles in the influence map that are blocked.
    7) EnemyBlockMap – Information about tiles in the influence map blocked by being inside too strong an enemy influence.
    8) Goal map – A map where we can add in extra influence to guide our troops to different locations.
    9) Area map – A map containing an ID for each tile. Having the same ID means there is a possible path between the tiles.
    10) Strength map
    11) Final Influence Map – (Influence Map + Goal map).

    I got this info from a site. But I think you understand what I mean. By having this type of information, I can dictate how characters can make decisions about making flanking enemies from proper positions, what buildings to navigate around to make assaults, how far to place artillery units for maximum effect without being assaulted, etc. I don’t expect you to address these issues, but If you can create a small influence map tutorial implemented with Unity, using some dynamic navmesh or A*, that would be very helpful.

    I’ll be upgrading to U4 after this series. What you’re doing is fantastic and I’ve learned quite a bit from the Unity and C# perspective of game development. I come from the C++ world (almost 20 years+) and I’ve been using Unity for a year and this site has really helped me fill some gaps! Thank you!!!

  • whydoidoit
    November 29, 2012 - 7:33 pm Reply

    Sure, I will be covering a lot of that stuff in the next part – at least the methods of actually starting the process and building clusters from influence map raw data…

  • whydoidoit
    November 30, 2012 - 8:51 am Reply

    Did I see a video of yours on Reddit recently? Assassin’s Creed clone? If so I was wondering if your surrounding AI was driven by influence maps or something else?

  • Joseph
    November 30, 2012 - 2:15 pm Reply

    @Anomalous Underdog
    I’ve actually played around with your influence code about several months ago and found it quite interesting! However, there was no documentation provided that with the code or demo that sufficiently explained what you were attempting to achieve or how it worked. The code seems to be written very well, but there is virtually no comments attached to the source code and I spent quite a number of days trying to figure out what did what and what some of the variables actually represented.

    Also, there was no real demonstration (as far as I could tell) of characters being influenced by the presence of other characters in local vicinity, so I was not really able to fully grasp what you were trying to accomplish.

    I do believe that you’ve laid a really good foundation for possibly building a really nice influence map structure. If you could, however, provide a linkage between the IM and with some tactical decision process, showing how it would work in a real game, that would have been extremely helpful!!! Oh, and btw, thanks for the contribution. I’m hoping that between this lesson and your implementation of your IM, I might be able to finally tackle IM’s in Unity! :)

  • Anomalous Underdog
    December 1, 2012 - 4:54 am Reply

    @whydoidoit: Yeah, though the encircling AI I made does not really make use of A* or influence map as that’s pretty overkill (there’s only one player unit anyway). At least for now. I do want to try some combination of influence map and what they call “collaborative diffusion” so the enemies don’t clump up together in one place. I’d probably add A* too once I add some obstacles.

    Right now, I simply have waypoints surrounding and following the player in a circle arrangement, and the enemies move through that in a clockwise or counter-clockwise fashion, whichever takes them to the player’s backside faster.

    @Joseph: The links I gave in that forum post regarding influence maps explain it very well. Judging from your previous posts here, it seems you got the gist of what influence maps are already.

    As you can guess that was just a quick experiment that I was stumped on and I was hoping other people would help me figure out how to do it better.

  • Joseph
    December 1, 2012 - 8:57 am Reply

    @Anomalous Underdog:
    I came across your blog at: Collaborative Diffusion/Influence Maps For Obstacle Avoidance
    (http://anomalousunderdog.blogspot.com/2011/12/collaborative-diffusioninfluence-maps.html) and found your demo (http://dl.dropbox.com/u/25260770/PathfindingTest/WebPlayer.html) using collaborative Diffusion very interesting! You wouldn’t happen to have a link for the Unity code showing how you implemented the algorithm?

    I read the article (http://scalablegamedesign.cs.colorado.edu/wiki/Collaborative_Diffusion) on Collaborative Diffusion and it seems to me that it’s still an Influence map. The author doesn’t really specify the difference or advantages. Do you know what they might be?

    Thanks

    • Anomalous Underdog
      December 2, 2012 - 2:59 am Reply

      @Joseph: Yes they are very similar if a bit different in intent. Collaborative diffusion mainly concerns proper dispersal of AI agents across the map without making them clump together (“if you already took that route, then I’ll take this one instead”) especially in bottleneck areas. It lets each agent pick their path to a goal in such a way that they won’t mindlessly bump into each other.

      I’m pretty sure you can implement such an idea with influence maps (or A*) too. The article presents a simplified influence map-like approach to it, but I don’t think the specific method it showed is the only way to achieve the idea of collaborative diffusion.

      Unfortunately I lost the source code to that experiment when one of my hard drives failed, but it’s pretty easy to implement, I’m sure you have ideas how to make it already.

      If it was A*, what I’d do for starters is, once an AI picks a path, I’d forcibly raise the heuristic score for the cells he took (and possibly a little bit for the cells surrounding that path), so that the other AIs wouldn’t likely choose the path he already decided to take. Once that AI finished passing those cells, I’d lower their score to normal again, one at a time, every time he passes by each cell. So that those cells can be used by other AI again.

      I’d imagine this makes the AIs not overtake each other. If you don’t want such obedient behaviour, you can tweak the algorithm to your liking.

      The fundamental idea here is that basic A* is to code attraction to a goal point, but what you can also add to that, is repulsion, to places that the AI shouldn’t move to. The article on this page even mentions that briefly.

  • whydoidoit
    December 2, 2012 - 3:05 am Reply

    So for part #2 of this I have a basic influence map generator with flood fills to show pinch point areas – to identify walkability between any 2 given segments and stuff like can I get from A to B without passing through a pinch point. I’m going to add stuff for unit effectiveness next.

    The basic idea is to overlay any number of maps – the pinch point is a good example as it looks for a very simple concept – the influence on a square of walkability (a count of the number of perimeter squares which are unwalkable). Having created that measure it then locates an area of “pinch”, finds the maxima in connected cells and flood fills a second map with an index for that area – then it repeats this. The practical upshot is you end up with:

    * A map of pinch (each square indicates how strong a pinch it has)
    * A map of pinch areas where each unique area has a different integer id
    * A descending sorted list of those unique areas

    Obviously I will then apply that to the concepts of dominance and tension – but it should provide a number of very useful access points to the interpreted data.

  • Joseph
    December 2, 2012 - 9:29 am Reply

    @Anomalous Underdog:
    I’ve been reading the pdf paper provided by the author discussing the theory of Collaborative Diffusion. The more I read through it, the more I see it’s potential. It does look a lot like influence maps, mixed in with field potentials (if that’s the proper analogy to use), and some steering behaviors.

    Using his method, you say that the “I’ll take this route while you take that route” is the behavior that the NPCs utilize so as to not clump together. According to the author, he uses ‘anti-objects’ in order to achieve this. In essence, he is making the path environment (floor) decide which NPC to navigate toward it’s goal. So, if there are 4 NPCs, and there are 4 routes to take to get to a goal, then the ‘floor’ will make each NPC take each separate route, so as to avoid clumping together. My question to you, since you seem to have a working knowledge of this, is what if I don’t want the NPCs to split up on that specific occasion? I might, for instance, have a squad of soldiers, and I want the soldiers to travel in a file or column formation during an assault? How would the ‘floor’ know NOT to break them up at that moment in time? Do you see where I’m coming from? I would have to either write code into the anti-object floor, or I would have to write OOP code into the agents that are supposed to be void of these decision making behaviors (according to the anti-object premise). I’m still trying to get a grasp of how this would work in practice in various tactical situations. So, are you saying that the collaborative diffusion would have to be mixed in with A* as well as creating tactical waypoints at choke points, doorways, etc. with behaviors instructing the agents to abort dispersal and so forth? If so, then why not just use multiple layers of influence maps, mixed with potential fields and tactical waypoints and connections?

    I also noticed in the paper that they make no mention of Dijktra’s flood fill, nor of BFS or DFS flood-filling. They use a heat diffusion equation to fill the cells (in real-time) that I think is very interesting! I get the equation that they use. What I don’t get is what is the algorithm for not spreading out if there is a wall or object blocking the diffusion? Thanks for the replies!

    @whydoidoit:
    While you’re calculating choke (or pinch) points, is it possible to also show how to determine ambush points, since they do go hand-in-hand. In other words, a red squad will try to avoid a choke point while a blue squad recognizes the existence of that choke point and positions itself accordingly (behind walls, objects, trenches, etc.), waiting for the red squad to get into the choke (or ambush zone) area and then be attacked. Also, will you be employing decision/behavior trees or FSMs?

    Thanks!!

  • whydoidoit
    December 2, 2012 - 11:27 am Reply

    Well precisely – so my identifier technique lets you do this:

    * Enemy identifies player as probably trying to reach one of two or three goals
    * Using the pinch point identifiers it can instantly identify the most likely ambush points
    * “Cover” detection can provide the opportunity to choose the best hiding places

    As for a model – I normally mix Bayesian/decision tree stuff with the Orwellian state machine pattern to provide master/commander/unit control.

  • Anomalous Underdog
    December 3, 2012 - 3:14 pm Reply

    @Joseph: That’s right. Don’t think that any one method by itself is like a silver bullet that solves everything. Think of them as tools in a toolbox where each tool is appropriate for particular situations. Squad formations are a different topic but you can certainly mix that with pathfinding. Just understand that you need to add an additional squad-formation-making system/code on top of your pathfinding to pull it off.

    For something like a column/file formation I’d imagine only the one at the front really does the pathfinding to the goal. The rest just do pathfinding to the person in front of them instead. Preferably you’d want to code this in such a way that it’s easy to turn this following behaviour on or off in an instant, so that the formations remain flexible (anyone can become the new guy at the front, break off into sub-groups, etc.)
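    The follow-the-leader idea above could be sketched like this (all names here are illustrative, not from the tutorial): each unit records who it is following, and only the leader’s path target is the real goal.

```python
class Unit:
    """A squad member; `following` is the unit directly ahead, or None."""

    def __init__(self, name):
        self.name = name
        self.following = None

    def path_target(self, goal):
        # Followers path to the unit ahead of them; the leader paths to the goal.
        return self.following.name if self.following else goal


def form_column(units):
    """Chain units so each follows the one before it; returns the leader."""
    for ahead, behind in zip(units, units[1:]):
        behind.following = ahead
    return units[0]


def break_formation(units):
    """Turning following off restores independent pathfinding for everyone."""
    for u in units:
        u.following = None
```

    Because forming and breaking the column just flips one reference per unit, the formation stays flexible in exactly the way described: anyone can become the new front unit in an instant.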

  • Jay Kay
    December 8, 2012 - 5:04 am Reply

    Hi. There is one thing I have never understood: how is the heuristic value determined? Every guide I read just says “estimated distance to the last position”. And from above: “then we estimate the distance to the goal (in our case we will use the Manhattan distance) and add them together. We call this the f score.” I cannot understand where this number comes from. Is it (distance in X axis from start) + (distance in Y axis from start)? Thanks for the great ‘site and all your work =]

    • whydoidoit
      December 8, 2012 - 8:08 am Reply

      So we have the g score – which is how far we travelled to get to the square; to that we add the best guess at how far it is to the target. The normal heuristic is the straight-line distance, but that requires a square root, so I use the Manhattan distance instead.

      This is what A* uses to try the shortest path first – each loop it takes the node with the lowest f score, i.e. the node likely to be closest to the goal presuming there were a straight-line path.

      If you look at the animation you can see how it works – it tries to get there directly; if that fails it restarts near the beginning, because that node will now have the lowest f score and still be open. The grid images show a simple case of the seeking function working around a corner.
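      The scoring described above is tiny in code – a sketch for a square grid, with nodes as (x, z) tuples:

```python
def manhattan(a, b):
    """Estimated distance to the goal: |dx| + |dz| grid steps, no square root."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])


def f_score(g, node, goal):
    """f = g (cost actually paid to reach `node`) + h (Manhattan estimate of cost remaining)."""
    return g + manhattan(node, goal)
```

      So for a node at (0, 0) that cost 5 to reach, with the goal at (3, 4), the heuristic is 3 + 4 = 7 and the f score is 12.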

      Does that help?

      • Jay Kay
        December 9, 2012 - 7:35 am Reply

        Thank you, yes. Though I am following your flow diagram rather than the package, and trying to implement it in a ‘node’ based array rather than the cube mesh. Following the diagram, my first step was **Add the node to the closed set**. At this point I assumed that the same node would be removed from the open list, and then several out-of-range errors popped up. After clearing them, I found that I hadn’t done any calculations for the G value. I have posted a question on Unity Answers if you wouldn’t mind having a look. Or would it be possible to update the flow diagram to show where to calculate the G value? I have tried researching (e.g. http://www.policyalmanac.org/games/aStarTutorial.htm) but for some reason am getting confused on the simplest things :(

  • whydoidoit
    December 9, 2012 - 7:55 am Reply

    Well yes, there is a step in the flow chart that says when to add the open node to the closed set (basically you remove the node with the lowest F score from the open set and add it to the closed set at the start of your seeking loop).

    The temporary cost step in the flow chart is the proposed G score for the neighbour (you then compare it with the G score in the open set and only update the route if this temporary cost is lower). In my code I initialise G scores with very high values so that I can always do a simple compare. This means that you are always looking for the lowest cost of reaching each node – you will find many ways of reaching a node but only one will be optimal.

    The first time you add a node to the open list, and each time you find a cheaper way of reaching it, you update the open set node’s G score and record where it came from to get that score. (That last bit doesn’t appear clearly in my flow chart.)

    Generally speaking you want to be using Dictionaries not arrays for as much as possible – poor A* implementations fall apart due to the cost of the lookup operations in the open and closed sets.

  • Bonifatius G. K
    December 11, 2012 - 5:57 pm Reply

    Hello, I’ve tried to implement that A* code in my script with a little tuning to make it fit. I’m using waypoints as a movement reference for the NPC (enemy) AI, but each time my player character moves, the enemy character stops by itself. This happens whenever the enemy character uses my player character as the A* target. I’ve tried removing the “LastDestination” code, but it still stops when the player character moves.
    Thanks a lot…

    • whydoidoit
      December 11, 2012 - 6:02 pm Reply

      I’d guess that you need to maintain moving on the old path until the new one is ready – just don’t clear the path until the new one arrives, then set it up there.

      • Bonifatius G. K
        December 12, 2012 - 12:56 pm Reply

        Yeah, that works. But now a new problem has come up: I need to re-check whether there is an obstacle between nodes when adding a neighbour to the openSet, and since no raycast can be done outside the main thread, what can I use instead?
        Thx..

  • Terry Morgan
    January 10, 2013 - 2:35 am Reply

    I’ve downloaded the unitypackage twice – the first time it was 45 MB, the second time 3 MB – I wonder why it doesn’t download completely.

  • Terry Morgan
    January 14, 2013 - 4:05 pm Reply

    Just a few things were missing in my package.
    It loads in Unity 4, but I put the fixes in Unity 3.5.
    Drag muktargame into the scene from monstergame/character1/muktargame.

    Make a sphere, or move the 1st person controller to where the sphere goes.
    The Walkable layer must be flagged ‘Terrain’ on the cube’s Build Graph script; also set cell size 0.5, width 302, height 335
    (this is the x and z dimension), then click ‘scan’.

    Click on ‘layer’ in the inspector, then ‘add layer’;
    I used layer 8, named ‘Terrain’.

    I got the muktar to follow a sphere, but it won’t follow an
    FPS character controller.

  • Kinos
    January 25, 2013 - 10:10 pm Reply

    When is part two coming? I need some more help with this.

    • whydoidoit
      January 26, 2013 - 8:55 am Reply

      Unfortunately deadlines in my day job are causing me to spend every waking moment working – I have 2/3 of the project written, just need the time to finish it and write the article.

  • Joseph
    January 26, 2013 - 2:47 pm Reply

    Patiently anticipating part 2 :)

    • agent
      April 9, 2013 - 3:32 pm Reply

      Me too, can’t wait :)

  • Joseph
    June 2, 2013 - 8:48 pm Reply

    When is part 2 coming? Feels like a novel with the last chapter missing. :(

  • Deadnote
    July 4, 2013 - 8:23 am Reply

    Hello, I want to ask: why can the mukhtar gameobject collide with the sphere, and how can I make the mukhtar gameobject collide with another gameobject? Please help.

  • Tomas
    September 3, 2013 - 12:58 am Reply

    Hello

    I tried to execute the A* in a background thread … even using your “Loom approach”, but Unity keeps complaining. Do you have any idea how to proceed? In my game I have 300+ separate units that need to plan their movement, and when computing A* I get serious glitches.

    Thanks!

    • whydoidoit
      September 3, 2013 - 2:52 am Reply

      Nothing in the pathfinding must touch any Unity object for that to work. Usually you have to take the data and put it in your own classes and arrays. It might just be where you pass back the route?
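      The snapshot-then-thread pattern described above could be sketched like this (an illustrative Python sketch of the idea, not Unity code; `pathfind` and `on_done` are assumed callbacks):

```python
import threading


def find_path_async(snapshot, pathfind, on_done):
    """Run `pathfind` on plain copied data in a background thread.

    `snapshot` must contain only plain values (tuples, lists, numbers) copied
    on the main thread, so the worker never touches a live engine object.
    The finished route is handed back via `on_done`; in an engine you would
    marshal that callback onto the main thread rather than call it directly.
    """
    def work():
        route = pathfind(snapshot)   # touches only the copied data
        on_done(route)

    t = threading.Thread(target=work)
    t.start()
    return t
```

      The key design point is that all the copying happens before the thread starts, and only the resulting route crosses back.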

  • gorda
    October 6, 2013 - 9:49 pm Reply

    hello, i got something to ask.
    how can our enemy automaticaly seeking path without waiting the player to stop its movement.
    i mean, its automatically follow the player.
    thanks in advance

    • whydoidoit
      October 6, 2013 - 11:58 pm Reply

      Well the best plan would be to seek a path to the player’s current or predicted location frequently (but not every frame).

      So perhaps every couple of seconds it takes the player’s current position and movement vector, predicts the player’s location, and then seeks a path to that.
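      That prediction step is just a linear extrapolation – a sketch, with illustrative names and an assumed repath interval:

```python
def predicted_target(player_pos, player_velocity, lookahead_seconds):
    """Aim at where the player will be: position + velocity * lookahead."""
    return tuple(p + v * lookahead_seconds
                 for p, v in zip(player_pos, player_velocity))


def should_repath(time_since_last_path, repath_interval=2.0):
    """Re-seek a path every couple of seconds, not every frame."""
    return time_since_last_path >= repath_interval
```

      Each time `should_repath` fires, the enemy would request a new path to `predicted_target(...)` while continuing along its old path until the new one arrives.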

  • kaku
    October 11, 2013 - 12:48 am Reply

    So I imported the package and opened the scene “Medieval Village”. There seems to be something wrong. The Cube game object has no “BuildGraph” script attached to it. There is no enemy in the scene hierarchy either. Is that right ? Is there a list of steps I should follow before I can run it ? I am using Unity 4 btw.

    • LaidMonkey
      April 6, 2014 - 9:00 pm Reply

      I have the same problem – did you ever find a solution?

  • Nekark
    March 6, 2014 - 2:43 am Reply

    Thanks a lot for this tutorial – it helped me to understand A*. I’m still “processing” the code.

    I hope you have time for the #2.

    Thanks again!!

    • whydoidoit
      March 6, 2014 - 6:07 am Reply

      Me too – I really want to write the tutorial as the basis of a game I am going to make.

  • Hurricane
    August 12, 2014 - 10:45 am Reply

    I have the same problem like Kaku and LaidMonkey. I opened the Medieval Village scene with Unity 4, but I didn’t find the “muktargame” and the “Cube” doesn’t have the “Build Graph” script. I tried to add them and followed your video, but it doesn’t work. Does your method work only for Unity 3?

    • whydoidoit
      August 12, 2014 - 11:29 am Reply

      I haven’t checked it in 4, but I will.

  • Varun V P
    September 3, 2014 - 2:13 pm Reply

    Hey, it’s a really nice tutorial.

    I wanted to know whether these scripts can run on mobile platforms. The script for baking the NavMesh could be run on a PC, but what about the Seeker and BuildGraph scripts? Also, it would be nice if you could have a separate section on mobile game development, or at least mention what modifications should be made to run these projects on mobile. Thanks ;-)

    • whydoidoit
      September 4, 2014 - 6:34 pm Reply

      I use it in a couple of mobile games – seems to work pretty well. I guess I’ve probably made a couple of optimisations in my own code but nothing significant.
