Collection of Game AI

Overview of Demos

The demos listed below are smaller milestone projects made during a semester of the course Gameplay Programming. The lecturers provided a framework (see credits) on which all of these small AI demos are built. Some projects contain more features than the original assignments asked for, and additional code snippets are included where relevant.

Flocking (Different Steering Behaviors)

Combined, Blended, and Priority Steering Behaviors

Starting from the basics, simple steering behaviors and linear algebra were introduced to make an AI behave in a certain way: seek, flee, pursue, and wander are some examples. Those alone are not very advanced, so more interesting cases arise when you combine several steering behaviors and assign each one a weight that determines how strongly it contributes (blended steering). To add more complexity, you can also prioritize certain behaviors: for example, the AI evades an enemy whenever that enemy is in range, and otherwise falls back to its regular steering rules (priority steering).

Simple Steering Behaviors in order: Seek - Flee - Arrive - Face
Combined Steering Behaviors (Red: Seek & Wander, Yellow: Evade & Wander)
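As a rough sketch of the blending and prioritizing described above (with `Vec2` as a simplified stand-in for the framework's Elite::Vector2, and behavior outputs reduced to plain vectors; names are illustrative, not the framework's):

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Minimal 2D vector, a stand-in for the framework's Elite::Vector2
struct Vec2 { float x{}, y{}; };
Vec2 operator+(Vec2 a, Vec2 b) { return { a.x + b.x, a.y + b.y }; }
Vec2 operator*(float s, Vec2 v) { return { s * v.x, s * v.y }; }

// Blended steering: a weighted sum of each behavior's output,
// normalized by the total weight
Vec2 BlendSteering(const std::vector<std::pair<float, Vec2>>& weightedOutputs)
{
    Vec2 total{};
    float totalWeight = 0.f;
    for (const auto& [weight, output] : weightedOutputs)
    {
        total = total + weight * output;
        totalWeight += weight;
    }
    if (totalWeight > 0.f)
        total = (1.f / totalWeight) * total;
    return total;
}

// Priority steering: evaluate behaviors from most to least important and
// take the first one that produces a meaningful (non-zero) output
Vec2 PrioritySteering(const std::vector<Vec2>& outputsByPriority)
{
    for (const Vec2& output : outputsByPriority)
        if (output.x * output.x + output.y * output.y > 1e-6f)
            return output;
    return {};
}
```

An evade behavior, for instance, would sit first in the priority list and only return a non-zero output while the enemy is actually in range.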

Flocking & Spatial Partitioning

Flocking is no more than a combination of specific steering behaviors with the right weights assigned to them. Each “bird” or AI agent has a certain radius around it in which other birds reside. This radius we call the neighborhood, and it’s based on that neighborhood that we calculate a blend of the following steering behaviors that influence one bird’s behavior: separation (steering away from neighbors that get too close), cohesion (steering toward the average position of the neighborhood), and alignment/velocity match (steering toward the average velocity of the neighborhood).

Additionally, I added a feature to base the neighborhood on what each individual bird sees (with a customizable vision cone), as well as some optimization techniques. The first is spatial partitioning, which divides the map into smaller partitions and keeps track of which partition each bird is in. That way, instead of checking every bird to build a neighborhood, we only need to check the birds in the partitions that overlap with the neighborhood radius. The second is a memory pool that stores the birds in a neighborhood, instead of allocating/deallocating every time a bird moves to a different neighborhood.

Flocking Behavior
Visualization of Debug Information and Flocking Customization
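The cell lookup that spatial partitioning relies on can be sketched as follows; the function name and parameters are illustrative, not the framework's:

```cpp
#include <algorithm>
#include <cassert>

// Maps a world position to the index of the partition cell containing it,
// for a square world of size worldSize divided into rows x cols cells
int PositionToCellIndex(float x, float y, float worldSize, int rows, int cols)
{
    // Clamp so positions on (or outside) the world border still map to a valid cell
    int col = std::clamp(static_cast<int>(x / (worldSize / cols)), 0, cols - 1);
    int row = std::clamp(static_cast<int>(y / (worldSize / rows)), 0, rows - 1);
    return row * cols + col;
}
```

Updating an agent's cell then amounts to comparing the index for its old position with the index for its new one, and moving the agent between the two cell lists when they differ.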

Code Snippets

Flock Update Loop
void Flock::Update(float deltaT)
{
	//Update the evading steering behaviour with newest position every frame
	m_pEvade->SetTarget(m_pAgentToEvade->GetPosition());
	
	if (m_UseSpacePartitioning)
	{
		//Loop over all the agents using space partitioning
		for (size_t i = 0; i < m_Agents.size(); ++i)
		{
			//Update which agents will affect the current agent's steering behaviour
			m_pCellSpace->RegisterNeighbors(m_Agents[i], m_NeighborhoodRadius, (i == m_IndexAgentToObserve), m_UseVisionCone, m_VisionConeAngle);
			m_Neighbors = m_pCellSpace->GetNeighbors();
			m_NrOfNeighbors = m_pCellSpace->GetNrOfNeighbors();
			
			// Update debug info for observed agent
			UpdateDebugInformation(m_Agents[i]);

			//If the agent entered a new cell, remove it from the old one and add it to the new one before calculating its steering behaviour
			m_pCellSpace->UpdateAgentCell(m_Agents[i], m_OldAgentPositions[i]);

			//Calculate and apply steering behaviour 
			m_Agents[i]->Update(deltaT);

			//We need to update the old agents positions, so it carries over to the next frame as the updated old position
			m_OldAgentPositions[i] = m_Agents[i]->GetPosition();

			//To make sure agent stays within boundaries of the world
			m_Agents[i]->TrimToWorld({ 0, 0 }, { m_WorldSize, m_WorldSize });
		}
	}
	else
	{
		// Loop over all the agents
		for (SteeringAgent* pAgent : m_Agents)
		{
			//Update which agents will affect the current agent's steering behaviour
			RegisterNeighbors(pAgent);

			//Calculate and apply steering behaviour 
			UpdateDebugInformation(pAgent);
			pAgent->Update(deltaT);

			//To make sure agent stays within boundaries of the world
			pAgent->TrimToWorld({ 0, 0 }, { m_WorldSize, m_WorldSize });
		}
	}
}
Register Neighbors in Radius
void Flock::RegisterNeighbors(SteeringAgent* pObservedAgent)
{
	Elite::Vector2 agentPos = pObservedAgent->GetPosition();

	//Since we're using a memory/object pool for our neighbors, we can't know the number of neighbors from the array's length
	//That's why we manually count the neighbors and store the count in a separate value
	m_NrOfNeighbors = 0;

	//Register the agents neighboring the currently evaluated agent
	for (SteeringAgent* pNeighborAgent : m_Agents)
	{
		if (pNeighborAgent == pObservedAgent)
			continue;

		//As an optimization, to avoid a square root, we take the squared distance between the agents
		//and compare it to the squared value of the radius a neighbor should be within to register
		float distance = DistanceSquared(pNeighborAgent->GetPosition(), agentPos);
		if (distance < Square(m_NeighborhoodRadius))
		{
			//When using a vision cone to determine the neighborhood we need an additional check
			if (m_UseVisionCone && !IsTargetInVisionCone(pObservedAgent, pNeighborAgent, m_VisionConeAngle))
				continue;

			//Overwrite neighbors information and increment the number of neighbors (concept of a memory/object pool)
			m_Neighbors[m_NrOfNeighbors++] = pNeighborAgent;
		}
	}
}
Register Neighbors in Radius (Cellspace Partitioning)
void CellSpace::RegisterNeighbors(SteeringAgent* pObservedAgent, float queryRadius, bool isObservedAgent, bool useConeVision, float coneAngle)
{
	Elite::Vector2 pos = pObservedAgent->GetPosition();
	
	//Since we're using a memory/object pool for our neighbors, we can't know the number of neighbors from the array's length
	//That's why we manually count the neighbors and store the count in a separate value
	m_NrOfNeighbors = 0;

	//Cornerpoints of square around radius
	Elite::Vector2 bottomLeft{ pos.x - queryRadius, pos.y - queryRadius };
	Elite::Vector2 topRight{ pos.x + queryRadius, pos.y + queryRadius };

	//Looping over cells to determine which cells our radius-square is overlapping with
	for (Cell& cell : m_Cells)
	{
		Elite::Rect cellSquare = cell.boundingBox;

		//First check is to see if either of the left/right corners is still between the range of the left/right corners of the square
		//Second check does the same for the bottom/upper corners
		if (cellSquare.bottomLeft.x + cellSquare.width > bottomLeft.x 
			&& cellSquare.bottomLeft.x < topRight.x)
		{
			if (cellSquare.bottomLeft.y + cellSquare.height > bottomLeft.y
				&& cellSquare.bottomLeft.y < topRight.y)
			{
				//Means it overlaps this square
				//Render the cell the observed agent overlaps
				if (m_CanDebugRenderPartitions && isObservedAgent)
					DEBUGRENDERER2D->DrawPolygon(&cell.GetRectPoints()[0], 4, { 1,1,0,1 }, -0.5f); //color yellow

				//Check this cell's agents to see if there's any agent close enough to be a neighbor
				for (SteeringAgent* pCellAgent : cell.agents)
				{
					if (pObservedAgent == pCellAgent)
						continue;

					//As an optimization, to avoid a square root, we take the squared distance between the agents
					//and compare it to the squared value of the radius a neighbor should be within to register
					float distance = DistanceSquared(pos, pCellAgent->GetPosition());
					if (distance < Elite::Square(queryRadius))
					{
						//When using a vision cone to determine the neighborhood we need an additional check
						if (useConeVision && !IsTargetInVisionCone(pObservedAgent, pCellAgent, coneAngle))
							continue;

						//Overwrite neighbors information and increment the number of neighbors (concept of a memory/object pool)
						m_Neighbors[m_NrOfNeighbors++] = pCellAgent;
					}
				}
			}
		}
	}
}
Target In Vision Cone Check
bool Flock::IsTargetInVisionCone(SteeringAgent* pAgent, SteeringAgent* pTarget, float coneAngle)
{
	//Set a min dot check to determine if angle between two vectors would be in range of the cone
	m_MinDot = cos(Elite::ToRadians(coneAngle / 2.f));

	//We want to calculate the angle between the forward (steeringForce) and the vector to the agent
	Elite::Vector2 toTarget = GetNormalized(pTarget->GetPosition() - pAgent->GetPosition());
	Elite::Vector2 forward = GetNormalized(pAgent->GetLinearVelocity());

	float dot = Elite::Dot(forward, toTarget);

	return (dot >= m_MinDot);
}
Steering Behavior: Separation
//*********************
//SEPARATION (FLOCKING)
Seperation::Seperation(Flock* pFlock, float minDistanceToSeperate, float decayCoefficient)
	: m_pFlock(pFlock)
	, m_MinDistanceToSeperate(minDistanceToSeperate)
	, m_DecayCoefficient(decayCoefficient)
	, m_SeperationVelocity()
{
}

SteeringOutput Seperation::CalculateSteering(float deltaT, SteeringAgent* pAgent)
{
	//Quick check to see if we have neighbors
	int nrOfNeighbors = m_pFlock->GetNrOfNeighbors();
	if (nrOfNeighbors == 0)
		return SteeringOutput();

	//Initialize variables needed
	float maxLinearSpeed = pAgent->GetMaxLinearSpeed();

	auto& neighbors = m_pFlock->GetNeighbors();
	Elite::Vector2 totalVelocity{};

	float nrCloseNeighbors{};
	for (int i = 0; i < nrOfNeighbors; ++i)
	{
		//Check if the neighbor is close enough to start separating from
		Elite::Vector2 dir = pAgent->GetPosition() - neighbors[i]->GetPosition();
		float distance = dir.Magnitude();

		if (distance < m_MinDistanceToSeperate)
		{
			//Inverse square law, the closer the more we need to flee
			float strength = std::min(m_DecayCoefficient / (distance * distance), maxLinearSpeed);

			dir.Normalize();
			totalVelocity += strength * dir;

			++nrCloseNeighbors;
		}
	}

	//Guard against the case where we have neighbors, but none within separation distance
	if (nrCloseNeighbors == 0.f)
		return SteeringOutput();

	//The forces are weighted depending on who we're closer to,
	//though taking an average and fleeing from that average velocity also works
	Elite::Vector2 avgVelocity = totalVelocity / nrCloseNeighbors;
	avgVelocity.Normalize();
	avgVelocity *= maxLinearSpeed;

	//Finally, make a velocity from where we're going to where we need to separate to
	m_SeperationVelocity = avgVelocity - pAgent->GetLinearVelocity();
	if (m_SeperationVelocity.Magnitude() > maxLinearSpeed) //Check if we're not exceeding max speed, else we cap it
	{
		m_SeperationVelocity.Normalize();
		m_SeperationVelocity *= maxLinearSpeed;
	}

	//Set our steering to the separation velocity
	SteeringOutput steering{};
	steering.LinearVelocity = m_SeperationVelocity;

	//Return steering
	return steering;
}
Steering Behavior: Cohesion
//*******************
//COHESION (FLOCKING)
Cohesion::Cohesion(Flock* pFlock)
	: m_pFlock(pFlock)
{
}

SteeringOutput Cohesion::CalculateSteering(float deltaT, SteeringAgent* pAgent)
{
	SteeringOutput steering{};

	//Get to target velocity
	steering.LinearVelocity = m_pFlock->GetAverageNeighborPos() - pAgent->GetPosition();
	steering.LinearVelocity.Normalize();
	steering.LinearVelocity *= pAgent->GetMaxLinearSpeed();

	return steering;
}
Steering Behavior: Alignment/Velocity Match
//*************************
//VELOCITY MATCH (FLOCKING)
VelocityMatch::VelocityMatch(Flock* pFlock)
	: m_pFlock(pFlock)
{
}

SteeringOutput VelocityMatch::CalculateSteering(float deltaT, SteeringAgent* pAgent)
{
	SteeringOutput steering{};

	//Get to target velocity
	steering.LinearVelocity = m_pFlock->GetAverageNeighborVelocity();
	steering.LinearVelocity.Normalize();
	steering.LinearVelocity *= pAgent->GetMaxLinearSpeed();

	return steering;
}

A* Pathfinding

This demo displays the power of finding the optimal path from a start node to a goal node while taking a few obstacles into consideration. A* is not the only search algorithm capable of doing this; during the semester we also explored graph theory and breadth-first search. The gif below shows what this demo does.

Grey tiles are neutral nodes with no additional cost of traveling.
Brown tiles are considered mud and so take longer to travel over.
Blue tiles are considered water and cannot be crossed whatsoever.
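The `GetHeuristicCost` call used in the A* snippet is supplied by the framework; as a rough sketch, two common grid heuristics look like this (function names are mine, not the framework's). As long as the heuristic never overestimates the remaining cost, A* still returns an optimal path:

```cpp
#include <cassert>
#include <cmath>

// Manhattan distance: number of horizontal plus vertical steps between
// two grid cells; suited to 4-directional movement
float ManhattanHeuristic(int x1, int y1, int x2, int y2)
{
    return static_cast<float>(std::abs(x1 - x2) + std::abs(y1 - y2));
}

// Euclidean distance: straight-line distance between two grid cells;
// admissible for any movement scheme
float EuclideanHeuristic(int x1, int y1, int x2, int y2)
{
    float dx = static_cast<float>(x1 - x2);
    float dy = static_cast<float>(y1 - y2);
    return std::sqrt(dx * dx + dy * dy);
}
```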

Code Snippets

A* Pathfinding Algorithm on Templated Nodes/Grid
template <class T_NodeType, class T_ConnectionType>
std::vector<T_NodeType*> AStar<T_NodeType, T_ConnectionType>::FindPath(T_NodeType* pStartNode, T_NodeType* pGoalNode)
{
	//1. Starting variables
	vector<T_NodeType*> path;
	vector<NodeRecord> openList;
	vector<NodeRecord> closedList;
	NodeRecord currentRecord;

	//Kickstart the loop with this startRecord
	NodeRecord startRecord;
	startRecord.pNode = pStartNode;
	startRecord.pConnection = nullptr;
	startRecord.estimatedTotalCost = GetHeuristicCost(pStartNode, pGoalNode);

	openList.push_back(startRecord);

	bool isInvalidPath = true;
	//2. We will continue searching for a connection that leads to the end node
	while (!openList.empty())
	{
		//2.A Get connection with lowest f-score
		auto it = std::min_element(openList.begin(), openList.end());
		currentRecord = *it;
		
		//2.B Check if it leads to end node
		if (currentRecord.pNode == pGoalNode)
		{
			isInvalidPath = false;
			break;
		}
			
		//2.C If not, get all connections of the current record's node and loop over them
		int currentIndex = currentRecord.pNode->GetIndex();
		auto connList = m_pGraph->GetNodeConnections(currentIndex);
		for (T_ConnectionType* conn : connList)
		{
			//2.D Closed List check
			if (IsCheaperConnectionInList(conn, closedList, currentRecord.costSoFar))
				continue;

			//2.E Open List check
			if (IsCheaperConnectionInList(conn, openList, currentRecord.costSoFar))
				continue;

			//2.F Create nodeRecord and push to list
			NodeRecord nodeRecord;

			//Making sure we're getting the node this connection is looking at
			nodeRecord.pNode = m_pGraph->GetNode(conn->GetTo()); 
			nodeRecord.pConnection = conn;

			//Need to take into account the previous gCost to get here plus the current connCost
			nodeRecord.costSoFar = currentRecord.costSoFar + conn->GetCost();
			
			//fCost = gCost + hCost
			nodeRecord.estimatedTotalCost = nodeRecord.costSoFar + GetHeuristicCost(nodeRecord.pNode, pGoalNode); 
			
			//Put nodeRecord in list and repeat process
			openList.push_back(nodeRecord);
		}

		//2.G Add the current record to the closedList
		closedList.push_back(currentRecord);

		//2.H Remove the current record from the openList
		auto itToRemove = std::remove_if(openList.begin(), openList.end(), [&currentRecord](const NodeRecord& nr) { return currentRecord == nr; });
		openList.erase(itToRemove, openList.end());
	}

	if (isInvalidPath)
		return path;

	//3. Reconstruct path from last connection to start node
	// The last connection you made in the for loop to fill up the closedList should be the connection to the goal node (=currentRecord)
	// Work back from this connection (using the get from) to the start node
	while (currentRecord.pNode != pStartNode)
	{
		path.push_back(currentRecord.pNode);
		for (NodeRecord& nr : closedList)
		{
			if (nr.pNode->GetIndex() == currentRecord.pConnection->GetFrom())
			{
				currentRecord = nr;
				break;
			}
		}
	}

	//Don't forget to put in the start node + reverse it
	path.push_back(pStartNode);
	std::reverse(path.begin(), path.end());

	return path;
}

Navigation Meshes

Something similar to, but more advanced than, grid-based A* pathfinding is finding a path around geometric meshes: in other words, using navigation meshes. Here we need to simplify the representation of our world into a walkable triangulated surface for our AI. As mentioned before, the framework for this demo was provided, including the ear clipping algorithm that creates this triangulated world for us. What was left was implementing the navigation mesh pathfinding on top of this data.

The previously mentioned A* serves as the main pathfinding technique between each segment’s center (path node) found along the way. The Simple Stupid Funnel Algorithm is then used to find the corner points along the way (portals) and thus produce a more direct, optimized path (see visualization).
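The Cross comparisons the funnel algorithm makes reduce to a signed-area test; a standalone sketch (the function name is illustrative):

```cpp
#include <cassert>

// Twice the signed area of triangle (a, b, c). Its sign tells on which side
// of the directed line a -> b the point c lies: positive means left,
// negative means right, zero means collinear. The funnel algorithm uses
// this to decide whether a new portal point narrows the funnel or crosses
// over the opposite leg.
float TriArea2(float ax, float ay, float bx, float by, float cx, float cy)
{
    return (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
}
```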

Code Snippets

Finding Path Algorithm
std::vector<Elite::Vector2> App_NavMeshGraph::FindPath(Elite::Vector2 startPos, Elite::Vector2 endPos)
{
	//Create the path to return
	std::vector<Elite::Vector2> finalPath{};

	//Get the start and endTriangle
	const Elite::Triangle* startTriangle = m_pNavGraph->GetNavMeshPolygon()->GetTriangleFromPosition(startPos);
	const Elite::Triangle* endTriangle{ m_pNavGraph->GetNavMeshPolygon()->GetTriangleFromPosition(endPos) };
	
	//If we DONT have valid start/end triangles return empty 
	if (!startTriangle || !endTriangle)
		return finalPath;

	//If they are the same -> return with end node added in list
	if (startTriangle == endTriangle)
	{
		finalPath.push_back(endPos);
		return finalPath;
	}

	//=> Start looking for a path
	//Copy the graph
	auto graphCopy = m_pNavGraph->Clone();
	
	//Create extra node for the Start Node (Agent's position)
	Elite::NavGraphNode* startNode = AddNavigationPathNode(graphCopy.get(), m_pNavGraph, startPos);

	//Create extra node for the End Node
	Elite::NavGraphNode* endNode = AddNavigationPathNode(graphCopy.get(), m_pNavGraph, endPos);

	//Run A star on new graph
	auto pathfinder = AStar<NavGraphNode, GraphConnection2D>(graphCopy.get(), Elite::HeuristicFunctions::Euclidean);
	auto path = pathfinder.FindPath(startNode, endNode);
	
	//Debug Nodes or not
	if (sDrawNonOptimisedPath)
	{
		m_NrDebugNodePositions = (int)path.size();
		for (size_t i = 0; i < path.size(); ++i)
		{
			m_DebugNodePositions[i] = path[i]->GetPosition();
		}
	}

	//Extra: run the optimiser over the A* path; make sure the A* path is correct before enabling this
	m_Portals = SSFA::FindPortals(path, m_pNavGraph->GetNavMeshPolygon());
	finalPath = SSFA::OptimizePortals(m_Portals);

	return finalPath;
}
Adding a Navigation Path Node
Elite::NavGraphNode* App_NavMeshGraph::AddNavigationPathNode(Elite::IGraph<Elite::NavGraphNode, Elite::GraphConnection2D>* pGraphCopy, Elite::NavGraph* pNavGraph, const Elite::Vector2& pos)
{
	//Create node and add to copy of graph
	//Note: lineIndex isn't used in SSFA and generally the start/end nodes' lineIndex isn't used
	int nextFreeIndex = pGraphCopy->GetNextFreeNodeIndex();
	int lineIndex = -1; 
	Elite::NavGraphNode* node = new Elite::NavGraphNode(nextFreeIndex, lineIndex, pos);
	pGraphCopy->AddNode(node);
	
	//Determine what triangle this pos is in, and connect our start node with the nodes in this triangle
	// = our neighboring nodes
	const Elite::Triangle* triangle = pNavGraph->GetNavMeshPolygon()->GetTriangleFromPosition(pos);

	for (int triangleLineIndex : triangle->metaData.IndexLines)
	{
		int nodeIndex = pNavGraph->GetNodeIdxFromLineIdx(triangleLineIndex);
		if (nodeIndex == invalid_node_index)
			continue;

		pGraphCopy->AddConnection(new Elite::GraphConnection2D
		(
			nextFreeIndex,
			nodeIndex,
			Distance(node->GetPosition(), pGraphCopy->GetNode(nodeIndex)->GetPosition())
		));
	}
	return node;
}
Optimizing Path through Simple Stupid Funnel Algorithm
static std::vector<Elite::Vector2> OptimizePortals(const std::vector<Portal>& portals)
{
	//P1 == right point of portal, P2 == left point of portal
	std::vector<Elite::Vector2> vPath = {};
	auto apex = portals[0].Line.p1;
	auto apexIndex = 0, leftLegIndex = 1, rightLegIndex = 1;
	auto rightLeg = portals[rightLegIndex].Line.p1 - apex;
	auto leftLeg = portals[leftLegIndex].Line.p2 - apex;

	for (unsigned int i = 1; i < static_cast<unsigned int>(portals.size()); ++i)
	{
		//Local
		const auto& portal = portals[i];

		//--- RIGHT CHECK ---
		//1. See if moving funnel inwards - RIGHT
		Vector2 right = portal.Line.p1 - apex;
		if (Cross(rightLeg, right) >= 0)
		{
			//2. See if new line degenerates a line segment - RIGHT
			if (DistanceSquared(apex, rightLeg) < FLT_EPSILON || Cross(right, leftLeg) > 0) //if not crossing over left leg
			{
				rightLeg = right;
				rightLegIndex = i;
			}
			else //crossing over left leg
			{
				//Left leg becomes new apex (= apex + leftLeg)
				apex += leftLeg;
				apexIndex = leftLegIndex;

				//"reset" portal and create new iterator
				unsigned int newIt = apexIndex + 1;

				i = newIt;
				leftLegIndex = newIt;
				rightLegIndex = newIt;

				//Right over left -> insert left in path and restart the loop from portal left point (= apex right now)
				vPath.push_back(apex);
				if (newIt < static_cast<unsigned int>(portals.size()))
				{
					rightLeg = portals[rightLegIndex].Line.p1 - apex;
					leftLeg = portals[leftLegIndex].Line.p2 - apex;
					continue;
				}
			}
		}
			
		//--- LEFT CHECK ---
		//1. See if moving funnel inwards - LEFT
		Vector2 left = portal.Line.p2 - apex;
		if (Cross(leftLeg, left) <= 0)
		{
			//2. See if new line degenerates a line segment - LEFT
			if (DistanceSquared(apex, rightLeg) < FLT_EPSILON || Cross(left, rightLeg) < 0) //not crossing
			{
				leftLeg = left;
				leftLegIndex = i;
			}
			else //crossing
			{
				//Right leg becomes new apex (= apex + rightLeg)
				apex += rightLeg;
				apexIndex = rightLegIndex;

				//"reset" portal and create new iterator
				unsigned int newIt = apexIndex + 1;

				i = newIt;
				leftLegIndex = newIt;
				rightLegIndex = newIt;

				//Left over right -> insert right to path and restart the loop from portal right point
				vPath.push_back(apex);
				if (newIt < static_cast<unsigned int>(portals.size()))
				{
					rightLeg = portals[rightLegIndex].Line.p1 - apex;
					leftLeg = portals[leftLegIndex].Line.p2 - apex;
					continue;
				}
			}
		}
	}
	
	// Add the last path point (you can use the last portal's p1 or p2, as both are equal to the end point of the path)
	vPath.push_back(portals.back().Line.p1);
	return vPath;
}

Agar.io in 3 Ways

About the game Agar.io

For anyone unfamiliar with the game, the idea is pretty simple. The player controls a colored circle that grows bigger by overlapping with smaller circles (either food circles that do nothing, or enemy circles searching for prey as well). The main goal is to become as big as you can and remain the biggest circle for as long as possible. There are additional mechanics, like splitting yourself up into smaller circles and launching yourself at enemies, but those aren’t implemented in this demo.

Agar.io (Finite State Machines)

The first approach to this AI uses finite state machines. The concept of my recreation is a field of dumb AIs that just wander around, alongside one smarter AI. This smart AI understands the goal and knows how to seek, wander, and evade when appropriate. Using a finite state machine, we can track the game's state and define transitions between behaviors. For example, the AI starts in a wandering state. If it sees a smaller enemy circle in range, it transitions into pursuing that enemy. If, while pursuing, a bigger enemy comes very close, we transition into an evading state and run away. Finally, if enemies are too small and too fast to catch, we either seek food or keep wandering. Each state defines transitions to other states, which works well for a smaller number of possibilities. See the code snippet below!

White indicates seeking food, green indicates pursuing smaller enemies and red indicates bigger enemies
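A minimal sketch of the FSM update step itself, which the framework's FiniteStateMachine handles internally (states are plain ints here, and the names are simplified stand-ins; the real version additionally calls OnEnter/OnExit and passes the blackboard around):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Each frame, walk the transitions registered for the current state and
// switch to the first one whose condition holds
struct MiniFSM
{
    struct Transition
    {
        int fromState;
        int toState;
        std::function<bool()> condition;
    };

    int currentState{};
    std::vector<Transition> transitions;

    void Update()
    {
        for (const Transition& t : transitions)
        {
            if (t.fromState == currentState && t.condition())
            {
                currentState = t.toState; //first matching transition wins
                return;
            }
        }
    }
};
```

Registering a transition like Wander -> Pursue then boils down to appending an entry whose condition queries the world, just as the HasFoodCloseby class below queries the blackboard.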

Code Snippets FSM

Setup of FSM
//Create Uber Agent
Elite::Vector2 randomPos = randomVector2(-m_TrimWorldSize, m_TrimWorldSize);
Color customColor = Color{ randomFloat(), randomFloat(), randomFloat() };
m_pUberAgent = new AgarioAgent(randomPos, customColor);

//Need to add Smart Finite State Machine
//Create blackboard
Blackboard* pBlackboard = new Blackboard();
pBlackboard->AddData("Agent", m_pUberAgent);
pBlackboard->AddData("FoodVec", &m_pFoodVec);
pBlackboard->AddData("ClosestFood", m_pFoodVec[0]);
pBlackboard->AddData("EnemyVec", &m_pAgentVec);
pBlackboard->AddData("ClosestSmallEnemy", m_pAgentVec[0]);
pBlackboard->AddData("ClosestBigEnemy", m_pAgentVec[0]);

//Create FSM with default state
FiniteStateMachine* pSmartFSM = new FiniteStateMachine(pWanderState, pBlackboard);

//===Priority 1: EVADING BIG ENEMIES===
//=====================================
//Create transition: Wander -> EvadeBigEnemy (if big enemy is nearby)
pSmartFSM->AddTransition(pWanderState, pEvadeBiggestEnemy, pHasBiggerEnemyCloseby);

//Create transition: EvadeBigEnemy -> Wander (if no big enemy is nearby)
pSmartFSM->AddTransition(pEvadeBiggestEnemy, pWanderState, pHasNoBiggerEnemyCloseby);

//Create transition: SeekFood -> EvadeBigEnemy (if big enemy is nearby)
pSmartFSM->AddTransition(pSeekFoodState, pEvadeBiggestEnemy, pHasBiggerEnemyCloseby);

//Create transition: PursueSmallEnemy -> EvadeBigEnemy (if big enemy is nearby)
pSmartFSM->AddTransition(pPursueSmallestEnemy, pEvadeBiggestEnemy, pHasBiggerEnemyCloseby);


//===Priority 2: PURSUING SMALLEST ENEMY===
//==========================================

//Create transition: Wander -> PursueSmallEnemy (if small enemy is nearby)
pSmartFSM->AddTransition(pWanderState, pPursueSmallestEnemy, pHasSmallerEnemyNearby);

//Create transition: PursueSmallEnemy -> Wander (if no small enemy is nearby)
pSmartFSM->AddTransition(pPursueSmallestEnemy, pWanderState, pHasNoSmallerEnemyCloseby);

//Create transition: SeekFood -> PursueSmallEnemy (if small enemy is nearby)
pSmartFSM->AddTransition(pSeekFoodState, pPursueSmallestEnemy, pHasSmallerEnemyNearby);


//===Priority 3: SEEKING FOOD===
//==============================
//Create transition: Wander -> SeekFood (if food is nearby)
pSmartFSM->AddTransition(pWanderState, pSeekFoodState, pHasFoodCloseby);

//Create transition: SeekFood -> Wander (if no food is nearby)
pSmartFSM->AddTransition(pSeekFoodState, pWanderState, pHasNoFoodCloseby);

//Create transition: SeekFood -> SeekFood (if closer food is nearby)
pSmartFSM->AddTransition(pSeekFoodState, pSeekFoodState, pHasCloserFoodNearby);


m_pUberAgent->SetDecisionMaking(pSmartFSM);
Example State/Transition Class (Seeking Food)
//******************
//Seeking Food State
class SeekFoodState : public Elite::FSMState
{
public:
	SeekFoodState() : FSMState()/*, m_pEmptyFood(new AgarioFood({ FLT_MAX, FLT_MAX }))*/ {};

	virtual void OnEnter(Blackboard* pBlackboard) override
	{
		AgarioAgent* pAgent = nullptr;
		bool dataAvailable = pBlackboard->GetData("Agent", pAgent);
		if (!dataAvailable || !pAgent)
			return;

		AgarioFood* pClosestFood = nullptr;
		dataAvailable = pBlackboard->GetData("ClosestFood", pClosestFood);
		if (!dataAvailable || !pClosestFood)
			return;

		pAgent->SetToSeek(pClosestFood->GetPosition());
	}
};

//******************
//Transition to Seek Food
class HasFoodCloseby : public Elite::FSMTransition
{
public:
	HasFoodCloseby() : FSMTransition() {};
	void SetAdditionalRange(float multiplier) { m_AdditionalRange = multiplier; }
	void SetRadiusColor(const Elite::Color& color) { m_Color = color; }

	virtual bool ToTransition(Blackboard* pBlackboard) const override
	{
		//Used for checks
		bool isAvailable;

		//Get agent from blackboard
		AgarioAgent* pUberAgent = nullptr;
		isAvailable = pBlackboard->GetData("Agent", pUberAgent);
		if (!isAvailable || !pUberAgent)
			return false;

		//Get food info from the blackboard
		std::vector<AgarioFood*>* pFoodVec = nullptr;
		isAvailable = pBlackboard->GetData("FoodVec", pFoodVec);
		if (!isAvailable || !pFoodVec)
			return false;

		//White circle
		DEBUGRENDERER2D->DrawCircle(pUberAgent->GetPosition(), (pUberAgent->GetRadius() + m_AdditionalRange), m_Color, -.4f);

		//We need the food that is most nearby (within radius)
		auto minIt = std::min_element(pFoodVec->begin(), pFoodVec->end(), [this, &pUberAgent](AgarioFood* lhs, AgarioFood* rhs)
			{
				return (Elite::DistanceSquared(lhs->GetPosition(), pUberAgent->GetPosition()) < Elite::DistanceSquared(rhs->GetPosition(), pUberAgent->GetPosition()));
			});

		//No food exists
		if (minIt == pFoodVec->end())
			return false;

		//Food will be destroyed so no food closeby -> return false
		AgarioFood* pClosestFood = *minIt;
		if (pClosestFood->CanBeDestroyed())
			return false;

		//If food is in radius 
		if (Elite::DistanceSquared(pClosestFood->GetPosition(), pUberAgent->GetPosition()) < Elite::Square(m_AdditionalRange + pUberAgent->GetRadius()))
		{
			//After that, we can conclude the food is closeby enough
			pBlackboard->ChangeData("ClosestFood", pClosestFood);

			//Draw circle around our target in color of our function
			DEBUGRENDERER2D->DrawSolidCircle(pClosestFood->GetPosition(), 2.f, { 0,0 }, m_Color);

			//Return true
			return true;
		}
		return false;
	}

protected:
	float m_AdditionalRange = 25.f;
	Elite::Color m_Color = Elite::Color(1.f, 1.f, 1.f);
};

Agar.io (Behavior Trees)

A second, conceptually similar approach uses a behavior tree. Instead of being in a state and checking conditions to transition to a new state, here we traverse a tree of possibilities, ordered by importance, to find the behavior we’re looking for. For example, this AI's first priority is to evade big enemies that can kill us; if that doesn't apply, we move down to the next sequence and check whether any enemy is small enough to pursue and catch. If not, we move down again and check whether there’s any food to seek. Lastly, if none of these cases apply, we just wander. Behavior trees can be superior to finite state machines when there are many possibilities: a behavior tree can be ordered by importance, while a finite state machine would turn into a spaghetti of transitions.

As an addition to the previous implementation, the radii can be customized and the game moves a bit faster.

Yellow indicates food, blue indicates pursuing smaller enemies and red indicates evading bigger enemies
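The selector and sequence semantics the setup below relies on can be sketched like this (a simplified stand-in for the framework's BehaviorSelector/BehaviorSequence, ignoring the blackboard and the Running state):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Status a node reports after being ticked
enum class Status { Success, Failure };

using Node = std::function<Status()>;

// Selector: succeed at the first child that succeeds (priority order)
Node Selector(std::vector<Node> children)
{
    return [children] {
        for (const Node& child : children)
            if (child() == Status::Success)
                return Status::Success;
        return Status::Failure;
    };
}

// Sequence: fail at the first child that fails, succeed if all succeed
Node Sequence(std::vector<Node> children)
{
    return [children] {
        for (const Node& child : children)
            if (child() == Status::Failure)
                return Status::Failure;
        return Status::Success;
    };
}
```

A conditional-plus-action pair like (IsCloseToBiggerEnemy, ChangeToEvade) is simply a sequence: the action only runs when the conditional succeeds, and the enclosing selector stops there.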

Code Snippets Behavior Trees

Setup of Behavior Tree
//-------------------
//Create Custom Agent
//-------------------
Elite::Vector2 randomPos = randomVector2(-m_TrimWorldSize, m_TrimWorldSize);
Color customColor = Color{ randomFloat(), randomFloat(), randomFloat() };
m_pUberAgent = new AgarioAgent(randomPos, customColor);	

//Add decision making for uber agent
Blackboard* pBlackboard = CreateBlackboard(m_pUberAgent);
BehaviorTree* pBehaviorTree = new BehaviorTree(pBlackboard, new BehaviorSelector(
	{
		//Evading first priority (always evades closest bigger enemy)
		new BehaviorSequence(
			{
				new BehaviorConditional(IsCloseToBiggerEnemy),
				new BehaviorAction(ChangeToEvade)
			}),

		//Pursuing smaller enemies second (doesn't pursue enemies that are too small)
		new BehaviorSequence(
			{
				new BehaviorConditional(IsCloseToSmallerEnemy),
				new BehaviorAction(ChangeToPursue)
			}),

		//Seeks foods last 
		new BehaviorSequence(
			{
				new BehaviorConditional(IsCloseToFood),
				new BehaviorAction(ChangeToSeek)
			}),

		new BehaviorAction(ChangeToWander)
	}));

m_pUberAgent->SetDecisionMaking(pBehaviorTree);
Example Action/Conditional Class (Evading Bigger Enemies)
//-----------------------------------------------------------------
// Conditionals
//-----------------------------------------------------------------
bool IsCloseToBiggerEnemy(Elite::Blackboard* pBlackboard)
{
	//Variables to store data in
	AgarioAgent* pAgent = nullptr;
	std::vector<AgarioAgent*>* pAgentVec = nullptr;
	float* evadeRange{};

	//Check if data is available 
	auto dataAvailable = pBlackboard->GetData("Agent", pAgent) &&
		pBlackboard->GetData("AgentsVec", pAgentVec) && pBlackboard->GetData("EvadeRange", evadeRange);

	if (!dataAvailable || !pAgent || !pAgentVec || !evadeRange)
		return false;

	//Radius
	float totalRange = (*evadeRange) + pAgent->GetRadius();

	//If not we can continue and start drawing a circle
	DEBUGRENDERER2D->DrawCircle(pAgent->GetPosition(), totalRange, { 1.f, 0.f, 0.f }, -.4f);

	//Make list for all enemies to evade
	AgarioAgent* pBiggestEnemy = nullptr;
	float closestDistance{ FLT_MAX };
	for (AgarioAgent* pEnemyAgent : *pAgentVec)
	{
		//Check if enemy is bigger
		float enemyRadius = pEnemyAgent->GetRadius();
		float radiusDiff = enemyRadius - pAgent->GetRadius();
		if (radiusDiff >= 1.f)
		{
			//Check if enemy is in range (overlaps enemy)
			float distance = Distance(pAgent->GetPosition(), pEnemyAgent->GetPosition()) - enemyRadius;
			if (distance < closestDistance && distance < totalRange)
			{
				closestDistance = distance;
				pBiggestEnemy = pEnemyAgent;
			}
		}
	}

	//If no bigger enemy is in range, return early
	if (!pBiggestEnemy)
		return false;

	//If an enemy exists -> mark its position, change the target and return true so the sequence can trigger ChangeToEvade
	DEBUGRENDERER2D->DrawSolidCircle(pBiggestEnemy->GetPosition(), pBiggestEnemy->GetRadius(), { 0.f, 0.f }, { 1.f, 0.f, 0.f }, -.4f);
	pBlackboard->ChangeData("Target", pBiggestEnemy->GetPosition());
	return true;
}

//-----------------------------------------------------------------
// Actions
//-----------------------------------------------------------------
BehaviorState ChangeToEvade(Elite::Blackboard* pBlackboard)
{
	AgarioAgent* pAgent = nullptr;
	auto dataAvailable = pBlackboard->GetData("Agent", pAgent);

	if (!dataAvailable || !pAgent)
		return Failure;

	//Get target pos
	Elite::Vector2 targetPos{};
	pBlackboard->GetData("Target", targetPos);

	//Change to evade (Target)
	pAgent->SetToEvade(targetPos);
	return Success;
}

Agar.io (Influence Maps)

A final, quite different approach was to use influence maps. Being familiar with spatial partitioning already, I first created a grid consisting of many cells as our playground. The idea behind influence maps is to send out waves of influence from different points on the map, similar to how a drop of water sends out ripples where it hits the surface. The influence that propagates across the map can be either positive or negative in this scenario: positive influence is something we want to chase, while negative influence is something we want to stay away from. In this demo it is also possible to change how much momentum the influence has, how fast it decays and how fast it propagates. The amount of influence each source (like food or enemies) sends out can also be customized to create a sense of prioritization.

Yellow indicates the cells we check influence for, blue indicates positive influence we want to chase and orange indicates negative influence we want to stay away from
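The ripple-style propagation with momentum and decay described above can be sketched in a few lines. This is a simplified 1D version with made-up names (InfluenceRow, momentum, decay), not the Elite framework's actual influence-map code:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

//Minimal 1D influence-propagation sketch (hypothetical names). Each step
//blends a cell's old influence with the strongest neighboring influence,
//scaled by a decay factor.
struct InfluenceRow
{
	std::vector<float> values;
	float momentum = 0.8f; //How much of the old value persists per step
	float decay = 0.9f;    //Per-step falloff of propagated influence

	void Propagate()
	{
		std::vector<float> next(values.size(), 0.f);
		for (size_t i = 0; i < values.size(); ++i)
		{
			//Strongest influence (by magnitude) among direct neighbors
			float best = 0.f;
			if (i > 0)
				best = values[i - 1];
			if (i + 1 < values.size() && std::abs(values[i + 1]) > std::abs(best))
				best = values[i + 1];

			//Blend the old value with the decayed neighbor influence
			next[i] = momentum * values[i] + (1.f - momentum) * decay * best;
		}
		values = next;
	}
};
```

Dropping a value of 100 into the middle cell and calling Propagate() spreads weakened influence to the neighboring cells, the same ripple effect as in the demo; negative drops propagate the exact same way.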

Code Snippets Influence Maps

Setup of Behavior Tree + Influence Map
//Add decision making for uber agent
Blackboard* pBlackboard = CreateBlackboard(m_pUberAgent);
BehaviorTree* pBehaviorTree = new BehaviorTree(pBlackboard, new BehaviorSelector(
	{
		//Analyse area first
		new BehaviorAction(AnalyseArea),

		//If negative influence is surrounding -> evade it
		new BehaviorSequence(
			{
				new BehaviorConditional(IsCloserToNegativeArea),
				new BehaviorAction(ChangeToEvadeInfluence)
			}),

		//If positive influence is surrounding -> seek it
		new BehaviorSequence(
			{
				new BehaviorConditional(IsCloserToPositiveArea),
				new BehaviorAction(ChangeToSeekInfluence)
			}),

		//Else keep wandering until influence is found
		new BehaviorAction(ChangeToWandering)
	}));

m_pUberAgent->SetDecisionMaking(pBehaviorTree);
Analysis of World Influence
//-----------------------------------------------------------------
// Analysis 
//-----------------------------------------------------------------
BehaviorState AnalyseArea(Elite::Blackboard* pBlackboard)
{
	//Data storage
	AgarioAgent* pUberAgent = nullptr;
	Elite::InfluenceMap<Elite::GridGraph<Elite::InfluenceNode, Elite::GraphConnection>>* pInfluenceGrid = nullptr;
	float* radius{};
	bool* debugRender{};
	int cellSize{};

	//Getting Data
	bool isAvailable =
		pBlackboard->GetData("Agent", pUberAgent)
		&& pBlackboard->GetData("InfluenceGrid", pInfluenceGrid)
		&& pBlackboard->GetData("Radius", radius)
		&& pBlackboard->GetData("DebugRender", debugRender)
		&& pBlackboard->GetData("CellSize", cellSize);

	//Checking if data available
	if (!isAvailable || !pUberAgent || !pInfluenceGrid || !radius || !debugRender || cellSize <= 0)
		return Failure;

	//Corner points of the square around the uber agent's total radius
	Vector2 pos = pUberAgent->GetPosition();
	float totalRadius = pUberAgent->GetRadius() + (*radius);
	if (*debugRender) DEBUGRENDERER2D->DrawCircle(pos, totalRadius, { 1,1,1 }, -0.3f);
	Elite::Vector2 radiusBottomLeft{ pos.x - totalRadius, pos.y - totalRadius };
	Elite::Vector2 radiusTopRight{ pos.x + totalRadius, pos.y + totalRadius };

	//Data
	auto pInfluenceNodes = pInfluenceGrid->GetAllNodes();
	int hCellSize = cellSize / 2;

	Vector2 highestInfPos{};
	float highestInfluence{ -FLT_MAX }; //FLT_MIN is the smallest positive float, not the lowest value

	Vector2 lowestInfPos{};
	float lowestInfluence{ FLT_MAX };
	float totalInfluence{};
	int count{};

	//Checking what influence nodes are affected within radius
	for (InfluenceNode* pNode : pInfluenceNodes)
	{
		Vector2 nodePos = pInfluenceGrid->GetNodeWorldPos(pNode->GetIndex());
		Vector2 nodeBottomLeft{ nodePos.x - hCellSize, nodePos.y - hCellSize };

		if (nodeBottomLeft.x + cellSize > radiusBottomLeft.x
			&& nodeBottomLeft.x < radiusTopRight.x)
		{
			if (nodeBottomLeft.y + cellSize > radiusBottomLeft.y
				&& nodeBottomLeft.y < radiusTopRight.y)
			{
				//Cell is in agent radius 
				if (*debugRender)
				{
					Vector2 nodeTopRight{ nodePos.x + hCellSize, nodePos.y + hCellSize };
					std::vector<Elite::Vector2> points =
					{
						{ nodeBottomLeft.x, nodeBottomLeft.y },
						{ nodeTopRight.x, nodeBottomLeft.y },
						{ nodeTopRight.x, nodeTopRight.y },
						{ nodeBottomLeft.x, nodeTopRight.y }
					};
					DEBUGRENDERER2D->DrawPolygon(&points[0], 4, { 1, 1, 0, 0.5f }, -0.2f);
				}

				//Gather influence data
				float tempInfluence = pNode->GetInfluence();
				totalInfluence += tempInfluence;
				count++;

				if (tempInfluence > highestInfluence)
				{
					highestInfluence = tempInfluence;
					highestInfPos = nodePos;
				}

				if (tempInfluence < lowestInfluence)
				{
					lowestInfluence = tempInfluence;
					lowestInfPos = nodePos;
				}
			}
		}
	}


	//Setting other blackboard data (guard against an empty neighborhood)
	float avgInfluence = (count > 0) ? totalInfluence / float(count) : 0.f;
	pBlackboard->ChangeData("HighestInfPos", highestInfPos);
	pBlackboard->ChangeData("HighestInf", highestInfluence);
	pBlackboard->ChangeData("LowestInfPos", lowestInfPos);
	pBlackboard->ChangeData("LowestInf", lowestInfluence);
	pBlackboard->ChangeData("AvgInfluence", avgInfluence);

	//Deliberately return Failure: the selector then keeps evaluating the
	//branches below instead of stopping right after the analysis
	return Failure;
}
Example Action/Conditional Functions (Evade Negative Influence)
//-----------------------------------------------------------------
// Conditionals
//-----------------------------------------------------------------
bool IsCloserToNegativeArea(Elite::Blackboard* pBlackboard)
{
	//Data vars
	AgarioAgent* pAgent = nullptr;
	float avgInfluence{};
	bool* debugRender{};

	//Getting data and checking if available
	bool isAvailable = pBlackboard->GetData("AvgInfluence", avgInfluence)
		&& pBlackboard->GetData("DebugRender", debugRender)
		&& pBlackboard->GetData("Agent", pAgent);

	//avgInfluence is a value (0 is valid), so only the pointers are null-checked
	if (!isAvailable || !debugRender || !pAgent)
		return false;

	//Determine if the surrounding area is, on average, negative
	if (avgInfluence < 0.f)
	{
		//We're closer to a negative area -> evade the negative source
		Vector2 pos{};
		float lowestInf{};

		isAvailable = pBlackboard->GetData("LowestInfPos", pos) && pBlackboard->GetData("LowestInf", lowestInf);
		if (!isAvailable)
			return false;

		//Considered strong enough to start evading, else keep wandering
		if (lowestInf < -25.f)
		{
			//Debug rendering
			if (*debugRender)
			{
				Vector2 agentPos = pAgent->GetPosition();
				Vector2 toPos = pos - agentPos;
				float distance = toPos.Normalize();
				DEBUGRENDERER2D->DrawDirection(agentPos, toPos, distance, { 1,0,0 }, -0.5f);
			}

			pBlackboard->ChangeData("Target", pos);
			return true;
		}
		return false;
	}
	return false;
}

//-----------------------------------------------------------------
// Actions
//-----------------------------------------------------------------
BehaviorState ChangeToEvadeInfluence(Elite::Blackboard* pBlackboard)
{
	AgarioAgent* pAgent = nullptr;
	auto dataAvailable = pBlackboard->GetData("Agent", pAgent);

	if (!dataAvailable || !pAgent)
		return Failure;

	//Get target pos
	Elite::Vector2 targetPos{};
	pBlackboard->GetData("Target", targetPos);

	//Change to evade (Target)
	pAgent->SetToEvade(targetPos);
	return Success;
}

Machine Learning

For the final AI demos of the semester, we were given a neat little introduction to machine learning, more specifically reinforcement learning (with a value-based learning example). This type of machine learning is iterative: the AI explores the possible actions and gets rewarded for displaying the correct behavior. The reward system can be represented by matrices (a reward matrix, a Q-matrix, an environment matrix). Below you’ll find two examples of where this can be used.

Finding Optimal Node Path

In this example, we create a small node hierarchy with certain costs (distances) for traveling from one node to another. What we want the AI to learn and tell us is the optimal path between two nodes. As mentioned earlier, a reward matrix keeps track of this learning process. In this case, our matrix is an X-by-X matrix, with X being the number of nodes. We want each row to represent the best possible travel action (the highest value) for that node. How high these values become can only be found through the iterative process of letting the AI run its course and update the matrix with every attempt it makes. Once the AI has an idea of the best action each node can take to reach the next, we can ask it to display the optimal path from one node to another by retracing these highest values, ending up with our desired result.

Here we start from the green node (0) and want to know the way to the red node (7)
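The iterative update described above boils down to the classic Q-learning rule Q[s][a] = R[s][a] + gamma * max Q[a][..]. Below is a minimal sketch on a hypothetical 4-node chain (0-1-2-3); the demo's actual node layout, costs and framework code differ:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <iterator>

//Tiny Q-learning sketch on a hand-made reward matrix (hypothetical
//4-node chain 0-1-2-3). -1 marks "no edge"; reaching the goal node (3)
//yields a reward of 100.
constexpr int NODES = 4;
constexpr int GOAL = 3;
float R[NODES][NODES] = {
	{ -1,  0, -1,  -1 },
	{  0, -1,  0,  -1 },
	{ -1,  0, -1, 100 },
	{ -1, -1,  0, 100 },
};
float Q[NODES][NODES] = {}; //Learned values, all start at zero
const float discount = 0.8f;

float MaxQ(int state)
{
	return *std::max_element(std::begin(Q[state]), std::end(Q[state]));
}

void Train(int episodes)
{
	for (int e = 0; e < episodes; ++e)
	{
		int state = std::rand() % NODES; //Random starting node
		while (state != GOAL)
		{
			//Pick a random valid action (pure exploration)
			int action = std::rand() % NODES;
			if (R[state][action] < 0.f)
				continue;

			//Value update: immediate reward + discounted best follow-up value
			Q[state][action] = R[state][action] + discount * MaxQ(action);
			state = action;
		}
	}
}
```

After training, reading back the optimal path is exactly the retracing described above: from each node, follow the action with the highest value in its Q row until the goal is reached.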

Displaying Desired Steering Behavior

Last but not least, we return to the roots of steering behaviors. Instead of finding a path, we give the AI the ability to wander and seek, and we would like it to learn to hunt the surrounding food circles. If the AI doesn’t find any food within a certain amount of time, it dies. This hunting behavior is taught by rewarding the AI whenever it (accidentally) hits a food circle. We can also do the opposite and punish the AI for dying or for near misses with the food. This way, every time the AI’s velocity points toward the food it ends up rewarded, so that behavior is stimulated; near misses adjust the velocity to be anchored more toward the food, and dying is punished as mentioned before. After a few deaths, the AI can last up to 10 minutes or more of happy food hunting.

Learning phase
Refined AI
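The reward/punishment scheme described above could look roughly like this; all numbers and names here (ComputeReward, the -100/50 values) are illustrative assumptions, not the demo's actual implementation:

```cpp
#include <cassert>
#include <cmath>

//Hedged sketch of the reward signal: velocity aligned with the food
//direction is rewarded, eating grants a big bonus and dying is punished.
struct Vec2 { float x, y; };

float Dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }
float Length(Vec2 v) { return std::sqrt(Dot(v, v)); }

float ComputeReward(Vec2 velocity, Vec2 toFood, bool ateFood, bool died)
{
	if (died) return -100.f;  //Punishment for starving
	if (ateFood) return 50.f; //Big reward for hitting a food circle

	float lv = Length(velocity);
	float lf = Length(toFood);
	if (lv <= 0.f || lf <= 0.f)
		return 0.f;

	//Alignment in [-1, 1]: moving toward the food is rewarded,
	//moving away from it is mildly punished (the near-miss case)
	return Dot(velocity, toFood) / (lv * lf);
}
```

Feeding this signal into the learning loop each frame is what gradually anchors the agent's velocity toward nearby food.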

Contributors & Credits

Credits to Matthieu Delaere and Thomas Goussaert, lecturers at Howest DAE for writing the base framework files (math library, SDL window, AI topics). 

Credits to Andries Geens and Yosha Vandaele for writing and teaching the aforementioned AI topics.

Credits to Koen Samyn for base- and math files in the machine learning application.