
AI Controller Update - Week Ending 6/27

This was a week in which some really important work got done.
Last week, I wrote about the roadblocks I was facing in writing my own game engine: guaranteeing a smooth game loop, a constant frame rate, and sufficient float precision.
While it is quite easy to build a functional game loop, it is much harder to build one that does not break under stress. Somehow my Windows framework wasn't sending regular updates to my main loop, and I spent quite a while scratching my head trying to figure out why.
However, time is precious and it was running out. I had to make a decision, and make it quickly. I chose to jump to Unreal and port all my code over to Unreal 4.16.
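For the curious, the kind of loop I was fighting with can be sketched engine-free. This is a minimal fixed-timestep accumulator (the names are invented for the sketch, not my engine code): real elapsed time is banked and consumed in constant slices, so the simulation stays deterministic even when the OS delivers frames irregularly.

```cpp
#include <cassert>
#include <cmath>

// Returns how many fixed simulation steps to run this frame.
// Leftover time stays in Accumulator and is carried into the next
// frame, so long-term simulation speed matches wall-clock time
// even when individual frame times jitter.
int ConsumeFixedSteps(double& Accumulator, double FrameSeconds, double FixedDt)
{
    Accumulator += FrameSeconds;
    int Steps = 0;
    while (Accumulator >= FixedDt)
    {
        Accumulator -= FixedDt;
        ++Steps;
    }
    return Steps;
}
```

A 35 ms frame against a 16 ms fixed step yields two updates, with the remaining 3 ms banked for the next frame.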

Jumping to Unreal

I wanted to build a follower behavior, and I wanted to build it right. So I went with a component-based architecture from the get-go, implementing the steering behaviors as Actor Components so that they can be attached to any actor and reused.
The Actor Steering component is a C++ component whose functions are exposed to Blueprints, so scripters can take advantage of it and integrate it into their scripting workflow.
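As an engine-free illustration of why this pays off (toy names here, not the real UActorComponent API): any actor that owns a steering component gets the behaviors for free, and the component only ever talks to its owner.

```cpp
#include <cassert>
#include <cmath>

// Toy stand-ins for the engine types; the real code uses AActor,
// FVector, and UActorComponent.
struct Vec { float X, Y; };
struct Actor { Vec Location; Vec Velocity; };

struct SteeringComponent
{
    Actor* Owner = nullptr;   // set when attached to an actor
    float MaxSpeed = 100.f;

    // Seek: desired velocity points straight at the target at max
    // speed; the returned force is the correction to current velocity.
    Vec Seek(const Vec& Target) const
    {
        float Dx = Target.X - Owner->Location.X;
        float Dy = Target.Y - Owner->Location.Y;
        float Len = std::sqrt(Dx * Dx + Dy * Dy);
        if (Len <= 0.f)
            return Vec{0.f, 0.f};
        return Vec{Dx / Len * MaxSpeed - Owner->Velocity.X,
                   Dy / Len * MaxSpeed - Owner->Velocity.Y};
    }
};
```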
The code is the same as demonstrated in earlier posts. The only differences are using Unreal's FVector class instead of our own custom vector, and getting the location, velocity, etc. from the parent actor. Here is what Arrive looks like in Unreal (see last week's post for how it looked in the custom engine):

FVector UActorSteeringComponent::Arrive(const FVector& Target)
{
       AActor* Owner = GetOwner();

       FVector ToTarget = Target - Owner->GetActorLocation();
       float Distance = ToTarget.Size();

       if (Distance > 0)
       {
              // Decelerate in proportion to the remaining distance,
              // capped at the movement component's max speed.
              float Speed = Distance / DecelerationCoefficient;
              Speed = FMath::Min(Speed, mpMovementComponent->GetMaxSpeed());

              FVector DesiredVelocity = ToTarget / Distance * Speed;
              return DesiredVelocity - Owner->GetVelocity();
       }

       return FVector(0, 0, 0);
}

New behaviors and improvements to old ones

The benefits of jumping to Unreal were immediate. I was able to bang out 3 new behaviors:


Pursuit

Given a target Actor, the source Actor predicts the future position of the target (given a lookahead time) and Seeks toward it.

FVector UActorSteeringComponent::Pursuit(const AActor* TargetActor)
{
       AActor* Owner = GetOwner();
       FVector TargetActorLocation = TargetActor->GetActorLocation();
       FVector TargetActorVelocity = TargetActor->GetVelocity();

       FVector ToTargetActor = TargetActorLocation - Owner->GetActorLocation();
       // Lookahead grows with distance and shrinks as the combined
       // speed of pursuer and target increases.
       float LookAheadTime = ToTargetActor.Size() / (mpMovementComponent->GetMaxSpeed() + TargetActorVelocity.Size()) / LookAheadTimeModifier;
       return Seek(TargetActorLocation + TargetActorVelocity * LookAheadTime);
}


Evade

Similar to Pursuit, but the source Actor Flees from the predicted future position of the target.

FVector UActorSteeringComponent::Evade(const AActor* TargetActor, float TriggerDistance)
{
       AActor* Owner = GetOwner();
       FVector TargetActorLocation = TargetActor->GetActorLocation();
       FVector TargetActorVelocity = TargetActor->GetVelocity();

       FVector ToTargetActor = TargetActorLocation - Owner->GetActorLocation();

       // A negative TriggerDistance means "always evade"; otherwise
       // only react when the target is inside the trigger radius.
       if (TriggerDistance < 0 || ToTargetActor.SizeSquared() <= TriggerDistance * TriggerDistance)
       {
              float LookAheadTime = ToTargetActor.Size() / (mpMovementComponent->GetMaxSpeed() + TargetActorVelocity.Size()) / LookAheadTimeModifier;
              return Flee(TargetActorLocation + TargetActorVelocity * LookAheadTime);
       }

       return FVector(0, 0, 0);
}


Wander

This simulates a random walk. The problem with true randomness is that it is extremely unpredictable and looks unrealistic. The solution is to add small random variations of force each frame that nudge the Actor off its path.

FVector UActorSteeringComponent::Wander(float DeltaTime)
{
       // Jitter the wander target a little each frame, then project
       // it back onto the wander circle so it can't drift unbounded.
       float Jitter = WanderJitter * DeltaTime;
       mWanderTarget += FVector(RandomClamped() * Jitter, RandomClamped() * Jitter, 0);
       mWanderTarget = mWanderTarget.GetSafeNormal() * WanderRadius;

       // Offset the circle in front of the actor and transform the
       // target into world space.
       FVector Target = mWanderTarget + FVector(WanderDistance, 0, 0);
       Target = GetOwner()->GetTransform().TransformPosition(Target);

       FVector ToPos = Target - GetOwner()->GetActorLocation();
       // Normalize so speed, not distance, sets the magnitude.
       FVector DesiredVelocity = ToPos.GetSafeNormal() * WanderMaxSpeed;
       return DesiredVelocity - GetOwner()->GetVelocity();
}


Since the core functionality lives in code, we expose the interfaces and tunable variables to the Unreal Editor using the UPROPERTY and UFUNCTION macros:

UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Steering")
float DecelerationCoefficient;

UFUNCTION(BlueprintCallable, Category = "Steering")
FVector Arrive(const FVector& Target);

And now not only can we configure all of these variables in the editor, but we can also compose complex behaviors out of these simple ones, entirely from script.
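One common way to do that composition (a sketch under my own assumptions; the weights and the truncation rule are not from the actual project) is a weighted sum of the individual steering forces, truncated to a maximum force:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec { float X, Y; };

// Weighted blend of per-behavior steering forces, truncated so the
// combined force never exceeds MaxForce. Forces and Weights are
// index-aligned, one entry per active behavior.
Vec BlendSteering(const std::vector<Vec>& Forces,
                  const std::vector<float>& Weights, float MaxForce)
{
    Vec Sum{0.f, 0.f};
    for (size_t i = 0; i < Forces.size(); ++i)
    {
        Sum.X += Forces[i].X * Weights[i];
        Sum.Y += Forces[i].Y * Weights[i];
    }
    float Len = std::sqrt(Sum.X * Sum.X + Sum.Y * Sum.Y);
    if (Len > MaxForce)
    {
        Sum.X *= MaxForce / Len;
        Sum.Y *= MaxForce / Len;
    }
    return Sum;
}
```

With this shape, a scripter only tweaks weights to get, say, "mostly Seek with a hint of Wander".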

We'll probably see obstacle avoidance next week.

Find the source code here:

