This article outlines how I do AI.
Different people use different methods, of course, and other coders are welcome to comment if they can improve on the way I do it (usually some bright spark has a few nice improvements, and I'd appreciate hearing them). In the meantime, this is an explanation of a structuring technique I have found useful, which forms a kind of canvas on which you can build your AI-controlled characters.
Advantages of this method
This method has some very real advantages. A simple list follows:
- It's both logical and easy to get your head around.
- It results in code that is easy to read (especially the conditions), and generally located in the same area of the event sheet.
- It can be applied to any number of objects. Literally ANY number: this structuring method can run on hundreds of instances with no discernible slowdown, assuming no fastloops...
- ...and it significantly reduces the number of fastloops you may require, meaning the above saving has an even greater impact.
- It enables you to create very complex conditions, causing actions to occur under a blend of different circumstances.
- It considerably reduces the number of detector objects you require, and enables you to perform multiple complex environment detections on all AI characters simultaneously with next to no performance hit.
Just some notes beforehand
I'm writing this from the viewpoint of an MMF2 user. Having not used TGF since version 1.06, I'm not up to speed on what it can or can't do.
I strongly recommend that MMF2 users use the List Editor for coding, as it lets you view and alter the order of the actions (it's crucial, trust me).
I also strongly recommend that people install the Immediate If Object, or an equivalent.
Part 1: Code Structure
The system is useful because of a few key features, and this is the first.
The code is structured in a very specific way. I use groups to keep everything organised, and it saves on comments. If I were creating an evil ninja badger, for instance, I'd set his code up roughly like this:
Ninja Badger AI
--Senses
--Responses
--Movement //You may make this one part of the 'responses'
It's the architecture which gives this method its adaptability. We use what I call 'senses' to detect various facts about the NPC's surroundings. We then store those results in alterable values and strings, which we rename.
So the 'Senses' group may contain subgroups, like this:
Ninja Badger AI
--Senses
--- Is Onscreen
--- Is Far Offscreen
--- Is Standing on Floor
--- Is Standing at Edge
--- Distance from Player
--- Can See Player
I usually store the results in an Alterable String ("YES" or "NO") or in an Alterable Value (for things like distances, where it can contain a range of numbers). The objective is to get a list of alterable values and strings in each NPC, like this:
Is Onscreen = "YES"
Is Far Offscreen = "NO"
Is Standing on Floor = "YES"
Is Standing at Edge = "YES"
Distance from player = 215px
Can See Player = "NO"
The 'Responses' group is then subdivided in a similar way, this time split into different responses like:
--- Chase Player
--- Throw Spear
--- Jab Spear
--- Take Cover
That kinda thing.
It's in each of these response groups that we start using the alterable strings and values (the 'senses') that we coded earlier.
Now that each NPC has a set of handy variables summarising its situation, we could do a simple AI like this:
Badger: Is Standing on Floor = "YES"
Badger: Is Far Offscreen = "NO"
Badger: Can See Player = "YES"
--- Badger: Set 'Action' to "CHARGE"
Badger: Action = "CHARGE"
--- [code for badger to charge at the player]
See how this works? We could then write a second response for what happens if he can see the player but is NOT standing on the floor (is falling or is jumping).
It allows us to code our NPC, taking a wide range of factors into consideration. And of course, it's easy to code and to read.
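MMF2 event sheets aren't code in a conventional language, but the senses/responses split above can be modelled in a few lines of Python. This is a minimal sketch, not the article's actual implementation: the class and method names are illustrative, and the dictionary stands in for the NPC's alterable values and strings.

```python
# A minimal sketch of the senses/responses architecture.
# All names here are illustrative, not MMF2 API.

class NinjaBadger:
    def __init__(self):
        # 'Senses': one entry per alterable value/string stored on the NPC.
        self.senses = {
            "is_onscreen": "YES",
            "is_far_offscreen": "NO",
            "is_standing_on_floor": "YES",
            "is_standing_at_edge": "YES",
            "distance_from_player": 215,
            "can_see_player": "NO",
        }
        self.action = "IDLE"

    def update_responses(self):
        # 'Responses': each rule reads only the stored senses,
        # mirroring the event-sheet conditions in the article.
        s = self.senses
        if (s["is_standing_on_floor"] == "YES"
                and s["is_far_offscreen"] == "NO"
                and s["can_see_player"] == "YES"):
            self.action = "CHARGE"

badger = NinjaBadger()
badger.senses["can_see_player"] = "YES"
badger.update_responses()
print(badger.action)  # CHARGE
```

The point of the split is that the response rule never touches detectors or positions directly; it only reads the pre-computed sense values, which is what keeps the conditions readable.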
Part 2: Detectors
At different times, your NPCs may need different detectors, with different shapes and sizes, and perhaps in different positions. The age-old method for doing this was to have loads of active objects (one for each detector) all following the sprite.
An alternative way (and I don't claim to have invented this, I'm sure someone else has done it before) is to store ALL the detectors as ANIMATIONS inside just the one detector object. When you need to do different detections, you can simply do this:
--- Detector: Standing on Floor = "NO" //Our default value
--- Detector: Change animation to 'Floor Detector'
Detector: Overlaps backdrop
--- Detector: Standing on Floor = "YES"
--- Detector: Standing at Edge = "NO" //Default value
--- Detector: Change animation to 'Edge Detector'
Detector: Overlaps backdrop
--- Detector: Standing at Edge = "YES"
--- Detector: Change animation to 'Normal' //Finally switch it back
That will detect whether the character is standing on the floor, and whether it's standing at the edge of a platform. But note that we stored the results (e.g. 'Is Standing at Edge') inside the DETECTOR, not the sprite.
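To make the one-detector-many-shapes idea concrete, here's a rough Python model of the listing above: a single Detector object swaps between named hitboxes (standing in for the animations), tests for overlap, stores each result, then switches back. The shape offsets, the level geometry, and the `overlaps_backdrop` helper are all illustrative assumptions, not MMF2 API.

```python
# One detector object, many shapes: each 'animation' is just a named
# hitbox rectangle relative to the detector's position.
SHAPES = {
    # name: (x_offset, y_offset, width, height) - illustrative values
    "Floor Detector": (0, 16, 24, 8),
    "Edge Detector": (20, 16, 4, 8),
    "Normal": (0, 0, 24, 16),
}

class Detector:
    def __init__(self, x, y, solids):
        self.x, self.y = x, y
        self.shape = "Normal"
        self.senses = {}
        self.solids = solids  # level geometry as (x, y, w, h) rectangles

    def overlaps_backdrop(self):
        # Stand-in for MMF2's backdrop-overlap test: rectangle intersection.
        ox, oy, w, h = SHAPES[self.shape]
        x, y = self.x + ox, self.y + oy
        return any(x < sx + sw and x + w > sx and y < sy + sh and y + h > sy
                   for (sx, sy, sw, sh) in self.solids)

    def run_senses(self):
        # Same order as the event listing: default, swap shape, test, store.
        self.senses["Is Standing on Floor"] = "NO"
        self.shape = "Floor Detector"
        if self.overlaps_backdrop():
            self.senses["Is Standing on Floor"] = "YES"

        self.senses["Is Standing at Edge"] = "NO"
        self.shape = "Edge Detector"
        if self.overlaps_backdrop():
            self.senses["Is Standing at Edge"] = "YES"

        self.shape = "Normal"  # finally switch it back

floor = [(0, 20, 100, 10)]       # one solid platform under the detector
d = Detector(0, 0, floor)
d.run_senses()
print(d.senses)
```

Setting the default first and then overwriting it on overlap is the same write-then-correct pattern as the event listing, so the sense is always valid whichever branch runs.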
Object Selection makes this necessary, and I'll get onto why in the next section. For this reason, coders usually make the DETECTOR function as the actual NPC, with the visible sprite simply following it (so the sprite follows the detector, rather than the detector following the sprite).
However, there may well be times when you need to pass data from the detector(s) to their associated Sprite(s). One example is with animations. Obviously when the NPC starts running towards the player, he needs to know to change his animation to 'Run'. How do we pass the data across, from the detectors (who know the sprite's circumstances) to the sprite (who must display the animations)?
Part 3: Working with Object Selection
Possibly the greatest aspect of the Click products is object selection. For those who aren't au fait with it, MMF starts off assuming that each event refers to all the objects in the level (so 'ALWAYS -> Destroy Duck' will destroy all ducks by default). But as you add conditions, they narrow that focus down to only those objects which match the criteria you specified ('Duck collides with bullet -> Destroy Duck' will kill only those ducks that have collided with bullets).
If more than one object meets the conditions (so if more than one duck has been shot), MMF will run a little 'action-loop', where it repeats the actions for all the objects. That's basically how MMF and TGF work. It's this process that we refer to normally as Object Selection (literally, the way MMF selects objects based on your conditions, ready to be hammered through a hyperfast ActionLoop).
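The narrow-then-action-loop behaviour can be modelled in a few lines of Python. The duck and bullet names just mirror the example above; none of this is real MMF2 API, it's only the selection idea.

```python
# Three ducks, two of which have been hit by bullets this frame.
ducks = [{"id": 1, "hit": True}, {"id": 2, "hit": False}, {"id": 3, "hit": True}]

# Conditions narrow the selection down from 'all ducks'...
selected = [duck for duck in ducks if duck["hit"]]

# ...then the actions run once per selected object (the 'action-loop').
destroyed = [duck["id"] for duck in selected]
print(destroyed)  # [1, 3]
```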
One benefit of structuring your AI in the way I'm outlining here is that it works hand in hand with Object Selection, and makes the most of the kind of warp speed manifested by ActionLoops. Where possible, it uses ActionLoops instead of FastLoops, providing a considerable performance boost over FL-based systems.
The Alterable Value/String feature of MMF is one that works very well with Object Selection, and that means that when we do this:
NPC: Is Onscreen = "YES"
NPC: Can See Player = "YES"
NPC: Distance to Player > 100px
--- NPC: Look at Player
--- NPC: Set action to 'RUN'
...even if there were 500 NPCs onscreen, all of whom matched those conditions, they would all start running instantly with no slowdown.
So, getting back to our detectors, how would we pass the data between the detector and the sprite? The common way is to spread a value in all detectors and sprites, and then manually update each one in a fastloop. But for lots of objects, that'll run dog slow.
The alternative is simply this:
--- Sprite: Standing on Floor = 'Standing on Floor("DETECTOR")'
--- Sprite: Standing at Edge = 'Standing at Edge("DETECTOR")'
--- Sprite: Distance to Player = 'Distance to Player("DETECTOR")'
--- Sprite: Action = 'Action("DETECTOR")'
...aaaand so on. Just copy the alterable strings and values across in an ALWAYS event. Object Selection will natively pair each Detector to its natural Sprite, and it'll be a LOT faster than fastlooping.
Then you can set the sprite's animation based on the value of 'Action'.
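Here's a small Python sketch of that ALWAYS-event copy step. In MMF2, object selection pairs each Detector with its matching Sprite automatically; below, `zip()` stands in for that pairing, and the `Action`/`Run` names are illustrative assumptions rather than anything from a real project.

```python
class Sprite:
    def __init__(self):
        self.senses = {}
        self.animation = "Stand"

def always_copy_senses(detector_senses, sprites):
    # Runs every frame, like the ALWAYS event in the article.
    # zip() models object selection pairing each detector to its sprite.
    for senses, sprite in zip(detector_senses, sprites):
        sprite.senses = dict(senses)   # copy every alterable value across
        # ...then drive the animation from the copied 'Action' value.
        if sprite.senses.get("Action") == "CHARGE":
            sprite.animation = "Run"

detectors = [{"Action": "CHARGE"}, {"Action": "WAIT"}]
sprites = [Sprite(), Sprite()]
always_copy_senses(detectors, sprites)
print(sprites[0].animation, sprites[1].animation)  # Run Stand
```

Note that nothing loops over object IDs by hand: each detector's values land on its own sprite in a single pass, which is why this beats a fastloop-based copy.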
Part 4: Complex Conditions, IIF you please?
Object Selection is not without limits. Using expressions in conditions can be a bit of a nightmare, and causes a lot of people to resort to fastloops to resolve them. However, it can often be done using the Immediate If Object, in conjunction with an ALWAYS event.
The IIF expression can be used to return YES or NO (or some value) into one of our alterable values based on a set of conditions. I use this a lot to make Focal View senses, where an enemy won't even consider attacking you if you're not within a certain angle (his field of view, i.e. you're standing behind him).
It's another handy way to avoid fastloops.
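At heart the IIF expression is just a conditional expression: pick one value if the condition holds, another if it doesn't. Python's own conditional expression does the same job, so here's a sketch of the Focal View sense described above. The field-of-view maths is my illustrative assumption, not taken from the article.

```python
import math

def focal_view_sense(npc_x, npc_y, facing_deg, player_x, player_y, half_fov=45):
    # Angle from NPC to player, in degrees (y is flipped for screen coords).
    angle = math.degrees(math.atan2(npc_y - player_y, player_x - npc_x))
    # Shortest signed difference between that angle and the facing direction.
    diff = (angle - facing_deg + 180) % 360 - 180
    # The IIF step: return "YES"/"NO" based on a condition, ready to be
    # stored in an alterable string by an ALWAYS event.
    return "YES" if abs(diff) <= half_fov else "NO"

print(focal_view_sense(0, 0, 0, 100, 0))    # YES (player dead ahead)
print(focal_view_sense(0, 0, 0, -100, 0))   # NO  (player behind the NPC)
```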
Part 5: When a Fastloop is Inevitable
Sometimes, a fastloop will be inevitable. Line-of-sight tests (the 'can see player' sense), for instance, are almost impossible without one. Some detectors will need to be fired away from the character, seeking out in front, behind, above, below, up-and-across, wherever. These things require fastloops, and will hit your game's performance like a lead pipe in the peculiars.
But even in this, the architecture that's been laid out here can help. For instance, you can arrange the senses so that one sense doesn't even run if another sense hasn't triggered.
It's best to explain with an example:
I always use two senses to act as an NPC's 'eyes':
1) Focal View (or Field of View)
2) Line of Sight
The first is used to determine whether the player is in front of the NPC, or behind it. There should only be a certain range that the NPC can see in, and this will test if the player sits within that range. It doesn't take obstacles into account, it's based purely on the position of the player relative to the position of the NPC. This requires no fastlooping.
The second is used to determine if any obstacles are blocking the path/line-of-sight between the player and the NPC. This would allow the player to hide behind walls or crates. This process DOES require a fastloop.
However, we can use the earlier senses to make sure that the LOS process ONLY runs for those few NPCs who really need it. So for instance, an enemy who's far offscreen couldn't possibly see the player. So we add a condition to keep him from LOS-ing. And any NPCs with their backs turned to the player also don't need to check for obstacles, as there's no chance they could ever see him. So we add another condition to discount them too.
Do you see how, if you have a sense like Line of Sight which requires fastloops to run, you can reduce the damage to your framerate by making sure it runs for as few objects as possible, and as rarely as possible? You can also add a restriction using Function Eggtimer which only runs the LOS test every few frames. That 'dilutes' the slowdown, making it less noticeable.
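The gating logic above can be sketched in Python. This is a hedged model, not the article's implementation: `line_of_sight()` stands in for the fastloop raycast, the sense names are the illustrative ones used earlier, and the frame-interval throttle is my stand-in for the Function Eggtimer restriction.

```python
def needs_los_test(senses, frame, interval=5):
    # Cheap senses run first, so the expensive test is skipped whenever
    # the NPC couldn't possibly see the player anyway.
    if senses["is_far_offscreen"] == "YES":
        return False                     # far offscreen: can't see the player
    if senses["focal_view"] == "NO":
        return False                     # player is behind the NPC
    # Eggtimer-style throttle: only run the test every few frames.
    return frame % interval == 0

def update_can_see_player(senses, frame, line_of_sight):
    if needs_los_test(senses, frame):
        senses["can_see_player"] = "YES" if line_of_sight() else "NO"
    # Otherwise keep the last stored result: a few frames of staleness
    # is the price of the 'diluted' slowdown.

senses = {"is_far_offscreen": "NO", "focal_view": "YES",
          "can_see_player": "NO"}
update_can_see_player(senses, 0, lambda: True)
print(senses["can_see_player"])  # YES
```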
All these techniques can be combined to create powerful, fast AI routines, which more accurately match what you wanted to achieve at the outset.
So I hope this helps someone, and I'll see if I can get any examples up showing how useful it can be in action.