Having completed the structure of my latest level (a three-part open-air complex design), I’m ready to really sit down and think hard about the A.I. This has probably been the most difficult part of designing this game, due to the constraints of my tools and the requirement that it not significantly slow down the runtime, which is already pushing the limits of Multimedia Fusion.
I handle collisions and interactions with the environment via a 256-color 640×480 bitmap in which different colors signify different types of terrain: windows, walls, half-height walls, etc. This works quite well for testing whether a character is running into a wall or off a cliff, but the A.I. will need to sort of generalize outward and reason about the global structure of the map to some extent. Sampling colors at individual points seems insufficient.
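To make the idea concrete, here is a minimal sketch of color-keyed collision sampling. The specific palette indices and terrain categories are my own assumptions for illustration, not the game's actual palette:

```python
# Hypothetical mapping from palette index to terrain type.
TERRAIN = {
    0: "open",
    1: "wall",
    2: "half_wall",   # say: blocks movement but not sight or shots
    3: "window",      # say: blocks movement but not sight
}

# Stand-in for the 640x480 collision bitmap: a tiny grid of palette indices.
collision_map = [
    [0, 0, 1, 0],
    [0, 2, 1, 0],
    [0, 0, 3, 0],
]

def terrain_at(x, y):
    """Classify the terrain under a point by sampling the bitmap."""
    if 0 <= y < len(collision_map) and 0 <= x < len(collision_map[0]):
        return TERRAIN.get(collision_map[y][x], "open")
    return "wall"  # treat out-of-bounds as solid

def can_walk(x, y):
    return terrain_at(x, y) == "open"
```

This kind of per-point lookup answers "am I about to hit something?" instantly, but, as noted above, it says nothing about the shape of the map beyond the sampled pixel.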
It would be pretty easy to write this sort of A.I. if the levels were all open fields. It’s when the guard hears the player on the other side of a wall, or sees him duck around a corner, that things get a little tricky. Some sort of pathfinding is indicated here, but I can’t do full-blown pathfinding without grinding the engine to a halt. A reasonable approximation is needed.
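One cheap approximation (my own speculation, not a settled plan) would be to downsample the collision bitmap into a coarse grid and run a breadth-first search over it only when a guard actually needs a route, caching the result instead of re-planning every frame:

```python
# Coarse walkability grid, hypothetically derived by downsampling the
# collision bitmap (0 = walkable cell, 1 = a wall runs through the cell).
from collections import deque

coarse = [
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def find_path(start, goal):
    """BFS over the coarse grid; returns a list of (x, y) cells or None."""
    rows, cols = len(coarse), len(coarse[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk back to the start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < rows and 0 <= nx < cols
                    and coarse[ny][nx] == 0 and (nx, ny) not in prev):
                prev[(nx, ny)] = cell
                queue.append((nx, ny))
    return None  # goal unreachable
```

A grid a tenth the resolution of the bitmap has a hundredth as many cells, so a search like this stays small, and the guard only needs the next waypoint or two, not the full route.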
The A.I. as it stands right now is serviceable: it will run after you, shoot at you, and mostly behave as expected, but it isn’t smart enough. It’ll work itself into corners, or sometimes fail to notice the player in situations where it plainly should. Part of this comes down to variables I haven’t finished tuning, like footstep and gunshot noise levels or shadow visibility, which I intend to get to during this overhaul.