[This feature was published in the June 2013 edition of Develop magazine, which is available through your browser and on iPad.]
Hunkered down behind a pillbox, you reload as bullets whip past overhead. The enemy is closing in. A voice calls out to you. You have to move, now. You leap to your feet, only to find Private Porkins blocking your path – he’s caught himself on a verge again and, worse still, he’s broken the illusion.
Artificial intelligence is one of the chief aspects that makes a game believable. It governs pathfinding, character behaviour and a wide selection of other operations behind the scenes.
The burden on AI, and on developers, to maintain a game’s believability is a heavy one. The example above is just one of many goofball moments that players will recognise. But while it’s easy to criticise games’ AI, tools developers such as Autodesk principal engineer Guillaume Aldebert argue that significant advancements continue to be made in this field.
The worlds that developers envision are becoming more complex and detailed, and as games become present on a plethora of new platforms, each with varying processing capabilities, the challenge for AI creators is to cater for an increasingly unpredictable array of needs.
And, of course, with the coming of new games hardware, expectations are running high for games to achieve new levels of immersion.
To start with however, let’s look at where AI is today. By Aldebert’s account, it’s in a good place.
“BioShock Infinite is a milestone in what can be achieved with smart AI. Elizabeth embodies how an AI-driven companion can tremendously enrich the game experience, without hindering it or burdening players,” he asserts, praising Irrational Games’ formative current-gen title, which made use of as-yet-unspecified Autodesk tools to give its personable heroine a mind of her own.
BioShock Infinite also impressed PathEngine owner Thomas Young, who was pleased to see that the development team and 2K Games made a “big deal” of how key AI was to the game, and considered it to be a marketable product feature.
“Bad AI can definitely break the immersion, and I think that just ‘not doing stupid stuff’ is a really important goal for game AI in general,” Young acknowledges. “It’s not so clear how important this actually is to players in general, though. Look at something like Skyrim, for example: there was a whole lot of criticism about the AI, but the issues in this area didn’t stop the game going on to sell extremely well.
“But the technology is certainly there for implementing more robust interactions with the AI, and I think that players’ minimum expectations will inevitably increase as they see more and more titles that do this well.”
TECHNOLOGY WILL SAVE US
PathEngine is one of the companies working to improve games through its AI systems technology. The defining factor behind its approach to pathfinding technology is to make it robust enough to alleviate the time constraints that frequently impact production.
“Time constraints are definitely significant, as you can get some behaviours working for a limited set of circumstances quite quickly, but it takes a lot longer to ensure the behaviours work in all situations, with all possible player actions and so on,” says Young.
“And similarly, on the technology side, you can get basic tech up and working very quickly for a limited set of scenarios, but it’s much harder to make your technology both robust and general for all situations and all possible player interactions.”
To solve the never-ending issue of time constraints, Autodesk’s Aldebert says it’s about providing a fast, iterative data production workflow.
“Autodesk Gameware Navigation provides fast, iterative, multi-sector navmesh generation. Being able to visualise and test your navmesh as fast as possible enables the level designer to iterate more, and to focus on his design instead of technical issues,” he says, adding: “We have seen the rise of systems such as behaviour trees designed to help iterate faster. The faster you can adapt your AI behaviour, the more tuning you can do to your gameplay. Even for low-level AI, it is important to be able to debug and understand what the pathfinding is doing.”
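The behaviour trees Aldebert mentions can be sketched in a few lines. The node types below (Selector, Sequence, Condition, Action) are the standard textbook ones, and the toy guard is purely illustrative – none of this reflects Autodesk Gameware’s actual API:

```python
# A minimal behaviour tree: a Selector tries its children until one
# succeeds; a Sequence requires every child to succeed in turn.
SUCCESS, FAILURE = "success", "failure"

class Selector:
    def __init__(self, *children):
        self.children = children
    def tick(self, npc):
        for child in self.children:
            if child.tick(npc) == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    def __init__(self, *children):
        self.children = children
    def tick(self, npc):
        for child in self.children:
            if child.tick(npc) == FAILURE:
                return FAILURE
        return SUCCESS

class Condition:
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, npc):
        return SUCCESS if self.predicate(npc) else FAILURE

class Action:
    def __init__(self, effect):
        self.effect = effect
    def tick(self, npc):
        self.effect(npc)
        return SUCCESS

# Designers rearrange nodes instead of asking engineers to rewrite code.
guard = Selector(
    Sequence(Condition(lambda npc: npc["sees_player"]),
             Action(lambda npc: npc.update(state="attack"))),
    Action(lambda npc: npc.update(state="patrol")),
)

npc = {"sees_player": False, "state": "idle"}
guard.tick(npc)  # no player in sight, so the guard falls through to patrol
```

The point of the structure is the iteration speed Aldebert describes: tuning a behaviour means reordering or swapping nodes, not touching engine code.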
Epic Games, too, sees technology that provides greater efficiency as the starting point for AI’s progression.
“We’re trying to help address this issue in a couple of ways with Unreal Engine 4,” Epic’s senior engine programmer Steven Polge tells Develop.
“By providing robust underlying systems, such as the navigation mesh, we enable developers to focus on creating the custom AI behaviours for their games earlier in the development cycle.
“With the Blueprint system and other tools, we’re focusing on improving designers’ workflow and making it easier for them to iterate on NPC behaviour throughout the development of their games.”
And that’s not all Epic have added to UE4 as far as AI is concerned: “We’ve improved the performance of pathfinding and pathfollowing, and made it fully asynchronous, so the work can be done on a separate thread, reducing the performance impact of AI. We also support asynchronous dynamic navigation mesh modification with very high performance, allowing the AI to effectively navigate through a changing environment without a significant performance hit. Our navigation mesh also has easily extensible support for adding special links and modifiers for complex navigation.”
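The asynchronous pattern Polge describes (submit a path request, keep the frame running, collect the result once the worker finishes) can be sketched as follows. The function names and the trivial straight-line search are placeholders, not Unreal Engine 4’s actual interface:

```python
# Sketch of asynchronous pathfinding: the game thread submits a request
# and keeps simulating; a worker thread runs the (here trivial) search.
from concurrent.futures import ThreadPoolExecutor

def find_path(start, goal):
    # Stand-in for a real navmesh search: evenly spaced waypoints.
    steps = max(abs(goal[0] - start[0]), abs(goal[1] - start[1]))
    return [(start[0] + (goal[0] - start[0]) * i // steps,
             start[1] + (goal[1] - start[1]) * i // steps)
            for i in range(steps + 1)]

executor = ThreadPoolExecutor(max_workers=1)

# Game thread: fire off the request without blocking the frame.
request = executor.submit(find_path, (0, 0), (4, 4))

# ...simulate other game systems for a few frames...

path = request.result()  # collect once the worker is done
```

The design choice is the one Polge highlights: because the search runs off the game thread, a slow or failed query costs latency rather than frame time.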
AI developer Ben Sunshine-Hill of Havok says facing the challenge to maintain immersion in games comes down to “defensive coding”. “To some degree that means contingency plans for your contingency plans. But it also requires a very careful and analytical approach to development,” explains Sunshine-Hill.
“There’s a temptation to concentrate on the ‘common case’ during AI design, with behaviours that look their best when the player’s playing along and everything’s going right, and to add special case handling later, as testing reveals specific weaknesses in a character’s AI.
“But this approach becomes unsustainable as AI becomes more complex. Special-case fixes tend to breed new special-case failures. Instead, there needs to be a constant focus on elegance and robustness; on techniques that are provably applicable to every corner of their problem space.”
Havok’s own AI technology has been used in Guild Wars 2, Halo 4 and the recent Devil May Cry reboot, and these titles exhibit some of the latest advancements in the field.
The most prominent change, Sunshine-Hill says, has been the rise of animation-driven locomotion (ADL), where a character’s velocity in the world is entirely determined by the character animation system.
“This is a huge step forward for animation quality, and it’s been a long time coming in the triple-A space, but lately it’s being leveraged by smaller teams as well. ADL is a challenge for conventional pathfollowing and collision avoidance systems, which aren’t used to having their motion decisions reinterpreted, smoothed or countermanded entirely. So the challenge in the pathfinding space is to design systems that stay robust and effective under these conditions,” he explains.
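A toy model makes the constraint Sunshine-Hill describes concrete. Here the per-frame root-motion deltas are made-up data, and the class is illustrative rather than anything from Havok’s toolset:

```python
# Animation-driven locomotion in miniature: each frame, the character
# moves by whatever root-motion delta the current clip provides. The AI
# can only *request* a direction; it must re-plan from wherever the
# animation actually put the character.
WALK_CYCLE = [(0.0, 0.3), (0.0, 0.5), (0.0, 0.4)]  # per-frame root deltas

class Character:
    def __init__(self):
        self.pos = [0.0, 0.0]
        self.frame = 0
    def advance(self, desired_dir):
        # The animation system, not the AI, decides the displacement;
        # desired_dir only selects which clip plays (ignored in this toy).
        dx, dy = WALK_CYCLE[self.frame % len(WALK_CYCLE)]
        self.pos[0] += dx
        self.pos[1] += dy
        self.frame += 1
        return tuple(self.pos)

c = Character()
for _ in range(3):
    c.advance(desired_dir=(0.0, 1.0))
# After three frames the character has covered 1.2 units, not the 3.0
# a velocity-driven controller might have assumed: the pathfollower
# must stay robust to exactly this mismatch.
```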
Pathfinding, or the process of giving game characters the logic to successfully navigate the gameworld, remains a chief priority for AI software makers. However, as AI specialist and Xaitment CEO Mike Walsh states, the focus is shifting to character behaviour.
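Pathfinding in its simplest form can be sketched with textbook A* over a grid of walkable cells. The grid and search below are a generic illustration, not any vendor’s middleware:

```python
# Grid pathfinding with A*: the textbook technique behind giving a
# character the logic to navigate the gameworld.
import heapq

def astar(grid, start, goal):
    """grid is a set of walkable (x, y) cells."""
    def h(a, b):  # Manhattan-distance heuristic (admissible on a grid)
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    frontier = [(h(start, goal), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and nxt not in seen:
                heapq.heappush(frontier,
                               (cost + 1 + h(nxt, goal), cost + 1,
                                nxt, path + [nxt]))
    return None  # no route exists; handling this gracefully is the
                 # "not doing stupid stuff" Young talks about

walkable = {(x, y) for x in range(4) for y in range(4)} - {(1, 1), (1, 2)}
route = astar(walkable, (0, 0), (3, 3))
```

Real navmesh systems replace the grid with polygons and add smoothing and dynamic updates, but the search at the core is recognisably this.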
“Gamers and games developers want characters to have multiple behaviours and react in more realistic ways. Players want characters that can demonstrate different behaviours depending on the situation.”
On an interactive level, there’s much enjoyment to be had from the seemingly endless collection of semi-connected tasks and quests in the two most recent Elder Scrolls games. But where the games fall down is in giving characters the intelligence and range to treat situations in context.
For instance, if a player engages an NPC in conversation, they expect to see more going on than a handful of stock gestures or head movements for every encounter. Characters need to loosen up, and communicate with their bodies as well as their lips.
Contrary to what some think, achieving this is not the sole responsibility of engineers, argues Xaitment’s Walsh.
“We’ve seen a shift from AI being an engineering-centric problem towards one that includes design and often art as well. Designers and artists need tools that they can work with to help bring their visions to life, as opposed to relying on engineering to hard code something in the engine,” he says.
“The next generation of characters will have to demonstrate multiple behaviours in complex environments. Typically development teams have handled character behaviours with scripting. The level of character sophistication often depends on how well the development team can structure and maintain all of the scripts. Games are becoming much more complex, and thus developers are looking for an alternative system for handling complex characters in complex environments.”
Speaking of next-gen, on the surface it would appear AI development stands to gain much from the increased processing power of the PS4 and the Xbox 360’s successor.
“I think the additional CPU power and memory will allow ‘bigger’ games: a larger number of NPCs with more natural interactions such as in crowds, larger scale environments with denser and richer annotations to help with the spatial reasoning or uncommon navigation schemes,” predicts Autodesk’s Aldebert.
However, besides hardware power, the quality of the character behaviour will be strongly dependent on the software tools, he adds.
Xaitment’s Walsh agrees: “Beefier machines don’t create better AI by themselves. They simply allow for more complexity. It’s the designers, engineers and artists in conjunction with better tools that are needed to create better AI and better games.”
Essentially, says PathEngine’s Young, there are some areas where better hardware can help with AI, but in many cases hardware capabilities are not what’s holding AI implementation back. Improved graphics and animation fidelity actually increase the cost of implementing better character behaviours, he warns, and he believes it’s issues like this that are making top-tier games even harder to tune.
“I think there’s a problem with triple-A titles being fundamentally hard to scale, which isn’t going to be solved by the next-gen hardware improvements, and in order for us to see really big improvements in AI and character behaviour it may be necessary to find ways around these kinds of scalability issues.
“Moving towards more procedural generation is something that can be interesting from this point of view, and automating content generation processes is definitely a good idea,” says Young.
What’s certain is the range of tasks AI is expected to undertake is growing. Havok’s Sunshine-Hill says AI will need to respond to a wider selection of natural inputs, such as voice commands.
“New hardware supports new modalities of interaction, and those place new demands on AI,” says Sunshine-Hill.
“Speech recognition is a good recent example of this. Directing AI-driven squad mates with spoken commands requires semantic and spatial reasoning techniques we’ve never needed before. And in this next generation, as game worlds become more physically dynamic, we’ll need AI which can instantly comprehend and react to changes.”
So a time when entire portions of a game can be controlled through voice commands, or even conversation – such as Mass Effect, LA Noire or Sony’s somewhat fanciful Eyedentify tech demo from years back – may be on the horizon.
99 PROBLEMS, BUT A BRAIN AIN’T ONE
Before all that though, the experts say there are plenty of challenges that still need to be addressed. Complex and increasingly reactive animation systems pose a lot of constraints, Autodesk’s Aldebert says.
Foot planting, inertia, jumps, ladders, or unique, irregular character moves can all create constraints that make it more difficult to plan the movement of NPCs, he says, so “we always need to anticipate how game design and level design might evolve, in order to update the tools we use to meet the expectations of developers, who are trying to meet the rising expectations of players.”
For Sunshine-Hill, providing Havok AI tools means not leaving developers with excessive amounts of customisation to do before they can use a tool.
“As a middleware developer specifically, the biggest challenge I face is providing AI tools which don’t require too much tuning or customisation by the end user,” he explains. “It’s easy to expose a hundred knobs to turn, but they’re useless if it isn’t immediately clear which ones to tweak in order to accomplish a particular effect, and worse than useless if some of them need to be tweaked before things will work at all. I’m constantly looking for ways to replace ‘tune this variable until you find the right answer’ with ‘the system automatically solves for the right answer’.”
As well as more efficient tools, Epic’s Polge says the key challenge is also making the AI’s actions and thought process clearly discernible to the player, and effectively using AI to provide interesting challenges and enrich the game experience.
There are many tests ahead for this field of development, but it’s no exaggeration to say that the best is yet to come. Getting there will require collaboration as much as technology, as Xaitment’s Walsh concludes: “AI is an interdisciplinary problem because it involves design, art and engineering. The really cool behaviours need to be visualised to demonstrate they are occurring at the right time. AI designers, engineers and artists need to work closely with each other to produce seamless game AI.
“Ultimately, designers and artists need increased creative control, but they are usually reliant upon engineering to get content into the game engine. The trick is to enable the design team by providing them with the tools necessary to create any character behaviours, whether for robots, pets or aliens. Tools that would provide a visual representation of the problem, allow design and art to rapidly iterate, re-use assets and also debug. This would enable designers and artists to improve AI and, more specifically, character behaviours.”
SMARTS ON YOUR SMARTPHONE
Though triple-A and the demands of high-end games have got the attention of AI makers, they’ve not forgotten about the rapidly expanding mobile market.
The challenge, then, is to provide tools that serve both markets, which are converging in some areas but remain worlds apart in others.
“Mobile platforms are increasingly powerful and common, so in theory we could deliver console-type gameplay experiences, the ones requiring advanced AI,” predicts Autodesk’s Guillaume Aldebert.
“The input system (touch screens) and smartphone player demographic drives the gameplay experiences toward arcade games – Angry Birds, Cut the Rope, Temple Run, or social competitive/collaborative game design. But as time passes, the evolution of mobile gaming will begin to require advanced AI systems to deliver increasingly elaborate experiences.
“For instance, your game design might be different from a traditional console title, but as soon as you need an autonomous navigating entity, you will require an advanced AI system. And with production values and player expectations continuously increasing, mobile developers will want to use tools that are accessible.”