Develop talks to some of the finest minds in motion capture to discuss where the tech is heading

Keeping in motion

Motion capture technology has come a long way in recent years. Films like Avatar are becoming old news, and the likes of Rockstar’s ground-breaking L.A. Noire has shown that even the current generation of console hardware, with its slower cycle, can continue to push boundaries.

Although the various methodologies motion capture encompasses are not without their constraints and difficulties, an increasingly democratised technological landscape has ushered in an era of accessibility and affordability.

There is now more pre-visualisation, there are more alternative capture methods, and ever smaller cameras can record ever subtler nuances in an actor’s performance. There has also been a move into non-optical capture, including inertial systems and surface capture, as well as the emergence of the virtual camera.

“When I first started using motion capture, there was no pre-visualisation of the motion capture data at all,” reminisces James Comstock, VP of Engineering at facial animation experts Captive Motion.


“We essentially had to build the tools by hand. Now developers can buy systems that will display full body motion capture in real-time and allow them to see exactly what they are getting as the capture is happening.”

Audiomotion co-founder Mick Morris agrees that mo-cap has changed significantly over the last decade, and at an impressive pace.

“In our 13 years, the tech has improved dramatically from tiny capture volumes, low-res cameras and limited tools for post-production. Today we can utilise over 100 cameras, and capture huge volumes with multiple performers, faces and fingers included.

“For Audiomotion this is a result of working very closely with Vicon here in Oxford, collaborating with the engineers and helping them develop cutting-edge solutions.”

Hein Beute, product manager of Xsens, whose tools were used in Sony’s Killzone 2, says that the technology has moved on to become more accessible and provide better and faster pipelines.

“Motion capture is getting more accessible to different kinds of users,” he suggests. “Ease of use has improved and because of this it also gets more integrated into production pipelines. Animators see it as a tool that allows them to achieve higher quality animations in less time.”

As it currently stands, motion capture has hit a point where cost effectiveness intersects with top quality animation. The technology has become so user friendly that Jim Richardson, president of optical tracking providers NaturalPoint, believes it is no longer just a tool for triple-A developers.

“While our technical approach is similar to that of other optical tracking companies, like Vicon or Motion Analysis, we’re marketing systems that are priced to meet an independent budget, but powerful enough to drive triple-A development,” he says.

“As affordable new technologies entered the market, forcing prices down, large and small studios alike began adjusting to the reality of motion capture as a key pipeline component. Options now exist for essentially every budget and every type of tracking application.”

Phil Elderfield of optical and motion tracking specialists Vicon, however, believes that there is still a lot for the technology to do to be more user and budget friendly.

“There is work still to be done to make motion capture the slave of the developer and the performers, rather than technology dictating a way of working. Mo-cap needs to successfully operate in an environment that reflects game narrative,” he says.


Recalling a conversation with Quantic Dream mo-cap supervisor Steve Olson, Elderfield says that being able to shoot performances on location would be a much more cost-effective way of animating, and would offer more realistic movements, once the tech is fully realised.

“The mo-cap set is ideal for efficiency, but Heavy Rain is real world-based, and there is just a ton of back and forth between modelling and animation over heights of railings and so on,” Olson reportedly told Elderfield.

“It would be great to be able to set up a system like a film shoot for a handful of shots. It would save us from having to build ridiculous and expensive sets that are only needed for three seconds.”

This is one aspect that currently troubles games animators, and acts as a constraint on capturing the full realism of character movement and emotion. Comstock notes there are always issues with expertise and training when using motion capture, because of the technical complexity of the process, and these need to be overcome to provide a more seamless approach to filming.

“We are still a long way from motion capture being as easy as pointing a camcorder at someone and capturing their performance,” he states. “However, with depth sensing technologies like we have seen in devices like Kinect, I think we’re going to get there sooner rather than later.

“After that, it’s going to be less about capturing explicit markers, and more about capturing entire environments, props, clothing and all.”

Imagination Studios founder John Klepper says that one of the larger challenges is to provide actors with freedom of movement so they can express themselves, which will, in turn, benefit the game.

“A big challenge yet to be adequately overcome is the ability to capture actors without the need for special suits or head mounted cameras, such that one could film in a normal way with normal clothes and simultaneously capture the actors’ motion in high resolution.”

He adds that TV and film have no such restrictions and continue to progress in the field, even introducing new animation technologies, such as complete facial motion capture and new lighting techniques.

“Future computer animation may bring the effects even closer to perceivable reality,” he says.


Motion Analysis director Bo Wright agrees that one of the biggest limitations in the industry is giving actors more freedom. He argues that too much hands-on tech can inhibit performance.

“I think the biggest limitation is finding a less intrusive way to capture facial motion. Right now the performer has to wear either markers affixed to their face or wear a head-mounted camera rig. Both of them are limiting to the production and performance.”

Working with and providing actors and directors with the tools and environments they need is a challenge the industry has taken on, albeit with mixed results. L.A. Noire and Heavy Rain are examples of how performance capture can be done in the correct way, engaging the player in real emotions, immersing them into the game world, whilst moving the industry forward.

Comstock says that the technology is in fact already here and ready for use, and consoles can easily make use of the fidelity of the animations that can be provided.

“It’s time to really take advantage of it,” he states. “Of course, with more power comes more responsibility. A bad actor is still going to be a bad actor and a terrible script is going to be terrible no matter how great the characters and animation look.”

Morris agrees that acting needs to take on a more important role with developers in future games if studios are to take advantage of current tech and create a believable experience. He also believes that hiring the right director and bringing them in early on in production is equally essential, as the chemistry between a good actor and director can provide moments too rarely seen in gaming and more often associated with film.

“Casting the right actors and director is crucial,” he says.

“The chemistry between a good director and the actors on a full performance stage can be amazing; hairs on the back of your neck stuff.

“It’s very difficult to do this if they aren’t on the stage together, recording everything at the same time. What’s the point of re-recording or dubbing when you can capture that magic between director and actor as it happens?”

Alexandre Pechev, founder of full body solving and animation outfit Ikinema, also believes directors need a more integral role in the filming and animation process.

He says that by merging pre-visualisation and post-processing, which requires fast solving of the CG characters in real-time, directors would then be able to work on near-final quality characters.

Whilst some developers still need to recognise the importance of acting and directing, as well as the various technical limitations, there appears to be unanimous agreement across the sector that the slower console cycle has not hindered the progress of motion capture, despite studios working with the same hardware for longer.

Elderfield says that the longer lifespan of the PS3, Xbox 360 and Wii has in fact helped motion capture cement a more important role in gaming as costs can go straight into IP, rather than into R&D for new systems.

“I don’t think it’s prohibiting any advancement in motion capture,” he claims. “Contrary to this, I think it has helped developers become more comfortable with what motion capture is capable of right now. As developers become more accustomed to their consoles’ power and capabilities, they are more willing to divert resources toward testing how to get more out of their engines, which in turn allows for better animations.”


Richardson echoes these sentiments, stating that although the current generation’s systems have relatively limited real-time capabilities, restricting what can be offered graphically, motion capture serves numerous other markets, such as robotics, and is in a constant state of development.

“In that way, the gaming market will continue to benefit from the more rigorous demands of the scientific and film markets,” says Richardson. “If gaming engines and hardware progress to Avatar-like quality, in real-time, motion capture architecture will already be in place to drive those animations.”

The Creative Assembly’s mo-cap manager Peter Clapperton sums up the situation: “It may be a little known fact in the entertainment industry that one of the largest areas of R&D for motion capture technology is the medical profession, which, as you might imagine, demands absolute precision.”

This doesn’t mean that there isn’t more console manufacturers can do to help. Further advancements in real-time processing and rendering need to be made at both the software and hardware levels to bring the industry closer to film quality, and Sony, Nintendo and Microsoft all need to work with mo-cap studios to make the most of the tech.

“The key notion here is for any developers or console manufacturers to consult more and collaborate more with motion capture studios – at the very basis of any game development,” offers Klepper.

Vicon’s Elderfield says that developers also need to collaborate more in advance to find out what is possible with mo-cap and perhaps even discover tech they weren’t aware of before. He insists that whilst customer feedback often provides a list of specific functionality and detailed workflows, providing the information on the intended end goal would be more useful to animators.

“Tell us what you’re trying to achieve and let us find the technology. Ask yourself what are the important areas now, and what will they be in three years’ time?” suggests Elderfield.

“It may be that a particular button or workflow can be completely superseded by a new approach or piece of technology that we have up our sleeve.”

Comstock agrees that motion capture studios should be involved in development earlier, adding: “Spend more time working with the facial capture providers to integrate them into their productions. The technology is here and ready for use, and the consoles can easily make use of the fidelity of the animations that can be provided. It’s time to really take advantage of it.”


Wright believes that developers have also become too comfortable with the progress made on current generation systems, and are reluctant to move on.

“I think we are at a plateau of standardisation where developers have settled into a commoditised world of what motion capture brings to a project and aren’t looking to jump in the deep end again. They understand how it fits into their production and budgets, and aren’t looking for a new way of doing things right now.”

He states, however, that progress in the last two years on presenting characters moving within their environments will “definitely spur on a revolution” in the sector over the next few years.

As for the future, the experts in the mo-cap sector believe progress will be made in a plethora of areas.

Clapperton says the biggest area for progression is in markerless technology.

“We’re seeing markerless facial mo-cap increasing fast at the moment, and markerless technology has the biggest potential for progression, as it’s still in its infancy,” he says.

“For most however, the costs involved are currently prohibitive compared to the more established technologies such as optical capture.”

Richardson echoes these sentiments, further stating that NaturalPoint is anticipating the eventual hybridisation of all technologies, with the best approaches converging and filling the gaps between the current democratised models of mo-cap.

“This could result in a single system that utilises both marker-based and markerless tracking via optical sensors that are combined with inertial, fibre optic, and ultrasonic sensors for added redundancy, accuracy, and tracking range,” he explains.

Ikinema’s Pechev says that for the technology to really progress, a number of improvements and current technical issues will need to be resolved.

“It feels that most of the need in the future will be on software improvements over the current solvers and retargeting tools with emphasis on real-time,” he says.

As motion capture moves forward, potentially merging techniques along the way, the systems used for the process will no doubt continue to become faster and easier to use.

“Certainly the tech will drive toward ease of use and the ability to be productive in mixed environments,” predicts Elderfield. “An idea I find interesting is the marriage of various differing capture approaches in a way that exploits the benefits of each and overcomes the limitations.”

Mo-cap studios should be wary, then, that through all these advances, if the technology begins to merge and move beyond its current democratised state into a new realm of high quality and realism, competition between providers will only intensify.

Whilst the future looks bright for the sector, game developers and console manufacturers need to work more closely with mo-cap studios to take full advantage of the tech that will already be in place, delivering the film-quality animation that developers and players crave and taking gaming into a new era of realism.
