Creating better digital denizens

We are incredibly sensitive to human movement and appearance, which makes it a big challenge to create believable computerised crowds, but researchers at Trinity are working to change that

AS YOU SETTLE down to watch the seasonal feast of movies on TV this Christmas – or maybe as you get stuck into computer games that found their way under the tree – take a closer look at any virtual humans that crop up.

Are they realistic? Can you get a sense of the emotion they are meant to portray? Or are they jarring and weird with gestures and tones that somehow don’t quite add up?

Getting those computer-generated avatars to act in engaging and more “human” ways is trickier than it looks. But researchers at Trinity College Dublin are delving into how we perceive graphical characters and coming up with insights to create more socially realistic virtual humans without demanding too much computer processing power.


“We try to work out what is it about the appearance and the behaviours and voices of virtual humans – be they in crowds or groups or on their own – that makes them more appealing and believable,” explains Carol O’Sullivan, professor of visual computing at Trinity College Dublin.

One of the projects her team works on is Metropolis, which looks to create a realistic simulation of a “virtual Dublin” based on research in computer graphics, engineering and cognitive neuroscience.

Getting the crowds right in this computerised cityscape is important, according to O’Sullivan, who with collaborator Prof Fiona Newell at the Trinity College Institute of Neuroscience has been looking at how we perceive groups of virtual people.

“There’s very little research about how we perceive the crowd en masse,” says O’Sullivan. “Do you perceive the emotion of a crowd by looking through each member of the crowd individually or is there a collective impression that you get from the crowd?”

The team has been trying to work out smarter ways of making simulated crowds look more varied without the expense of creating a model for each individual, and they are finding that altering the upper bodies and faces on common templates is a good way to get more bang for your buck.

“We used eye tracking to see how people view parts of the body when they are looking at the crowd and we found that they focused almost exclusively on the body and the face,” says O’Sullivan. “When we changed the lower body it had no effect at all.”

The disguises need not be too elaborate either if you target them to that eye-catching upper portion, she adds – putting on a beard, hat or glasses or changing the skin shade, hair colour or texture of their top can make a difference relatively cheaply in real time.
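The article does not describe how this is implemented, but the idea can be sketched in a few lines of code. The following Python snippet is purely illustrative and assumes a made-up data model (the attribute names, options and the CrowdCharacter class are not from the Metropolis project): a handful of shared body templates are reused, and only upper-body traits such as hair colour, skin shade, accessories and the texture of the top are randomised.

```python
import random
from dataclasses import dataclass

# Hypothetical attribute options; assumptions for illustration only.
HAIR_COLOURS = ["black", "brown", "blonde", "grey", "red"]
SKIN_SHADES = ["light", "medium", "tan", "dark"]
TOP_TEXTURES = ["plain", "striped", "checked", "logo"]
ACCESSORIES = ["none", "hat", "glasses", "beard"]

@dataclass
class CrowdCharacter:
    template_id: int     # index of a shared base mesh/animation template
    hair_colour: str
    skin_shade: str
    top_texture: str
    accessory: str

def make_crowd(n_characters: int, n_templates: int = 3) -> list[CrowdCharacter]:
    """Reuse a few base templates but vary only upper-body traits,
    since viewers were found to focus on faces and upper bodies."""
    return [
        CrowdCharacter(
            template_id=random.randrange(n_templates),
            hair_colour=random.choice(HAIR_COLOURS),
            skin_shade=random.choice(SKIN_SHADES),
            top_texture=random.choice(TOP_TEXTURES),
            accessory=random.choice(ACCESSORIES),
        )
        for _ in range(n_characters)
    ]

if __name__ == "__main__":
    for character in make_crowd(5):
        print(character)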

Linking sounds with movements is also crucial, adds O’Sullivan, and Prof Henry Rice from Trinity’s school of engineering is figuring out how to match sounds like the thud of footsteps with simulated movements, which is not an easy task with large crowds.
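One plausible piece of that puzzle, sketched below purely as an assumption rather than the Trinity team’s actual method, is detecting foot-contact events in the animation data so a footstep sound can be triggered at exactly the moment the foot lands. The height threshold and frame rate here are made-up values.

```python
# Minimal sketch: find the times at which a foot touches the ground,
# so footstep sounds can be scheduled in sync with the motion.

CONTACT_HEIGHT = 0.03  # metres above the floor counted as "in contact" (assumed)

def footstep_times(foot_heights, frame_rate=30.0):
    """foot_heights: per-frame height of one foot in metres.
    Returns the times (in seconds) at which the foot first touches down."""
    times = []
    was_down = False
    for i, height in enumerate(foot_heights):
        is_down = height < CONTACT_HEIGHT
        if is_down and not was_down:   # the foot has just landed
            times.append(i / frame_rate)
        was_down = is_down
    return times

# Example: a foot that starts on the ground, lifts twice and lands twice
# over two seconds of animation at 30 frames per second.
heights = [0.0] * 10 + [0.2] * 20 + [0.0] * 10 + [0.2] * 15 + [0.0] * 5
print(footstep_times(heights))  # -> [0.0, 1.0, 1.8333333333333333]
```

Scaling this up to a large crowd, where hundreds of such events must be matched to sounds every second without overwhelming the audio engine, is what makes the problem hard.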

Researchers from the team also sat together and attached markers to themselves so they could capture their movements and voices on camera as they conversed.

That built up a large corpus of data to tease out the subtle synchronies between gestures and sounds that our brains register without us even thinking about it, but which can come across as awkward if they are broken or desynchronised.

The Trinity team is also collaborating with Jessica Hodgins, director of Disney Research Pittsburgh and professor at Carnegie Mellon University – they have been filming actors playing out short vignettes, such as a couple having an argument over money or a person getting frustrated with a crashed computer, then looking at how we engage with simulated versions of the characters.

Changing the synchrony even slightly can alter how viewers perceive the avatars and what they are portraying, explains O’Sullivan, who notes that understanding these nuances could help drive a story in particular ways.

Overall, the Trinity group hopes its research, which is funded by Science Foundation Ireland and supported by various companies, will inform not only how to make characters and environments more socially realistic in movies and games, but also how best to use avatars in online training and when encouraging groups such as older people to engage with technology, she adds.