Avatars
All posts relating to Avatars
Mobile Phone Robot
Canadian researchers trying to integrate robots into our lives have come up with a pair of dancing, crying mobile phone ‘bots. The robots, called Callo and Cally, are mobile phones with limbs.
Cally stands about 18cm high and walks, dances and mimics human behavior. Callo stands about 23cm tall, and his face, which is a cell phone display screen, shows human facial expressions when he receives text-messaged emotions. When he receives a smile emoticon, Callo stands on one leg, waves his arms and smiles. If he receives a frown, his shoulders slump and he will cry. If he gets an urgent message, or a really sad one, he’ll wave his arms frantically.
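The emoticon-to-gesture behavior described above can be sketched as a simple dispatch function. This is only an illustration of the idea; the gesture names and matching rules are hypothetical, not taken from the actual Callo firmware.

```python
# Hedged sketch: mapping text-messaged emoticons to robot gestures,
# in the spirit of Callo's described behavior. Gesture names are
# illustrative placeholders, not the project's real commands.
def gesture_for(message):
    """Pick a gesture based on the content of an incoming text message."""
    if "urgent" in message.lower():
        return "wave_arms_frantically"   # urgent messages trigger frantic waving
    if ":(" in message:
        return "slump_and_cry"           # frown emoticon: shoulders slump, robot cries
    if ":)" in message:
        return "stand_on_one_leg_and_wave"  # smile emoticon: one-legged wave and smile
    return "idle"
```

A real implementation would run on the phone's messaging API and drive the Bioloid servos, but the core logic is just this kind of message-to-gesture lookup.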
PhD student Ji-Dong Yim and Prof Chris D. Shaw from the School of Interactive Arts and Technology at Simon Fraser University in Canada collaborated to create the robot by combining a Nokia N82 phone with components from a Bioloid robotics kit.
Along with the ability to move in preprogrammed patterns when receiving phone calls from different numbers, the robot can also detect human faces using OpenCV (Open Source Computer Vision). It uses wireless networking, text messaging and other interactive technologies to communicate human emotions. It's a "simple avatar system", according to Yim.
The robot’s face, which is actually a phone screen, registers text-messaged emotions as human-like facial expressions.
“When you move your robot, my robot will move the same, and vice versa, so that we can share emotional feelings using ‘physically smart’ robot phones,” he says in an SFU release.
The Avatar Gaze
Can’t tell whether your friend in the virtual world is lying to you? Avatars that mimic our real-world eye movements could make it easier to spot whether someone is telling the truth online.
Most virtual worlds, such as Second Life, are full of avatars with static or pre-programmed gazes. One way to make interactions feel more realistic is to reproduce a person’s eye movement on their avatar, said William Steptoe of University College London and colleagues.
Now research has found that real-world eye movement could make it easier to spot whether an avatar is telling the truth or not. The researchers asked 11 volunteers personal questions, such as to name their favourite book, and told them to lie in some of their answers. During the interviews, the volunteers wore eye-tracking glasses that recorded their blink rate, direction and length of gaze, and pupil dilation. Then, a second group of 27 people watched a selection of clips of avatars as they delivered the first group’s answers.
Some avatars had eye movements that mirrored those of the original volunteers, while others had no eye movement at all. The observers were asked whether they believed the avatars were being truthful or lying. On average, they were able to identify 88 per cent of truths correctly when the avatars had eye movement, but only 70 per cent without.
Spotting lies was harder, but eye movement provided 48 per cent accuracy compared with 39 per cent without. It is unclear exactly how the eye movements help. However, the eye-tracking glasses did show that people tended to hold the gaze of the interviewer for longer when telling the truth than when lying.
“Perhaps they were overcompensating,” New Scientist quoted Steptoe as saying. What’s more, their pupils dilated more when lying – something previous studies have linked with the greater cognitive load required for deception. “This is one of a small handful of cues that you can’t control,” said Steptoe.
Enhancing expressive features such as eye movement could eventually make avatar-mediated communication feel more trustworthy than online video, because only relevant visual cues need to be displayed, said Steptoe. The technology could help in business meetings held in virtual environments, or to enhance communication between people with social phobias, where face-to-face interaction can seem daunting, he added.
The results of the study will be presented at the 2010 Conference on Human Factors in Computing Systems in Atlanta, Georgia. (ANI)
Source: Science News
Who is involved?
Robots and Avatars brings together an intergenerational group of people from the education, creative industries and new media sectors, the robotics and avatar worlds, work and behavioural psychologists, artists, and key experts on the future economy and future workplace. This includes UK-based and international experts (from the US, Europe and Korea), plus working groups and innovative panel discussions to explore the themes in depth.
Advertising/marketing > AI > Architecture > Arts > Avatars > Business development > Cultural strategy > Digital Economy > Education > e-Participation > Gaming > Health/Medical > Interactive Design > New Media > Public Engagement/Participation > Robotics > Science > STEM > TV > Virtual worlds > Webstreaming > Youth Strategy