Oct 07, 2006
Spatial updating in virtual reality: the sufficiency of visual information.
Psychol Res. 2006 Sep 23;
Authors: Riecke BE, Cunningham DW, Bülthoff HH
Robust and effortless spatial orientation critically relies on "automatic and obligatory spatial updating", a largely automatized and reflex-like process that transforms our mental egocentric representation of the immediate surroundings during ego-motions. A rapid pointing paradigm was used to assess automatic/obligatory spatial updating after visually displayed upright rotations with or without concomitant physical rotations using a motion platform. Visual stimuli displaying a natural, subject-known scene proved sufficient for enabling automatic and obligatory spatial updating, irrespective of concurrent physical motions. This challenges the prevailing notion that visual cues alone are insufficient for enabling such spatial updating of rotations, and that vestibular/proprioceptive cues are both required and sufficient. Displaying optic flow devoid of landmarks during the motion and pointing phase was insufficient for enabling automatic spatial updating, but could not be entirely ignored either. Interestingly, additional physical motion cues hardly improved performance, and were insufficient for affording automatic spatial updating. The results are discussed in the context of the mental transformation hypothesis and the sensorimotor interference hypothesis, which associates difficulties in imagined perspective switches with interference between the sensorimotor and cognitive (to-be-imagined) perspectives.
Oct 05, 2006
Scientific American online reports about a physics experiment conducted by Eugene Polzik and his colleagues at the Niels Bohr Institute in Copenhagen, which looks like the "Beam-me-up, Scotty" technology of Star Trek:
At long last researchers have teleported the information stored in a beam of light into a cloud of atoms, which is about as close to getting beamed up by Scotty as we're likely to come in the foreseeable future. More practically, the demonstration is key to eventually harnessing quantum effects for hyperpowerful computing or ultrasecure encryption systems. Quantum computers or cryptography networks would take advantage of entanglement, in which two distant particles share a complementary quantum state. In some conceptions of these devices, quantum states that act as units of information would have to be transferred from one group of atoms to another in the form of light. Because measuring any quantum state destroys it, that information cannot simply be measured and copied. Researchers have long known that this obstacle can be finessed by a process called teleportation, but they had only demonstrated this method between light beams or between atoms...
Read the full story
Teliris, a company that develops telepresence solutions, has announced its 4th generation offering. From the press release:
The VirtuaLive(TM) enhanced technology provides the most natural and intimate virtual meeting environment on the market, and is available in a broad set of room offerings designed to meet the specific needs of its customers.
Building on Teliris' third generation GlobalTable(TM) telepresence solutions, VirtuaLive(TM) provides enhanced quality video and broadband audio, realistically replicating an in-person meeting experience by capturing and transmitting the most subtle visual gestures and auditory cues.
"All future Teliris solutions will fall under the VirtuaLive(TM) umbrella of offerings," said Marc Trachtenberg, Teliris CEO. "With such an advanced technology platform and range of solutions, companies can select the immersive experience that best fits their business environment and goals."
VirtuaLive's(TM) next generation of Virtual Vectoring(TM) is at the center of the new offerings. It provides users with unparalleled eye-to-eye contact from site-to-site in multipoint meetings with various numbers of participants within each room. No other vendor offering can match the natural experience created by advanced technology in such diverse environments.
Nick Yee and colleagues at Stanford University have investigated whether social behavior and norms in virtual environments are comparable to those in the physical world. To this end, they collected data from avatars in Second Life, in order to explore whether social norms of gender, interpersonal distance (IPD), and eye gaze transfer into virtual environments even though the modality of movement is entirely different.
"Results showed that established findings of IPD and eye gaze transfer into virtual environments: 1) Malemale dyads have larger IPDs than female-female dyads, 2) male-male dyads maintain less eye contact than female-female dyads, and 3) decreases in IPD are compensated with gaze avoidance"
According to Yee and colleagues, these findings suggest that social interactions in online virtual environments are governed by the same social norms as social interactions in the physical world.
Yee, N., Bailenson, J.N., & Urbanek, M. (2006). The unbearable likeness of being digital: The persistence of nonverbal social norms in online virtual environments. CyberPsychology & Behavior, in press.
Sep 17, 2006
Delwin Clarke and P. Robert Duimering.
Computers in Entertainment (CIE), Volume 4 , Issue 3 (July 2006)
Very little is known about computer gamers' playing experience. Most social scientific research has treated gaming as an undifferentiated activity associated with various factors outside the gaming context. This article considers computer games as behavior settings worthy of social scientific investigation in their own right and contributes to a better understanding of computer gaming as a complex, context-dependent, goal-directed activity. The results of an exploratory interview-based study of computer gaming within the "first-person shooter" (FPS) game genre are reported. FPS gaming is a fast-paced form of goal-directed activity that takes place in complex, dynamic behavioral environments where players must quickly make sense of changes in their immediate situation and respond with appropriate actions. Gamers' perceptions and evaluations of various aspects of the FPS gaming situation are documented, including positive and negative aspects of game interfaces, map environments, weapons, computer-generated game characters (bots), multiplayer gaming on local area networks (LANs) or the internet, and single player gaming. The results provide insights into the structure of gamers' mental models of the FPS genre by identifying salient categories of their FPS gaming experience. It is proposed that aspects of FPS games most salient to gamers were those perceived to be most behaviorally relevant to goal attainment, and that the evaluation of various situational stimuli depended on the extent to which they were perceived either to support or to hinder goal attainment. Implications for the design of FPS games that players experience as challenging, interesting, and fun are discussed.
Sep 16, 2006
The TelePresence World event series seeks to introduce telepresence technologies and explore their use in industry, government, education, medical and other fields.
Event delegates will have the opportunity to debate and discuss the revolutionary technological developments that have brought telepresence from the realm of science fiction to the reality of everyday business.
To demonstrate the power of telepresence and unified communications to bridge distance and bring people together, TelePresence World 2007 will include a concurrent exhibition to be held in the university's 25,000-square-foot, state-of-the-art pavilion.
Aug 02, 2006
Thanks to Giuseppe Riva (reporting from Siggraph 2006)
Urban Tapestries aims to enable people to become authors of the environment around them – Mass Observation for the 21st Century. Like the founders of Mass Observation in the 1930s, we are interested in creating opportunities for an "anthropology of ourselves" – adopting and adapting new and emerging technologies for creating and sharing everyday knowledge and experience; building up organic, collective memories that trace and embellish different kinds of relationships across places, time and communities.
It is part of an ongoing research programme of experiments with local groups and communities called Social Tapestries.
Jul 31, 2006
Phoenix, Arizona, 6-7 September 2006
From the conference website
The health sciences, perhaps more than any other discipline, depend on images and video, and on the analysis of those images, to be successful. This is true in biomedical research, in health education, and of course in clinical care. These images are increasingly digital and improving in resolution by orders of magnitude, which simultaneously allows more flexibility in analyzing the images and much better analysis. As the data resources grow exponentially, there is a growing need to share and compare images.
The ability to effectively use and share these resources is often a local issue with individual solutions being developed. There are exceptions and when they are discovered they are often treated as significant success stories. For example, the SUMMIT Project at Stanford University has created a significant set of stereoscopic and haptically enabled Digital Anatomy resources. These resources and the expertise at Stanford will be used across the nation and now internationally to teach anatomy courses.
As the health sciences continue to become more specialized and the educational resources become more difficult to locate, advanced networking that allows secure, reliable access to expertise and high quality resources is critical. The ability to create virtual organizations and collaborate with and among students, professors, researchers, and clinicians irrespective of location is of increasing value.
At the same time, large-capacity research networks (e.g., Internet2 and peered networks, GLORIAD) and high-quality video applications (e.g., DV-over-IP, developed by the WIDE Project, and stereoscopic HD video-over-IP, realized by GIST) are making such virtual collaboration technically possible and financially affordable. However, the absence of (human) networking opportunities has hampered the development of sustainable testbeds and slowed the rate of innovation.
This workshop will focus on our ability to effectively use and manipulate image / video resources irrespective of their location. It will also showcase many emerging technologies relevant to the medical field. Most importantly, it will provide an opportunity for those in the medical community to learn alongside experts in the area of video technologies and large capacity networking about the challenges ahead and to begin a discussion about how those challenges can be met.
Jul 27, 2006
Jul 05, 2006
Robert J. Moore , Nicolas Ducheneaut and Eric Nickell.
To date the most popular and sophisticated types of virtual worlds can be found in the area of video gaming, especially in the genre of Massively Multiplayer Online Role Playing Games (MMORPG). Game developers have made great strides in achieving game worlds that look and feel increasingly realistic. However, despite these achievements in the visual realism of virtual game worlds, they are much less sophisticated when it comes to modeling face-to-face interaction. In face-to-face interaction, ordinary social activities are "accountable," that is, people use a variety of kinds of observational information about what others are doing in order to make sense of others' actions and to tightly coordinate their own actions with others. Such information includes: (1) the real-time unfolding of turns-at-talk; (2) the observability of embodied activities; and (3) the direction of eye gaze for the purpose of gesturing. But despite the fact that today's games provide virtual bodies, or "avatars," for players to control, these avatars display much less information about players' current state than real bodies do. In this paper, we discuss the impact of the lack of each type of information on players' ability to tightly coordinate their activities and offer guidelines for improving coordination and, ultimately, the players' social experience.
Jun 04, 2006
Digital Chameleons: Automatic Assimilation of Nonverbal Gestures in Immersive Virtual Environments
Jeremy N. Bailenson and Nick Yee
Psychological Science, 16 (10)
Previous research demonstrated social influence resulting from mimicry (the chameleon effect); a confederate who mimicked participants was more highly regarded than a confederate who did not, despite the fact that participants did not explicitly notice the mimicry. In the current study, participants interacted with an embodied artificial intelligence agent in immersive virtual reality. The agent either mimicked a participant's head movements at a 4-s delay or utilized prerecorded movements of another participant as it verbally presented an argument. Mimicking agents were more persuasive and received more positive trait ratings than nonmimickers, despite participants' inability to explicitly detect the mimicry. These data are uniquely powerful because they demonstrate the ability to use automatic, indiscriminate mimicking (i.e., a computer algorithm blindly applied to all movements) to gain social influence. Furthermore, this is the first study to demonstrate social influence effects with a nonhuman, nonverbal mimicker.
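The abstract describes the mimicry mechanism only at a high level: the agent replays the participant's head movements after a fixed 4-second delay. A minimal sketch of such a delayed-replay buffer (all names and the pitch/yaw/roll pose representation are illustrative assumptions, not the authors' implementation) might look like:

```python
from collections import deque


class DelayedMimic:
    """Replays a tracked head pose after a fixed delay (4 s in the study),
    so the agent mirrors the user without the mimicry being obvious."""

    def __init__(self, delay_s=4.0):
        self.delay_s = delay_s
        self.buffer = deque()  # (timestamp, (pitch, yaw, roll)) samples

    def record(self, t, pose):
        """Store a tracked pose sample with its capture time."""
        self.buffer.append((t, pose))

    def playback(self, t, default=(0.0, 0.0, 0.0)):
        """Return the most recent pose that is at least delay_s old;
        before enough time has elapsed, hold a neutral default pose."""
        pose = default
        while self.buffer and self.buffer[0][0] <= t - self.delay_s:
            pose = self.buffer.popleft()[1]
        return pose
```

In use, the tracker would call `record` at its sampling rate while the rendering loop drives the agent's head with `playback`; swapping the buffer for another participant's prerecorded trace yields the study's control condition.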
Download the full paper here
Apr 07, 2006
Virtual Reality Volume 9, Number 4; Date: April 2006; Pages: 226 - 233
Much effort has gone into exploring the concept of presence in virtual environments. One of the reasons for this is the possible link between presence and performance, which has also received a fair amount of attention. However, the performance side of this equation has been largely ignored.
Via Brain Ethics
In the most recent issue of Nature (March 30) Christof Koch and Klaus Hepp challenge the hypothesis that human consciousness invokes quantum principles:
We challenge those who call upon consciousness to carry the burden of the measurement process in quantum mechanics with the following thought experiment. Visual psychology has caught up with magicians and has devised numerous techniques for making things disappear. For instance, if one eye of a subject receives a stream of highly salient images, a constant image projected into the other eye is only seen infrequently. Such perceptual suppression can be exploited to study whether consciousness is strictly necessary to the collapse of the wave function. Say an observer is looking at a superimposed quantum system, such as Schrödinger's box with the live and dead cat, with one eye while his other eye sees a succession of faces. Under the appropriate circumstances, the subject is only conscious of the rapidly changing faces, while the cat in the box remains invisible to him. What happens to the cat? The conventional prediction would be that as soon as the photons from this quantum system encounter a classical object, such as the retina of the observer, quantum superposition is lost and the cat is either dead or alive. This is true no matter whether the observer consciously saw the cat in the box or not. If, however, consciousness is truly necessary to resolve the measurement problem, the animal's fate would remain undecided until that point in time when the cat in the box becomes perceptually dominant to the observer. This seems unlikely but could, at least in principle, be empirically verified. The empirical demonstration of slowly decoherent and controllable quantum bits in neurons connected by electrical or chemical synapses, or the discovery of an efficient quantum algorithm for computations performed by the brain, would do much to bring these speculations from the 'far-out' to the mere 'very unlikely'. Until such progress has been made, there is little reason to appeal to quantum mechanics to explain higher brain functions, including consciousness.
Mar 28, 2006
German researchers are developing virtual humans to replace today's automated answering machines. These virtual humans, which will interact with you through speech and gestures, could be used as ticket sellers, but also as teachers for students taking e-learning courses.
Several research institutions are working on this concept including the Fraunhofer Institute for Computer Graphics (IGD).
"The idea behind the virtual character is to design the human-computer interface as naturally as possible", explains Christian Knöpfle, head of Virtual Reality at the IGD.
As an example, below is a picture of several virtual humans on stage during a trade show (Credit: Fraunhofer IGD). This illustration was extracted from Der virtuelle Mensch (in German).
As you can guess, it will be difficult to achieve a convincing result.
The requirements placed on virtual humans are enormous: they need to interact socially, communicate verbally and non-verbally – in other words via speech, gestures and facial expressions – have a human, pleasant appearance and be credible in dialogue with the user. […] To achieve this, researchers are developing various modules to generate dialogue, understand speech and produce graphics output, interfacing these through a web-based approach.
Below you can see that the Fraunhofer researchers have paid great attention to realism with the hairstyles of two virtual humans (Credit: Fraunhofer IGD). Here is a link to a larger version.
The Virtual Human web page gives additional details.
With virtual humans a completely new quality of interactive systems can be achieved: instead of interacting via menus and input forms, the dialogue with the computer will take place in an intuitive way using natural language and gestures. The virtual character confronts the user as a person who is able to give intelligent and goal-oriented assistance and guidance through a work routine. Depending on the particular application, one or more virtual characters take over different roles, mostly as they occur in teamwork or in natural discussion situations.
And what will virtual humans be able to do?
Potential applications for virtual humans are enormous: one area involves tutor support for students on e-learning courses, with the virtual human answering questions and giving help with problems — making the learning process on and with the computer a more enjoyable experience. Human-like characters are also ideal for dealing with issues that involve training social skills: for example a railway official can practice dealing with difficult customers with the help of the virtual human.
But when will these friendly servants replace stubborn machines? The Fraunhofer IGD doesn't give any answers, even if some prototypes have been demonstrated during trade shows.
Finally, if you want to know more about this project, you can read this paper from 2004 called "Virtual Human: Storytelling & Computer Graphics for a Virtual Human Platform" (PDF format, 10 pages, 797 KB).
Sources: Fraunhofer-Gesellschaft, March 2006; and various web sites
Mar 27, 2006
Via New Scientist
When people take on a virtual computer-persona, the look of their character, or avatar, has a profound effect on their behaviour, according to Nick Yee and Jeremy Bailenson of Stanford University in Palo Alto, California:
Are you a confident, square-jawed warrior or a height-conscious little goblin? If you ever take on a virtual computer persona, the look you opt for may have a profound effect on your behaviour.
Online, in virtual worlds and chat rooms where people create cartoons of themselves known as avatars, changing your image is as simple as making a few clicks of a mouse. As people alter the appearance of their avatars, does their behaviour unwittingly change too?
To answer this question, Nick Yee and Jeremy Bailenson of Stanford University in Palo Alto, California, assigned two groups of students an avatar each, using a virtual reality headset. They were given less than a minute to examine their new selves in a "mirror" before being asked to step into a virtual room with another avatar controlled by an independent helper.
Irrespective of their real-life height, some in the first group were assigned ...
Read the full article
Mar 20, 2006
Mar 17, 2006
Some game designers are discarding "heads-up displays," trying to create a more immersive environment by providing game data such as a player's health and ammo levels using subtler hints that are truer to life. Big mistake, according to Clive Thompson:
I let fly with a flurry of jabs, then lean in and deliver a sneaky uppercut. It connects perfectly -- I can hear the moist thwack of my boxing glove on my opponent's cheek. When he staggers back to his corner of the boxing ring, I admire my handiwork: Swollen eyes, drooling crimson -- it's like something Picasso might have painted if he worked with blood. I can tell that one more barrage is gonna win me the round.
And the thing is, I don't need to look at a "health bar" floating over my opponent to see he's nearly vanquished. Indeed, in this Xbox 360 version of Fight Night Round 3 there is no "heads-up display," or HUD, at all. Most action games rely on such an omnipresent overlay, floating on screen and showing how much ammo or health you've got left. But with Fight Night, you just have to pay close attention to the acoustic and visual cues -- the increasingly sluggish attacks of your fighter, the fatigue written on his face...
Continue to read the full article
Mar 16, 2006
From Charles T. Tart
The program for Toward a Science of Consciousness 2006, April 4-8, 2006, Tucson, Arizona has gone to press.
As usual, the Journal of Consciousness Studies will publish the indexed conference Program/Abstract book, which will be available at the conference.
However, the 310 accepted abstracts are now available online.
Full conference program information is available at the conference website