
Nov 17, 2007

Consciousness Reframed 9

Consciousness Reframed 9: Vienna: July 3-5, 2008
Call for Papers - Consciousness Reframed is an international research conference first convened in 1997, now in its 9th incarnation. It is a forum for transdisciplinary inquiry into art, science, technology and consciousness, drawing upon the expertise and insights of artists, architects, performers, musicians, writers, scientists, and scholars, usually from at least 20 countries. Recent conferences were convened in Beijing and Perth, Western Australia. This year, the conference will be held on the main campus of the University of Applied Arts Vienna, Austria. The conference will include researchers associated with the Planetary Collegium, which has its CAiiA-hub at Plymouth and nodes at the Nuova Accademia di Belle Arti, Milan, and the Zürcher Hochschule der Künste, Zurich.

Call for Papers: New Realities: Being Syncretic - We cordially invite submissions from artists, theorists and researchers engaged in exploring the most urgent issues in the realm of hybrid inquiries into the fields of art, science, technology and society through theory and practice alike. We specifically encourage submissions that re-frame the concept of innovation in its relationship to progress and change within the context of perception and its transformation.

The Conference will be accompanied by a Book of Abstracts and the Conference Proceedings including full papers and a DVD, due to be released autumn 2008 by the renowned scientific publisher SpringerWienNewYork.

Jul 06, 2007

Disembodiment in Online Social Interaction

Disembodiment in Online Social Interaction: Impact of Online Chat on Social Support and Psychosocial Well-Being
CyberPsychology & Behavior, June 2007, Vol. 10, No. 3: 475-477

Seok Kang, Ph.D.
This study investigates how disembodiment—that is, transcendence of body constraints in cyberspace—in online chat affects social psychological well-being. The results demonstrate that disembodiment is a strong predictor of increased loneliness and depression, and decreased social support. However, the amount of chat use is a positive contributor to decreased offline estrangement and depression, and increased happiness. These contrasting results suggest that online chat use is a technology for social connection used for offline connectivity, but the disembodiment motive is associated with declines in social support and psychosocial well-being. The investigation of specified motives for online interaction, personal competency, or advanced technological alternatives in interaction is suggested for future research on the effects of online interaction on offline outcomes.

Dec 03, 2006

Presence: a unique characteristic in educational virtual environments

Via VRoot

Presence: a unique characteristic in educational virtual environments

Virtual Reality Journal, Volume 10, Number 3-4 / December, 2006, Pages 197-206.

Author: Tassos A. Mikropoulos

This article investigates the effect of presence on learning outcomes in educational virtual environments (EVEs) in a sample of 60 pupils aged between 11 and 13 years. We study the effect of personal presence, social presence and participant’s involvement on certain learning outcomes. We also investigate if the combination of the participant’s representation model in the virtual environment (VE) with the way it is presented gives a higher sense of presence that contributes to learning outcomes. Our results show that the existence of an avatar as the pupils’ representation enhanced presence and helped them to successfully perform their learning tasks. The pupils had a high sense of presence for both cases of the EVE presentation, projection on a wall and through a head mounted display (HMD). Our socialized virtual environment seems to play an important role in learning outcomes. The pupils had a higher sense of presence and completed their learning tasks more easily and successfully in the case of their egocentric representation model using the HMD.

Oct 05, 2006

Teliris Launches VirtuaLive with HSL's Thoughts and Analysis

Via Human Productivity Lab


Teliris, a company that develops telepresence solutions, has announced its fourth-generation offering. From the press release:

The VirtuaLive(TM) enhanced technology provides the most natural and intimate virtual meeting environment on the market, and is available in a broad set of room offerings designed to meet the specific needs of its customers.

Building on Teliris' third generation GlobalTable(TM) telepresence solutions, VirtuaLive(TM) provides enhanced quality video and broadband audio, realistically replicating an in-person meeting experience by capturing and transmitting the most subtle visual gestures and auditory cues.

"All future Teliris solutions will fall under the VirtuaLive(TM) umbrella of offerings," said Marc Trachtenberg, Teliris CEO. "With such an advanced technology platform and range of solutions, companies can select the immersive experience that best fits their business environment and goals."

VirtuaLive's(TM) next generation of Virtual Vectoring(TM) is at the center of the new offerings. It provides users with unparalleled eye-to-eye contact from site-to-site in multipoint meetings with various numbers of participants within each room. No other vendor offering can match the natural experience created by advanced technology in such diverse environments.

Social behavior and norms in virtual environments are comparable to those in the physical world

Nick Yee and colleagues at Stanford University have investigated whether social behavior and norms in virtual environments are comparable to those in the physical world. To this end, they collected data from avatars in Second Life, in order to explore whether social norms of gender, interpersonal distance (IPD), and eye gaze transfer into virtual environments even though the modality of movement is entirely different.
"Results showed that established findings of IPD and eye gaze transfer into virtual environments: 1) male-male dyads have larger IPDs than female-female dyads, 2) male-male dyads maintain less eye contact than female-female dyads, and 3) decreases in IPD are compensated with gaze avoidance."
According to Yee and colleagues, these findings suggest that social interactions in online virtual environments are governed by the same social norms as social interactions in the physical world.

Yee, N., Bailenson, J.N. & Urbanek, M. (2006). The unbearable likeness of being digital: The persistence of nonverbal social norms in online virtual environments. CyberPsychology & Behavior, in press.


Jul 05, 2006

Awareness and Accountability in MM Online Worlds

Via Pasta and Vinegar

Doing Virtually Nothing: Awareness and Accountability in Massively Multiplayer Online Worlds

Robert J. Moore , Nicolas Ducheneaut and Eric Nickell.

To date the most popular and sophisticated types of virtual worlds can be found in the area of video gaming, especially in the genre of Massively Multiplayer Online Role Playing Games (MMORPG). Game developers have made great strides in achieving game worlds that look and feel increasingly realistic. However, despite these achievements in the visual realism of virtual game worlds, they are much less sophisticated when it comes to modeling face-to-face interaction. In face-to-face settings, ordinary social activities are "accountable," that is, people use a variety of kinds of observational information about what others are doing in order to make sense of others' actions and to tightly coordinate their own actions with others. Such information includes: (1) the real-time unfolding of turns-at-talk; (2) the observability of embodied activities; and (3) the direction of eye gaze for the purpose of gesturing. But despite the fact that today's games provide virtual bodies, or "avatars," for players to control, these avatars display much less information about players' current state than real bodies do. In this paper, we discuss the impact of the lack of each type of information on players' ability to tightly coordinate their activities and offer guidelines for improving coordination and, ultimately, the players' social experience.


Jun 18, 2006

bio-sensor server

via Information Aesthetics



BodyDaemon is a bio-responsive web server created by media artist Carlos Castellanos; it uses biofeedback sensors to change its configuration in real time according to the participant's psychophysiological states.

From the project's website:

BodyDaemon is a bio-responsive Internet server. Readings taken from a participant's physical states, as measured by custom biofeedback sensors, are used to power and configure a fully-functional Internet server. For example, more or fewer socket connections are made available based on heart rate, changes in galvanic skin response (GSR) can abruptly close sockets, and muscle movements (EMG) can send data to the client. Other features, such as logging, can be turned on or off depending on a combination of factors. BodyDaemon also includes a client application that makes requests to the BodyDaemon server. The client requests and server responses are sent over a "persistent" or open socket. The client can thus use the data to continuously visualize, sonify or otherwise render the live bio-data. This project is part of larger investigations focusing on the development of protocols for the transfer of live physiological and biological information across the Internet.

BodyDaemon represents the early stages of investigations into the viability of systems that alter their states based on a person's changing physiological states and intentions - with the ultimate goal of accommodating the development of emergent states of mutual influence between human and machine in a networked ecosystem.
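The project description above maps specific biosignals onto specific server behaviors (heart rate to available socket connections, GSR jumps to socket closure, EMG to client data). A minimal sketch of that mapping logic, with invented thresholds and function names since the project's actual values are not published here:

```python
# Hypothetical BodyDaemon-style mapping from biofeedback readings to
# server configuration. All ranges and thresholds are assumptions.

def max_connections(heart_rate_bpm: float) -> int:
    """More available socket slots at higher heart rates.
    Maps an assumed resting-to-active range (50-150 bpm) onto 1-10 slots."""
    clamped = max(50.0, min(150.0, heart_rate_bpm))
    return 1 + round((clamped - 50.0) / 100.0 * 9)

def should_close_sockets(prev_gsr: float, curr_gsr: float,
                         jump_threshold: float = 0.5) -> bool:
    """An abrupt change in galvanic skin response closes open sockets."""
    return abs(curr_gsr - prev_gsr) > jump_threshold

def emg_payload(emg_samples: list[float]) -> bytes:
    """Muscle-movement (EMG) readings are streamed to the client as data
    over the persistent socket; here, as a comma-separated byte string."""
    return ",".join(f"{s:.2f}" for s in emg_samples).encode()
```

In a real server loop these functions would feed the `backlog` argument of `socket.listen()` and the teardown of accepted connections; the sketch isolates only the signal-to-configuration mapping.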


Jun 04, 2006

Digital Chameleons

Digital Chameleons: Automatic Assimilation of Nonverbal Gestures in Immersive Virtual Environments

Jeremy N. Bailenson and Nick Yee

Psychological Science, 16 (10)

Previous research demonstrated social influence resulting from mimicry (the chameleon effect); a confederate who mimicked participants was more highly regarded than a confederate who did not, despite the fact that participants did not explicitly notice the mimicry. In the current study, participants interacted with an embodied artificial intelligence agent in immersive virtual reality. The agent either mimicked a participant’s head movements at a 4-s delay or utilized prerecorded movements of another participant as it verbally presented an argument. Mimicking agents were more persuasive and received more positive trait ratings than nonmimickers, despite participants’ inability to explicitly detect the mimicry. These data are uniquely powerful because they demonstrate the ability to use automatic, indiscriminate mimicking (i.e., a computer algorithm blindly applied to all movements) to gain social influence. Furthermore, this is the first study to demonstrate social influence effects with a nonhuman, nonverbal mimicker.
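The mimicry condition described in the abstract amounts to a simple delay buffer: the agent replays the participant's head orientation four seconds after it was recorded. A hypothetical reconstruction of that mechanism (not the authors' code; class and method names are invented):

```python
from collections import deque

class DelayedMimic:
    """Replays a participant's tracked head orientation after a fixed
    delay, as in the 4-second mimicry condition described above."""

    def __init__(self, delay_s: float = 4.0):
        self.delay_s = delay_s
        # Each entry: (timestamp, (pitch, yaw, roll))
        self.buffer = deque()

    def record(self, t: float, orientation: tuple) -> None:
        """Store the participant's head orientation at time t."""
        self.buffer.append((t, orientation))

    def agent_pose(self, t: float):
        """Return the pose recorded delay_s seconds ago, or None if the
        delay window has not elapsed yet (agent holds a neutral pose)."""
        pose = None
        while self.buffer and self.buffer[0][0] <= t - self.delay_s:
            _, pose = self.buffer.popleft()
        return pose
```

The study's key design point survives even in this toy version: the algorithm is blindly applied to all movements, with no model of which gestures are socially meaningful.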

Download the full paper here 

Feb 22, 2006

Social puppet to train soldiers

Via Smart Mobs

'Social Puppet' is a 3D simulation program created to help soldiers learn unfamiliar languages by interacting with animated characters. For this project, financed by DARPA, the researchers have given on-screen characters human non-verbal communication behaviors.

The software has been designed by Hannes Högni Vilhjálmsson of the University of Southern California (USC) Information Sciences Institute.

Read full article here

Jan 04, 2006

NeuroImage special issue on social cognitive neuroscience

Many relevant articles on social cognitive neuroscience were published in the December 2005 special issue of NeuroImage.

Nov 17, 2005

Video chat software turns users into live avatars

via the Presence Listserv

Oki Electric Industry Co. Ltd. has developed technology that can add animated faces to instant messaging, networked gaming, and other real-time communications used on mobile phones and PCs. Oki's "FaceCommunicator" software leverages technology similar to the company's face recognition software that recognizes handhelds' owners. The technology supports Linux-based mobile phones.

FaceCommunicator is touted as useful for maintaining privacy and security during first time "face-to-face" communications over video phones, mobile phones, or in IM or chat-room chats on the Internet. In addition, the facial animations let users express emotions that might be hard to express in words, Oki says.

The technology can take advantage of four sources of user input to generate and control its transmitted animated faces - video images from PC or mobile phone cameras; voice; text; and mouse/keyboard commands.

Both the animated face and a background image can be selected by the user to suit the need of the moment. Additionally, certain of the animated faces can move their eyebrows and mouths as though talking, which adds a virtual reality dimension to communications, according to Oki.

Read full article

Nov 08, 2005

Sending and receiving emotions

Via Networked performance

eMoto is a mobile messaging service for sending and receiving affective messages. The application extends both the input and output channels used when sending text messages between mobile phones, with the aim of conveying more of the emotional content through the otherwise very narrow channel of a text message. Emotional communication between people meeting physically in the "real world" makes use of many different channels, such as facial expression, body posture, gestures, or tone of voice; little of this physicality of emotions is available in a similar digital context. In eMoto, users therefore use affective gestures to convey the emotional content of their messages, which is then translated and communicated in colors, shapes and animations.
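The translation step - affective gesture in, colors and shapes out - can be pictured as a mapping from gesture features to rendering parameters. A purely illustrative sketch, in which the feature names (pressure, movement) and the color scheme are assumptions rather than eMoto's actual design:

```python
# Hypothetical eMoto-style mapping: two normalized gesture features are
# treated as intensity/energy coordinates and mapped to an RGB color.
# Calm, low-energy gestures -> cool blue; intense gestures -> warm red.

def gesture_to_color(pressure: float, movement: float) -> tuple:
    """Map gesture features in [0, 1] to an (R, G, B) background color."""
    energy = max(0.0, min(1.0, movement))    # how agitated the gesture is
    intensity = max(0.0, min(1.0, pressure)) # how forceful the gesture is
    red = round(255 * energy)
    blue = round(255 * (1 - energy))
    green = round(128 * intensity)
    return (red, green, blue)
```

A still gesture with no pressure yields pure blue, while a forceful, shaking one yields a warm red-orange; shapes and animation speed could be driven from the same two features.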

Oct 21, 2005

Face recognition implemented on cellphones

Via Extremetech

Deviceforge.com today reports that a company has just started marketing a technology that inexpensively adds face recognition to camera-equipped cellphones. Oki's "Face Sensing Engine" (FSE) "middleware" decodes facial images within 280 ms on a 100 MHz ARM9 processor, and can restrict access to mobile devices by recognizing their owners.

Besides security applications, this new technology could have interesting applications in the field of neurorehabilitation. For example, face recognition may help patients with brain injuries or neurodegenerative diseases to recognize their relatives and friends.

Oki lists the following key features of its FSE technology:

  • Compact system footprint -- requires approximately 260 KB on an ARM9 processor
  • Fast image processing -- requires approximately 280 ms on a 100 MHz ARM9 processor
  • Face recognition algorithm automatically adjusts to ambient lighting conditions
  • Supported processor architectures -- ARM9, SH Mobile, and others
  • Supported software platforms -- Symbian, uITRON, Linux, BREW, WIPI, Windows, Solaris, and others