
Jul 31, 2006

Immersive Medical Telepresence conference

Phoenix, Arizona, 6-7 September 2006



From the conference website 

The health sciences, perhaps more than any other discipline, depend on images and video, and on the analysis of those images, to be successful. This is true in biomedical research, in health education, and of course in clinical care. These images are increasingly digital and improving in resolution by orders of magnitude, which allows both more flexibility in working with the images and much better analysis. As the data resources grow exponentially, so does the need to share and compare images.

The ability to effectively use and share these resources is often a local issue, with individual solutions developed in isolation. There are exceptions, and when they are discovered they are often treated as significant success stories. For example, the SUMMIT Project at Stanford University has created a significant set of stereoscopic and haptically enabled Digital Anatomy resources. These resources, and the expertise at Stanford, will be used across the nation and now internationally to teach anatomy courses.

As the health sciences continue to become more specialized and the educational resources become more difficult to locate, advanced networking that allows secure, reliable access to expertise and high quality resources is critical. The ability to create virtual organizations and collaborate with and among students, professors, researchers, and clinicians irrespective of location is of increasing value.

At the same time, large-capacity research networks (e.g., Internet2 and peered networks, GLORIAD) and high-quality video applications (e.g., DV-over-IP, developed by the WIDE Project, and stereoscopic HD video-over-IP, realized by GIST) are making such virtual collaboration technically possible and financially affordable. However, the absence of (human) networking opportunities has hampered the development of sustainable testbeds and slowed the rate of innovation.

This workshop will focus on our ability to effectively use and manipulate image / video resources irrespective of their location. It will also showcase many emerging technologies relevant to the medical field. Most importantly, it will provide an opportunity for those in the medical community to learn alongside experts in the area of video technologies and large capacity networking about the challenges ahead and to begin a discussion about how those challenges can be met.


Applications of Virtual Reality Technology in the Measurement of Spatial Memory in Patients with Mood Disorders

In a letter to the Editor published in the current issue of CNS Spectr (2006 Jun;11(6)), Holmes and colleagues describe a novel VR-based paradigm to test spatial memory in patients with mood disorders.
Here is an excerpt from the letter:
The January 2006 CNS Spectrums included an article about virtual reality (VR) technology as a treatment option in psychiatry and Dr. Gorman welcomed letters discussing novel applications of VR in psychiatry. Much of the published work in this area is treatment-related. It appears that a limited number of researchers have considered using this technology for clinical assessment and research purposes. This is likely to change as immersive VR shows promise for increasing ecological validity in assessment and providing a much richer set of behavioural data.

In collaboration with the Informatics Research Institute (IRI) at Newcastle University in Newcastle upon Tyne, England, we are assessing the validity of this approach. The IRI manages an immersive VR suite, and our collaboration has allowed us to develop a novel paradigm to test spatial memory in patients with mood disorders. Our interest in spatial memory in this group stems from neuroimaging research reporting atrophy in the hippocampal region for patients with major depressive disorder and bipolar disorder. The hippocampus is involved in spatial memory, and individuals with hippocampal lesions are impaired on tasks of spatial memory.

The full text of the letter, including references, can be accessed here 

Mapping implied body actions in the human motor system

Mapping implied body actions in the human motor system.

J Neurosci. 2006 Jul 26;26(30):7942-9

Authors: Urgesi C, Moro V, Candidi M, Aglioti SM

The human visual system is highly tuned to perceive actual motion as well as to extrapolate dynamic information from static pictures of objects or creatures captured in the middle of motion. Processing of implied motion activates higher-order visual areas that are also involved in processing biological motion. Imagery and observation of actual movements performed by others engenders selective activation of motor and premotor areas that are part of a mirror-neuron system matching action observation and execution. By using single-pulse transcranial magnetic stimulation, we found that the mere observation of static snapshots of hands suggesting a pincer grip action induced an increase in corticospinal excitability as compared with observation of resting, relaxed hands, or hands suggesting a completed action. This facilitatory effect was specific for the muscle that would be activated during actual execution of the observed action. We found no changes in responsiveness of the tested muscles during observation of nonbiological entities with (e.g., waterfalls) or without (e.g., icefalls) implied motion. Thus, extrapolation of motion information concerning human actions induced a selective activation of the motor system. This indicates that overlapping motor regions are engaged in the visual analysis of physical and implied body actions. The absence of motor evoked potential modulation during observation of end posture stimuli may indicate that the observation-execution matching system is preferentially activated by implied, ongoing but not yet completed actions.

Jul 29, 2006

Simulated violence experiment

Via Omni Brain


According to this press release, psychologists at Iowa State have conducted the first study of desensitization to violence caused by video games.

From the press release:

Ames, Iowa -- Research led by a pair of Iowa State University psychologists has proven for the first time that exposure to violent video games can desensitize individuals to real-life violence.

Nicholas Carnagey, an Iowa State psychology instructor and research assistant, and ISU Distinguished Professor of Psychology Craig Anderson collaborated on the study with Brad Bushman, a former Iowa State psychology professor now at the University of Michigan and Vrije Universiteit, Amsterdam.

They authored a paper titled "The Effects of Video Game Violence on Physiological Desensitization to Real-Life Violence," which was published in the current issue of the Journal of Experimental Social Psychology, a professional journal. In this paper, the authors define desensitization to violence as "a reduction in emotion-related physiological reactivity to real violence."

Their paper reports that past research -- including their own studies -- documents that exposure to violent video games increases aggressive thoughts, angry feelings, physiological arousal and aggressive behaviors, and decreases helpful behaviors. Previous studies also found that more than 85 percent of video games contain some violence, and approximately half of video games include serious violent actions.

Their latest study tested 257 college students (124 men and 133 women) individually. After taking baseline physiological measurements on heart rate and galvanic skin response -- and asking questions to control for their preference for violent video games and general aggression -- participants then played one of eight randomly assigned violent or non-violent video games for 20 minutes. The four violent video games were Carmageddon, Duke Nukem, Mortal Kombat or Future Cop; the non-violent games were Glider Pro, 3D Pinball, 3D Munch Man and Tetra Madness.

After playing a video game, a second set of five-minute heart rate and skin response measurements were taken. Participants were then asked to watch a 10-minute videotape of actual violent episodes taken from TV programs and commercially-released films in the following four contexts: courtroom outbursts, police confrontations, shootings and prison fights. Heart rate and skin response were monitored throughout the viewing.

When viewing real violence, participants who had played a violent video game experienced skin response measurements significantly lower than those who had played a non-violent video game. The participants in the violent video game group also had lower heart rates while viewing the real-life violence compared to the nonviolent video game group.

"The results demonstrate that playing violent video games, even for just 20 minutes, can cause people to become less physiologically aroused by real violence," said Carnagey. "Participants randomly assigned to play a violent video game had relatively lower heart rates and galvanic skin responses while watching footage of people being beaten, stabbed and shot than did those randomly assigned to play nonviolent video games."


See also a critique of this article by Ed Castronova, a "synthetic worlds" researcher.

21:05 Posted in Serious games | Permalink | Comments (0) | Tags: serious gaming

Brain-activity interpretation competition won by Italian researchers

Via Mind Hacks 



A team of three Italian researchers (Emanuele Olivetti, Diego Sona, and Sriharsha Veeramachaneni) won $10,000 in a brain-activity interpretation competition. Entrants were provided with the fMRI data and behavioural reports recorded while four people watched two movies. The competitors' task was to create an algorithm that could use the viewers' ongoing brain activity to predict what they were thinking and feeling as the film unfolded.

The Italian team's entry proved the most accurate, achieving a correlation of .86 for basic features, such as whether an instant of the film contained music. The full results are here.

The speed of sight

Via Medgadget


According to estimates by University of Pennsylvania researchers, the human retina transmits information roughly as fast as a standard Ethernet connection:

Using an intact retina from a guinea pig, the researchers recorded spikes of electrical impulses from ganglion cells using a miniature multi-electrode array. The investigators calculate that the human retina can transmit data at roughly 10 million bits per second. By comparison, an Ethernet can transmit information between computers at speeds of 10 to 100 million bits per second...

The guinea pig retina was placed in a dish and then presented with movies containing four types of biological motion, for example a salamander swimming in a tank to represent an object-motion stimulus. After recording electrical spikes on an array of electrodes, the researchers classified each cell into one of two broad classes: "brisk" or "sluggish," so named because of their speed.

The researchers found that the electrical spike patterns differed between cell types. For example, the larger, brisk cells fired many spikes per second and their response was highly reproducible. In contrast, the smaller, sluggish cells fired fewer spikes per second and their responses were less reproducible.

But, what's the relationship between these spikes and information being sent? "It's the combinations and patterns of spikes that are sending the information. The patterns have various meanings," says co-author Vijay Balasubramanian, PhD, Professor of Physics at Penn. "We quantify the patterns and work out how much information they convey, measured in bits per second."

Calculating the proportions of each cell type in the retina, the team estimated that about 100,000 guinea pig ganglion cells transmit about 875,000 bits of information per second. Because sluggish cells are more numerous, they account for most of the information. With about 1,000,000 ganglion cells, the human retina would transmit data at roughly the rate of an Ethernet connection, or 10 million bits per second.
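The scaling estimate above is simple enough to check with the figures the press release quotes. A quick sketch, assuming (as the article does) linear scaling from the guinea pig numbers by ganglion cell count:

```python
# Back-of-the-envelope check of the retina bandwidth estimate reported
# above. Figures are taken from the press release; the linear scaling
# by cell count is the assumption stated in the article.

GUINEA_PIG_GANGLION_CELLS = 100_000   # approximate count used in the study
GUINEA_PIG_BITS_PER_SEC = 875_000     # total estimated retinal output

# Average information rate per ganglion cell
bits_per_cell = GUINEA_PIG_BITS_PER_SEC / GUINEA_PIG_GANGLION_CELLS  # 8.75 bits/s

# Scale to the human retina's roughly 1,000,000 ganglion cells
HUMAN_GANGLION_CELLS = 1_000_000
human_bits_per_sec = bits_per_cell * HUMAN_GANGLION_CELLS

print(f"{human_bits_per_sec / 1e6:.2f} Mbit/s")  # 8.75 Mbit/s
```

The raw figure comes out just under 9 Mbit/s, which the researchers round to "roughly 10 million bits per second", the low end of the Ethernet speeds cited.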


Read the press release


World Map of Happiness

Via Emerging Technology Trends

Adrian White, an analytic social psychologist at the University of Leicester, has developed the first-ever World Map of Happiness. He analysed data published by several organizations, including UNESCO, the CIA, the New Economics Foundation, and the WHO, to create a global projection of subjective well-being.

Below is a small version of this map (source: Emerging Technology Trends). On this map, red indicates a high level of happiness.

The first world map of happiness

A Flash version of this map is available here.



Fingertip Digitizer

Via Emerging Technology Trends

Fingertip Digitizer is a novel haptic device that interprets the user's hand gestures and translates them for PCs, medical devices, or computer games. Developed at the University of Buffalo, the Fingertip Digitizer should be on the market within three years.
From the website:
This new fingertip-mounted device uses miniature thin-film force sensors, a tri-axial accelerometer, and a motion tracker. A real-time, network-based, multi-rate data-acquisition system was developed using LabVIEW virtual instrumentation technology. Biomechanical models based on human-subject studies were applied to the sensing algorithm. A new 2D touch-based painting application (Touch Painter) was developed with a new portable, touch-based projection system (Touch Canvas). Also, a new 3D object-digitizing application (Tactile Tracer) was developed to determine object properties by dynamic tactile activities, such as rubbing, palpation, tapping, and nail-scratching.

Jul 28, 2006

Neurofeedback for children with ADHD: a comparison of SCP- and theta/beta-protocols

Neurofeedback for children with ADHD: a comparison of SCP- and theta/beta-protocols

Prax Kinderpsychol Kinderpsychiatr. 2006;55(5):384-407

Authors: Leins U, Hinterberger T, Kaller S, Schober F, Weber C, Strehl U

Research groups have consistently reported on behavioral and cognitive improvements of children with ADHD after neurofeedback. However, neurofeedback has not been commonly accepted as a treatment for ADHD. This is due, in part, to several methodological limitations. The neurofeedback literature is further complicated by having several different training protocols. Differences between the clinical efficacy of such protocols have not been examined. This study addresses previous methodological shortcomings while comparing the training of theta-beta frequencies (theta-beta group) with the training of slow cortical potentials (SCP group). Each group comprised 19 children with ADHD who were blind to group assignment. The training procedure consisted of 30 sessions and a six-month follow-up training. Measures at pretest, at the end of the training and at follow-up included tests of attention, intelligence and behavioral variables. After having already reported intermediate data (Strehl et al. 2004), this paper gives an account of the final results: Both groups are able to voluntarily regulate cortical activity, with the extent of learned self-regulation depending on task and condition. Both groups improve in attention and IQ. Parents and teachers report significant behavioral and cognitive improvements. Clinical effects for both groups remain stable six months after training. Groups do not differ in behavioral or cognitive outcome variables.

Brain-computer interfaces for 1-D and 2-D cursor control

Brain-computer interfaces for 1-D and 2-D cursor control: designs using volitional control of the EEG spectrum or steady-state visual evoked potentials.

IEEE Trans Neural Syst Rehabil Eng. 2006 Jun;14(2):225-9

Authors: Trejo LJ, Rosipal R, Matthews B

We have developed and tested two electroencephalogram (EEG)-based brain-computer interfaces (BCI) for users to control a cursor on a computer display. Our system uses an adaptive algorithm, based on kernel partial least squares classification (KPLS), to associate patterns in multichannel EEG frequency spectra with cursor controls. Our first BCI, Target Practice, is a system for one-dimensional device control, in which participants use biofeedback to learn voluntary control of their EEG spectra. Target Practice uses a KPLS classifier to map power spectra of 62-electrode EEG signals to rightward or leftward position of a moving cursor on a computer display. Three subjects learned to control motion of a cursor on a video display in multiple blocks of 60 trials over periods of up to six weeks. The best subject's average skill in correct selection of the cursor direction grew from 58% to 88% after 13 training sessions. Target Practice also implements online control of two artifact sources: 1) removal of ocular artifact by linear subtraction of wavelet-smoothed vertical and horizontal electrooculograms (EOG) signals, 2) control of muscle artifact by inhibition of BCI training during periods of relatively high power in the 40-64 Hz band. The second BCI, Think Pointer, is a system for two-dimensional cursor control. Steady-state visual evoked potentials (SSVEP) are triggered by four flickering checkerboard stimuli located in narrow strips at each edge of the display. The user attends to one of the four beacons to initiate motion in the desired direction. The SSVEP signals are recorded from 12 electrodes located over the occipital region. A KPLS classifier is individually calibrated to map multichannel frequency bands of the SSVEP signals to right-left or up-down motion of a cursor on a computer display. The display stops moving when the user attends to a central fixation point. 
As for Target Practice, Think Pointer also implements wavelet-based online removal of ocular artifact; however, in Think Pointer muscle artifact is controlled via adaptive normalization of the SSVEP. Training of the classifier requires about 3 min. We have tested our system in real-time operation in three human subjects. Across subjects and sessions, control accuracy ranged from 80% to 100% correct with lags of 1-5 s for movement initiation and turning. We have also developed a realistic demonstration of our system for control of a moving map display (http://ti.arc.nasa.gov/).
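The SSVEP decoding idea behind Think Pointer can be illustrated in a few lines. This is a much-simplified sketch, not the authors' KPLS classifier: it uses one EEG channel instead of twelve, and the sampling rate and beacon frequencies below are invented for the example. The core principle is the same, though: attending to a beacon boosts occipital power at that beacon's flicker frequency, so the decoder picks the direction whose frequency carries the most spectral power.

```python
import numpy as np

# Illustrative SSVEP direction decoder (assumed parameters, single channel;
# the paper itself uses a per-user KPLS classifier over 12 occipital channels).

FS = 256  # sampling rate in Hz (assumed)
BEACON_HZ = {"up": 8.0, "down": 10.0, "left": 12.0, "right": 15.0}  # hypothetical

def decode_direction(eeg, fs=FS):
    """Pick the direction whose beacon frequency shows the most power.

    eeg: 1-D array holding one occipital channel; a real system would
    pool several channels and calibrate per user.
    """
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

    def power_at(f):
        # Power in the FFT bin nearest the flicker frequency
        return spectrum[np.argmin(np.abs(freqs - f))]

    return max(BEACON_HZ, key=lambda d: power_at(BEACON_HZ[d]))

# Synthetic demo: 2 s of noise plus a 12 Hz component (user attends "left")
t = np.arange(2 * FS) / FS
rng = np.random.default_rng(0)
signal = 0.5 * np.sin(2 * np.pi * 12.0 * t) + 0.2 * rng.standard_normal(t.size)
print(decode_direction(signal))  # prints "left"
```

A real BCI would also need the artifact handling described above (EOG removal, SSVEP normalization) before the spectral comparison is reliable.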

Retina projector to help blind people

From New Scientist

Partially blind people can now read using a machine that projects images directly onto their retinal cells.

The Retinal Imaging Machine Vision System (RIMVS) can also be used to explore virtual buildings, allowing people to familiarise themselves with new places.

The device, developed by Elizabeth Goldring, a poetry professor at the Massachusetts Institute of Technology, who is herself partially blind, is designed for people who suffer vision loss due to obstructions such as haemorrhages or diseases that erode the cells on the retina.

The user looks through a viewfinder and the images are focused directly onto the retina. The person can guide the light to find the areas that still work best. The machine uses LED light and costs just $4000.

Jul 27, 2006


The Mouthpiece

Re-blogged from WMMNA

The Mouthpiece has been designed to help strangers who find it difficult to express their feelings or opinions face to face. A small LCD monitor and loudspeakers cover the mouth of the wearer like a gag. The equipment replaces the real act of speech with pre-recorded, edited and electronically perfected statements, questions, answers, stories, etc.



A mobile network that keeps track of everything you do

Re-blogged from Smart Mobs

AsiaMedia reports "Japan's No. 2 telecom operator KDDI Corp said yesterday that it had developed a server which keeps an electronic record of the smallest events in a person's life and lets others sift through them.

The Lifelog Pod jots down every activity made through a cellphone or computer, including taking photographs, searching for a restaurant, listening to music and managing money. While some may loathe the thought of an omniscient network, the company said it could provide a way to make friends."

"Users can learn who else their friends chat with or delve through their companions' data -- minus areas protected by passwords -- to gauge their interests," a KDDI spokesman said.

"Your information is connected to that of your friend, and that of his friend, and so on.

"In this country of cellphone aficionados, cellphone users can also put their blogs on the common server. Only people who have a common connection -- such as a mutual friend -- will be able to access each other's data."

"This isn't a violation of privacy rights," the KDDI official said. "It is simply that everyone is connected."

Japan: A mobile network that keeps track of everything you do

A P300 event-related potential brain-computer interface

A P300 event-related potential brain-computer interface (BCI): The effects of matrix size and inter stimulus interval on performance.

Biol Psychol. 2006 Jul 21;

Authors: Sellers EW, Krusienski DJ, McFarland DJ, Vaughan TM, Wolpaw JR

We describe a study designed to assess properties of a P300 brain-computer interface (BCI). The BCI presents the user with a matrix containing letters and numbers. The user attends to a character to be communicated and the rows and columns of the matrix briefly intensify. Each time the attended character is intensified it serves as a rare event in an oddball sequence and it elicits a P300 response. The BCI works by detecting which character elicited a P300 response. We manipulated the size of the character matrix (either 3x3 or 6x6) and the duration of the inter stimulus interval (ISI) between intensifications (either 175 or 350ms). Online accuracy was highest for the 3x3 matrix 175-ms ISI condition, while bit rate was highest for the 6x6 matrix 175-ms ISI condition. Average accuracy in the best condition for each subject was 88%. P300 amplitude was significantly greater for the attended stimulus and for the 6x6 matrix. This work demonstrates that matrix size and ISI are important variables to consider when optimizing a BCI system for individual users and that a P300-BCI can be used for effective communication.
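The speller's selection logic described in the abstract reduces to a simple intersection rule: the attended character's row and column each evoke a P300, so the character is read off where the strongest row response meets the strongest column response. A toy sketch (not the authors' code; the evidence scores here are invented placeholders for averaged post-flash EEG amplitudes):

```python
import numpy as np

# Toy illustration of P300 speller selection for the 3x3 condition.
# Real systems average EEG over many intensification repetitions to get
# one P300-evidence score per row and per column.

MATRIX = np.array([["A", "B", "C"],
                   ["D", "E", "F"],
                   ["G", "H", "I"]])

def select_character(row_scores, col_scores, matrix=MATRIX):
    """Return the character at the intersection of the row and column
    with the largest P300-evidence scores."""
    return matrix[int(np.argmax(row_scores)), int(np.argmax(col_scores))]

# Hypothetical scores: the user attends "E" (row 1, column 1), so that
# row and column show elevated responses relative to the others.
print(select_character([0.2, 1.4, 0.3], [0.1, 1.1, 0.4]))  # prints E
```

The matrix-size and ISI trade-off the study reports then makes intuitive sense: a larger matrix makes each target flash rarer (a stronger oddball, bigger P300, more bits per selection) but gives the classifier more rows and columns to confuse.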

USA Today: Cisco to introduce telepresence in the Fall

From USA Today
In an interview with USA Today, Cisco Systems CEO John Chambers announced that his company will be introducing a new Telepresence offering in the Fall.
According to Chambers, the new telepresence system will make videoconferencing more lifelike, by using "lifesize" high-definition video and directional sound technology that makes voices seem to come from where a user is located at the remote site.

Read the full interview

Mediamatic Amsterdam

Via Networked Performance


RFID and The Internet of Things

After a successful CrashCourse in May, Mediamatic now presents a second workshop on RFID and The Internet of Things: 11, 12, 13 September 2006 :: Confirmed lecturers and trainers: Julian Bleecker (US), Timo Arnall (Norway) and Arie Altena (NL).

RFID allows for the unique identification of objects, and any kind of online data can be linked to these unique IDs. If RFID becomes an open web-based platform, and users can tag, share, and contribute content to the digital existence of their own places and objects, we can truly speak of an Internet of Things. This opens perspectives for new sustainability scenarios, for new relations between people and the stuff they have, and for other locative applications.
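The linking idea in the paragraph above is worth making concrete. A minimal sketch, with invented tag IDs and object records standing in for a web-hosted registry: the tag carries nothing but a unique ID, and all content lives online, keyed by that ID, where users can read it and contribute to it.

```python
# Minimal sketch of the "Internet of Things" linking model described
# above. Tag UIDs and records are invented for illustration; a real
# deployment would resolve IDs against a web service, not a local dict.

object_registry = {
    "04:A3:1F:22": {"name": "my bicycle", "notes": ["bought 2004", "blue"]},
    "04:7C:90:0B": {"name": "favourite mug"},
}

def lookup(tag_uid):
    """Resolve a scanned tag ID to its online record (here, a dict)."""
    return object_registry.get(tag_uid, {"name": "unknown object"})

def tag_object(tag_uid, key, value):
    """Let users contribute content to an object's digital existence."""
    object_registry.setdefault(tag_uid, {})[key] = value

# Scan a tag, then attach new user-contributed content to the object
tag_object("04:A3:1F:22", "location", "Mediamatic, Amsterdam")
print(lookup("04:A3:1F:22")["location"])  # prints Mediamatic, Amsterdam
```

The open-platform question the workshop raises is exactly who controls this registry: a closed corporate database gives tracking without the "Internet" part, while an open, writable one enables the tagging and sharing scenarios described.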

The participants of this workshop will develop critical, utopian or nightmarish concepts for an Internet of Things in a hands-on way. Ideas can range from scripts for small new rituals to outlines of societal changes of epic scale. Prototypes can be tested with the workshop tools The Symbolic Table or the Nokia3220 phone with RFID reader.

The workshop has room for 16 designers, artists, thinkers and makers. The participation fee is €350 per person, excluding VAT (BTW). Lunches, technical equipment and assistance are included. If you want to participate in this workshop, please register using our online registration form.

The Endless Forest



"The Endless Forest by Tale of Tales is a game about beauty, wonder, calm and peace. There are no stealth missions, no guns to swap, no armour or enemies. Taking on the role of a somewhat dreamy deer who wanders through an endless forest imbued with magical powers that seem as unpredictable as mesmerising, players of The Endless Forest are invited to hang out and roam amidst beautiful trees, old mysterious ruins, an idyllic pond and happy flower beds. Without a goal of any sorts, they soon find that there's more to the forest than just mere eye candy." Maaike Lauwaert & Martijn Hendriks.
Download the Endless Forest 'social screensaver'.
Read full text.

Virtual Reality therapy for Iraq veterans

The Office of Naval Research has awarded $4 million to the Virtual Reality Medical Center in San Diego to develop virtual reality-based methods for treating post-traumatic stress disorder (PTSD) in war veterans.
Businessweek has an article about this project:
A therapist at the Naval Medical Center in San Diego, Calif., Wood monitors patients' heart and breathing rates and even how much they're sweating to see the effect of the virtual environments. The aim is to get patients to draw on their meditation training to regain perspective—and stay calm—when a stimulus causes an emotional response. "The idea being to be in the high-stimulus environment for a long period of time, maintaining low psycho-physiological arousal," Wood says. "The person then can take that learning in the therapeutic environment and transport it out or generalize it to day-to-day life."

There may be a great need for PTSD therapy among veterans of the war in Iraq. A 2004 study published in the New England Journal of Medicine estimates that PTSD afflicts about 18% of the troops in Iraq. That study took place early in the war, and the figure now may be higher, says Emory psychiatry professor Barbara Rothbaum. She co-owns Virtually Better, a virtual reality treatment company that has received funding from the ONR to develop and test a version of the therapy. "Things over the past few years have gotten even worse," she says. "I hope we're wrong, but I think everybody's expecting probably a higher rate of PTSD." There are currently 127,000 troops stationed in Iraq, according to the U.S. Defense Dept.

The point is not to retraumatize the patients but to allow the individuals to cope with painful experiences. "The concept here is that by doing this in a very modestly paced manner, the person feels little bits of anxiety as they go through this, but not at a level that overwhelms them," says Albert "Skip" Rizzo, a research scientist at the Institute for Creative Technologies. "Eventually they're actually in the Humvee, driving down the road, and children are by the side of the road, and an IED (improvised explosive device) goes off and there's body parts everywhere."


Read the whole article here...


12:25 Posted in Virtual worlds | Permalink | Comments (0) | Tags: cybertherapy

Designing VR Exposure Therapy Simulations for Post-Traumatic Stress Disorders

An article on Serious Game Source by Ari Hollander from Imprint describes how VR can be used to treat post-traumatic stress disorder:

"Virtual reality (VR) provides a tool that can allow therapists to gradually intensify a simulation of the traumatic events rather than relying on pure talk and storytelling. (..) At Imprint Interactive, we have been involved with a number of PTSD VR exposure therapy research projects (..)

These include a simulation of the tragic events of 9/11/01, a simulation of a terrorist bus bombing for a research group at the University of Haifa in Israel, and two simulations for treating U.S. soldiers returning from Middle East conflict. (..)

This is a familiar goal in both game design and VR. In game design we call the engagement process “fun” (..) . In VR we call the engagement process presence.

The (..) goal is to make the environment sufficiently reminiscent of the patient’s experience that it evokes memories of the traumatic events.

These applications include functionality that allows therapists to dynamically control the intensity of the experience for the patient, increasing or decreasing the level of stimulation and tension according to the level of anxiety.

Design Guidelines

Our job as virtual environment designers is to seek the sweet spot on the suspension of disbelief curve and avoid wasting resources that would only be dumped into the Uncanny Valley. In the case of VR Exposure Therapy applications I would suggest that the metaphor does double-duty and can also inform our selection of appropriate content to achieve reminiscence: we seek the sweet spot on the curve where we have included sufficient contextual details to evoke responses from a wide variety of patients without adding too much specific information that could distract from some patients’ experiences. (..)

- Favor the suggestive over the specific. (..)
- Use intentional ambiguity to cover a range of possible scenarios. (..)
- Use systemic designs and parallel information to specify and disambiguate. (..)

More than one researcher has reported that VR Exposure therapy patients, when recalling their therapy experiences, have occasionally described significant components of their experience in VR that were not actually present in the simulation!"

11:16 Posted in Virtual worlds | Permalink | Comments (0) | Tags: cybertherapy

Jul 25, 2006

UK youth addicted to mobile phones

Re-blogged from Smart Mobs

Young people value their mobile phone more than television, a new study has found. The survey, conducted by the London School of Economics and Carphone Warehouse, polled over 16,500 young people in the UK to find out how mobile phones have changed the way we live.

Over a quarter of people aged 18-24 highlighted their mobile phone as more important than the internet, TV, MP3 player and games console. Only 11 per cent chose TV, while nearly half voted for the internet.

VNUnet has the complete story

