

Dec 08, 2007

The emergence of motor imagery in children

The emergence of motor imagery in children.

J Exp Child Psychol. 2007 Nov 28;

Authors: Molina M, Tijus C, Jouen F

A total of 80 children (40 5-year-olds and 40 7-year-olds) took part in an experiment to evaluate their capacity to mentally evoke a motor image of their own displacement. Using a chronometry paradigm, movement duration was compared in a task where children were asked to move in order to take a puppet back to its home (actual) and to think about themselves executing the same action (virtual). Movement durations for actual and virtual displacements were obtained in two conditions, where either no information was provided about the weight of the puppet to be displaced (standard situation) or the puppet was described as being heavy (informed situation). A significant correlation between actual and virtual walking durations was observed for 7-year-olds in the informed condition. This result provides evidence for a motor imagery process emerging in 7-year-olds when children are required to think about themselves in action.
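The chronometry paradigm rests on comparing actual and imagined ("virtual") movement durations; a correlation between the two is taken as evidence of a shared motor process. A minimal sketch of that comparison, using invented durations (all values below are hypothetical, not data from the study):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical walking durations (seconds) for one group of children:
actual = [3.1, 4.0, 5.2, 3.8, 4.6, 5.9]   # measured displacement times
virtual = [3.4, 4.1, 5.6, 3.5, 4.9, 6.2]  # imagined ("virtual") times
r = pearson_r(actual, virtual)
```

A strong positive `r` in the informed condition is the pattern the authors interpret as an emerging motor imagery process.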

Dec 07, 2007

Toyota unveils robot violinist

Via Pink Tentacle

 

Link

Video 

14:03 Posted in AI & robotics | Permalink | Comments (0)

Dec 04, 2007

The analgesic effects of opioids and immersive virtual reality distraction

The analgesic effects of opioids and immersive virtual reality distraction: evidence from subjective and functional brain imaging assessments.

Anesth Analg. 2007 Dec;105(6):1776-83, table of contents

Authors: Hoffman HG, Richards TL, Van Oostrom T, Coda BA, Jensen MP, Blough DK, Sharar SR

BACKGROUND: Immersive virtual reality (VR) is a novel form of distraction analgesia, yet its effects on pain-related brain activity when used adjunctively with opioid analgesics are unknown. We used subjective pain ratings and functional magnetic resonance imaging to measure pain and pain-related brain activity in subjects receiving opioid and/or VR distraction. METHODS: Healthy subjects (n = 9) received thermal pain stimulation and were exposed to four intervention conditions in a within-subjects design: (a) control (no analgesia), (b) opioid administration [hydromorphone (4 ng/mL target plasma level)], (c) immersive VR distraction, and (d) combined opioid + VR. Outcomes included subjective pain reports (0-10 labeled graphic rating scales) and blood oxygen level-dependent assessments of brain activity in five specific, pain-related regions of interest. RESULTS: Opioid alone significantly reduced subjective pain unpleasantness ratings (P < 0.05) and significantly reduced pain-related brain activity in the insula (P < 0.05) and thalamus (P < 0.05). VR alone significantly reduced both worst pain (P < 0.01) and pain unpleasantness (P < 0.01) and significantly reduced pain-related brain activity in the insula (P < 0.05), thalamus (P < 0.05), and SS2 (P < 0.05). Combined opioid + VR reduced pain reports more effectively than did opioid alone on all subjective pain measures (P < 0.01). Patterns of pain-related blood oxygen level-dependent activity were consistent with subjective analgesic reports. CONCLUSIONS: These subjective pain reports and objective functional magnetic resonance imaging results demonstrate converging evidence for the analgesic efficacy of opioid administration alone and VR distraction alone. Furthermore, patterns of pain-related brain activity support the significant subjective analgesic effects of VR distraction when used as an adjunct to opioid analgesia. 
These results provide preliminary data to support the clinical use of multimodal (e.g., combined pharmacologic and nonpharmacologic) analgesic techniques.

Simroid

Via Pink Tentacle 

Simroid is a robotic dental patient designed by Kokoro Company Ltd as a training tool for dentists.

The simulated patient can follow spoken instructions, closely monitor a dentist's performance during mock treatments, and, thanks to sensors in its mouth, react to pain in a human-like way.

Video

Prosthetic Limbs That Can Feel

Via KurzweilAI.net

Researchers at Northwestern University, in Chicago, have shown that transplanting the nerves from an amputated hand to the chest allows patients to feel hand sensation there.

The findings are the first step toward prosthetic arms with sensors on the fingers that will transfer tactile information from the device to the chest, making the wearer feel as though he or she has a real hand.

Full article here 

Google Maps for mobile

Google has announced the new "My Location" service that uses cell-tower ID information as an alternative to GPS technology, which is not widely available on cell phones.

08:24 Posted in Locative media | Permalink | Comments (0) | Tags: locative media

Nov 26, 2007

Vodafone "InsideOut" connects phones to Second Life

Via Textually.org

Vodafone offers a new service called "InsideOut" that allows interaction between characters in Second Life and real phones.

"Both voice calls and text messages can be ferried in and out of the game, with SMSes running a cool L$300 (around $1) and voice calls running L$300 per minute.

Calls and messages placed to Second Life, though, are billed at the same rate as they would be to a traditional German phone."

Nov 25, 2007

Brain2Robot

 
 
Researchers at the Fraunhofer Institute for Computer Architecture and Software Technology FIRST and the Charité hospital in Berlin have developed a new EEG-controlled robot arm, which might one day bring help to people with paralysis.

Electrodes attached to the patient's scalp measure the brain's electrical signals, which are amplified and transmitted to a computer. Highly efficient algorithms analyze these signals using a self-learning technique. The software is capable of detecting changes in brain activity that take place even before a movement is carried out. It can recognize and distinguish between the patterns of signals that correspond to an intention to raise the left or right hand, and extract them from the pulses being fired by millions of other neurons in the brain. These neural signal patterns are then converted into control instructions for the computer.

"The aim of the project is to help people with severe motor disabilities to carry out everyday tasks. The advantage of our technology is that it is capable of translating an intended action directly into instructions for the computer," says team leader Florin Popescu.

The Brain2Robot project has been granted around 1.3 million euros in research funding under the EU's Sixth Framework Programme (FP6). Its focus lies on developing medical applications, in particular control systems for prosthetics, personal robots and wheelchairs. The researchers have also developed a "thought-controlled typewriter", a communication device that enables severely paralyzed patients to pick out letters of the alphabet and write texts. The robot arm could be ready for commercialization in just a few years' time.
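The pipeline described above (measure signals, extract intent-related patterns, convert them to control commands) can be sketched as a toy classifier. Everything below is a hypothetical illustration, not Brain2Robot's actual algorithm: synthetic two-channel "EEG", band power as the feature, and a nearest-class-mean rule for left- vs right-hand intent.

```python
import random

random.seed(0)

def band_power_features(trial):
    """Crude stand-in for band-power extraction: mean squared amplitude per
    channel. A real system would band-pass filter around the mu rhythm
    (~8-12 Hz) before squaring."""
    return [sum(s * s for s in ch) / len(ch) for ch in trial]

def make_trial(attenuated_channel):
    """Synthetic 2-channel trial: motor intent appears as reduced oscillation
    amplitude (event-related desynchronization) on the contralateral channel."""
    trial = []
    for ch in range(2):
        amp = 0.4 if ch == attenuated_channel else 1.0
        trial.append([amp * random.gauss(0, 1) for _ in range(256)])
    return trial

# Label 0 = left-hand intent, 1 = right-hand intent.
train = [(make_trial(lbl), lbl) for lbl in [0, 1] * 20]

# Nearest-class-mean classifier over the two band-power features.
means = {}
for lbl in (0, 1):
    feats = [band_power_features(t) for t, l in train if l == lbl]
    means[lbl] = [sum(col) / len(col) for col in zip(*feats)]

def classify(trial):
    f = band_power_features(trial)
    dist = {lbl: sum((a - b) ** 2 for a, b in zip(f, m))
            for lbl, m in means.items()}
    return min(dist, key=dist.get)

correct = sum(classify(make_trial(lbl)) == lbl for lbl in [0, 1] * 25)
accuracy = correct / 50
```

The classifier's output label would then be mapped to a discrete command for the robot arm; the self-learning aspect mentioned in the article corresponds to fitting the class means (or, in practice, a far richer model) to each patient's own signals.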

 

 

Press release: Brain2Robot

Project page: Brain2Robot

Quasi-movements: A novel motor-cognitive phenomenon

Quasi-movements: A novel motor-cognitive phenomenon.

Neuropsychologia. 2007 Oct 22;

Authors: Nikulin VV, Hohlefeld FU, Jacobs AM, Curio G

We introduce quasi-movements and define them as volitional movements which are minimized by the subject to such an extent that they finally become undetectable by objective measures. They are intended as overt movements, but the absence of measurable motor responses and the subjective experience make quasi-movements similar to motor imagery. We used the amplitude dynamics of electroencephalographic alpha oscillations as a marker of the regional involvement of cortical areas in three experimental tasks: movement execution, kinesthetic motor imagery, and quasi-movements. All three conditions were associated with a significant suppression of alpha oscillations over the sensorimotor hand area of the contralateral hemisphere. This suppression was strongest for executed movements, and stronger for quasi-movements than for motor imagery. The topography of alpha suppression was similar in all three conditions. Proprioceptive sensations related to quasi-movements support the assumption that the "sense of movement" can originate from central efferent processes. Quasi-movements are also congruent with the postulated continuity between motor imagery and movement preparation/execution. We also show that in healthy subjects quasi-movements can be effectively used in brain-computer interface research, leading to a significantly smaller classification error (approximately 47% relative decrease) in comparison to the errors obtained with conventionally used motor imagery strategies.
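Alpha suppression is commonly quantified as event-related desynchronization (ERD): the percent change in alpha-band power during a task relative to rest, with more negative values meaning stronger suppression. A minimal sketch of that computation; the power values are invented, chosen only to reproduce the ordering reported in the abstract (execution < quasi-movements < motor imagery):

```python
def erd_percent(power_rest, power_task):
    """Event-related desynchronization as percent change relative to rest.
    Negative values indicate alpha suppression."""
    return 100.0 * (power_task - power_rest) / power_rest

# Hypothetical alpha-band power over the contralateral sensorimotor hand area:
rest = 10.0
conditions = {
    "execution": 3.0,       # strongest suppression
    "quasi-movement": 5.0,  # intermediate
    "motor imagery": 7.0,   # weakest of the three
}
erd = {name: erd_percent(rest, p) for name, p in conditions.items()}
```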

Cognitive enhancement on BMA

 
The British Medical Association has just released a report on the ethical implications of using medical technology to enhance cognitive function and optimise the brain.

Virtual Eve

 
Researchers from Massey University have created a virtual teacher called Eve that can ask questions, give feedback, discuss solutions, and express emotions. To develop the software for this system, the Massey team observed children interacting with teachers and captured thousands of images of these interactions. From these images of facial expressions, gestures and body movements, they developed programs to capture and recognise facial expression, body movement, and significant bio-signals such as heart rate and skin resistance.

(Massey University)

Video

Nov 19, 2007

Microsoft ESP Debuts as a Platform for Visual Simulation

Via CNN 

Microsoft announced plans for a new visual simulation platform, Microsoft ESP, which uses gaming technology to enable the use of simulation for learning and decision-making.

From the CNN article:

As a platform technology, Microsoft ESP provides a PC-based simulation engine, a comprehensive set of tools, applications programming interfaces, documentation to support code development, content integration and scenario-building capabilities, along with an extensive base of world content that can be tailored for custom solutions. Partners and developers can add structured experiences or missions, content such as terrain and scenery, scenarios, and hardware devices to augment existing solutions, or they can build and deploy new solutions that address the mission-critical requirements of their customers.

To support high-fidelity, dynamic, 3-D immersive experiences, Microsoft ESP includes geographical, cultural, environmental and rich scenery data along with tools for placing objects, scenery and terrain customization, object activation, special effects, and environmental controls including adjustable weather.

Nov 18, 2007

Smart Phone Suggests Things to Do

Researchers at the Palo Alto Research Center (PARC) have developed a mobile application, called Magitti, that uses a combination of cues - including the time of day, a person's location, her past behaviors, and even her text messages - to infer her interests. It then shows a helpful list of suggestions, including concerts, movies, bookstores, and restaurants.

http://www.news.com/2300-1039_3-6210534-1.html
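The article does not describe Magitti's actual model, but the general idea (context cues filter and rank candidate suggestions) can be sketched as a toy scorer. All venue names, opening hours and affinities below are invented for illustration:

```python
# Each venue carries simple context affinities; a suggestion must be open at
# the current hour and is ranked by overlap with the user's inferred interests.
VENUES = [
    {"name": "cafe",      "hours": range(7, 18),  "interests": {"coffee", "reading"}},
    {"name": "cinema",    "hours": range(14, 24), "interests": {"movies"}},
    {"name": "bookstore", "hours": range(9, 21),  "interests": {"reading", "books"}},
]

def suggest(hour, interests, venues=VENUES):
    """Rank venues that are open now by overlap with inferred interests."""
    scored = []
    for v in venues:
        if hour not in v["hours"]:
            continue  # closed: never suggest
        scored.append((len(interests & v["interests"]), v["name"]))
    scored.sort(reverse=True)
    return [name for score, name in scored if score > 0]

suggestions = suggest(hour=10, interests={"reading", "coffee"})
```

A real system like Magitti would replace the hand-written affinities with interests learned from past behavior and text, but the filter-then-rank shape stays the same.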

The role of psychophysiology in forensic assessments: Deception detection, ERPs, and virtual reality mock crime scenarios

The role of psychophysiology in forensic assessments: Deception detection, ERPs, and virtual reality mock crime scenarios.

Psychophysiology. 2007 Nov 7;

Authors: Mertens R, Allen JJ

Few data are available to address whether ERP-based deception detection alternatives have sufficient validity for applied use. The present study was designed to replicate and extend J. P. Rosenfeld, M. Soskins, G. Bosh, and A. Ryan's (2004) study by employing a virtual reality crime scenario and multiple countermeasures to determine whether ERP-based procedures, including brain fingerprinting, can be rendered less effective by participant manipulation. Bayesian and bootstrapping analytic approaches were used to classify individuals as guilty or innocent. Guilty subjects were detected significantly less frequently compared to previous studies; countermeasures further reduced the overall hit rates. Innocent participants remained protected from being falsely accused. Reaction times did not prove suitable for accurate classification. Results suggested that guilty verdicts from ERP-based deception detection approaches are likely to be accurate, but that innocent (or indeterminate) verdicts yield no useful interpretation in an applied setting.

Virtual reality hardware and graphic display options for brain-machine interfaces

Virtual reality hardware and graphic display options for brain-machine interfaces.

J Neurosci Methods. 2007 Sep 29;

Authors: Marathe AR, Carey HL, Taylor DM

Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target-matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing.

Nov 17, 2007

Consciousness Reframed 9

 
Consciousness Reframed 9: Vienna, July 3-5, 2008

Call for Papers - Consciousness Reframed is an international research conference that was first convened in 1997, and is now in its 9th incarnation. It is a forum for transdisciplinary inquiry into art, science, technology and consciousness, drawing upon the expertise and insights of artists, architects, performers, musicians, writers, scientists, and scholars, usually from at least 20 countries. Recent past conferences were convened in Beijing and Perth, Western Australia. This year, the conference will be held on the main campus of the University of Applied Arts Vienna, Austria. The conference will include researchers associated with the Planetary Collegium, which has its CAiiA-hub at Plymouth and nodes at the Nuova Accademia di Belle Arti, Milan, and the Zürcher Hochschule der Künste, Zurich.

Call for Papers: New Realities: Being Syncretic - We cordially invite submissions from artists, theorists and researchers engaged in exploring the most urgent issues in the realm of hybrid inquiries into the fields of art, science, technology and society through theory and practice alike. We specifically encourage submissions that re-frame the concept of innovation in its relationship to progress and change within the context of perception and its transformation.

The Conference will be accompanied by a Book of Abstracts and the Conference Proceedings including full papers and a DVD, due to be released autumn 2008 by the renowned scientific publisher SpringerWienNewYork.

Cisco Experimenting with an On-Stage Telepresence Experience

Via Human Productivity Lab

Cisco demonstrated an "On-Stage" Telepresence experience at the launch of their Globalization Center East in Bangalore, India. During a presentation to the media in Bangalore, Cisco CEO John Chambers "beamed up" Marthin De Beer, Senior Vice President of the Emerging Technology Group at Cisco, and Chuck Stucki, the General Manager of the Telepresence Business Unit, from San Jose. The photorealistic and lifesize virtual duo from San Jose then interacted with the Cisco CEO and presented to the audience in India. You can check out a video of the launch, including the on-stage telepresence experience, on the Cisco video website here:
http://tools.cisco.com/cmn/jsp/index.jsp?id=67656

Nov 04, 2007

Beat: a wartime empathy device

Via Pasta & Vinegar

Designer Dominic Muren has created a device that allows a civilian to feel the heartbeat of a soldier:

I think we can all agree that war has become too impersonal. Media coverage emphasizes our distance, and most decision makers in congress don't have children who fight. Beat connects you very directly to a single soldier by thumping their recorded heartbeat against your chest. If they are calm, or worried, or under stress, you feel it. If they die, the heartbeat you feel dies too. If we are going to continue to fight wars, we need better methods of feedback like this one so the costs are more visceral and real for us. Imagine if all politicians were required to wear one of these!!

Time Magazine names Apple iPhone "Invention of the Year"


TIME magazine has named the iPhone "Invention of the Year."

19:15 Posted in Wearable & mobile | Permalink | Comments (0) | Tags: mobile, wearable

Tactile Video Displays

Via Medgadget


Researchers at the National Institute of Standards and Technology (NIST) have developed a tactile graphic display to help visually impaired people perceive images.

From the NIST press release:

ELIA Life Technology Inc. of New York, N.Y., licensed for commercialization both the tactile graphic display device and fingertip graphic reader developed by NIST researchers. The former, first introduced as a prototype in 2002, allows a person to feel a succession of images on a reusable surface by raising some 3,600 small pins (actuator points) into a pattern that can be locked in place, read by touch and then reset to display the next graphic in line. Each image - from scanned illustrations, Web pages, electronic books or other sources - is sent electronically to the reader, where special software determines how to create a matching tactile display. (For more information, see "NIST 'Pins' Down Imaging System for the Blind".)

An array of about 100 small, very closely spaced (1/10 of a millimeter apart) actuator points set against a user's fingertip is the key to the more recently created "tactile graphic display for localized sensory stimulation." To "view" a computer graphic with this technology, a blind or visually impaired person moves the device-tipped finger across a surface like a computer mouse to scan an image in computer memory. The computer sends a signal to the display device and moves the actuators against the skin to "translate" the pattern, replicating the sensation of the finger moving over the pattern being displayed. With further development, the technology could possibly be used to make fingertip tactile graphics practical for virtual reality systems or give a detailed sense of touch to robotic control (teleoperation) and space suit gloves.
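The core mapping from image to pin pattern (raise pins where the image is dark, lower them elsewhere) can be sketched as a simple threshold over a grayscale patch. This illustrates the general principle only, not NIST's software; the 4x4 patch and threshold are invented, and a real device would also downsample the image to the actuator grid's resolution:

```python
def to_pin_pattern(image, threshold=128):
    """Map a grayscale image (rows of 0-255 values) onto a binary pin
    matrix: 1 = pin raised, 0 = pin lowered. Dark pixels raise pins so
    the fingertip traces dark lines in the source image."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

# Hypothetical 4x4 grayscale patch: a dark diagonal on a light background.
patch = [
    [ 20, 200, 200, 200],
    [200,  20, 200, 200],
    [200, 200,  20, 200],
    [200, 200, 200,  20],
]
pins = to_pin_pattern(patch)
```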

 

Press release: NIST Licenses Systems to Help the Blind 'See' Images