
Nov 26, 2007

Vodafone "InsideOut" connects phones to Second Life

Via Textually.org

Vodafone offers a new service called "InsideOut" that allows interaction between characters in Second Life and real phones.

"Both voice calls and text messages can be ferried in and out of the game, with SMSes running a cool L$300 (around $1) and voice calls running L$300 per minute.

Calls and messages placed to Second Life, though, are billed at the same rate as they would be to a traditional German phone."

Nov 25, 2007

Brain2Robot

 
 
Researchers at the Fraunhofer Institute for Computer Architecture and Software Technology FIRST and the Charité hospital in Berlin have developed a new EEG-controlled robot arm, which may one day help people with paralysis.
 
Electrodes attached to the patient's scalp measure the brain's electrical signals, which are amplified and transmitted to a computer. Highly efficient algorithms analyze these signals using a self-learning technique. The software is capable of detecting changes in brain activity that take place even before a movement is carried out. It can recognize and distinguish between the patterns of signals that correspond to an intention to raise the left or right hand, and extract them from the pulses being fired by millions of other neurons in the brain. These neural signal patterns are then converted into control instructions for the computer.

"The aim of the project is to help people with severe motor disabilities to carry out everyday tasks. The advantage of our technology is that it is capable of translating an intended action directly into instructions for the computer," says team leader Florin Popescu. The Brain2Robot project has been granted around 1.3 million euros in research funding under the EU's sixth Framework Programme (FP6). Its focus lies on developing medical applications, in particular control systems for prosthetics, personal robots and wheelchairs.

The researchers have also developed a "thought-controlled typewriter", a communication device that enables severely paralyzed patients to pick out letters of the alphabet and write texts. The robot arm could be ready for commercialization in just a few years' time.
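As a rough illustration of the kind of pipeline described above - band-power features extracted from scalp EEG, then a decision between left- and right-hand intent - here is a minimal sketch on synthetic data. The signal parameters and the simple threshold rule are illustrative assumptions, not Fraunhofer's actual self-learning algorithm:

```python
import numpy as np

def bandpower(signal, fs, low, high):
    """Power of `signal` within [low, high] Hz, via a simple periodogram."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    band = (freqs >= low) & (freqs <= high)
    return psd[band].sum()

def classify_intent(c3, c4, fs=250):
    """Toy left/right decision from sensorimotor mu-band (8-12 Hz) power.

    Mu suppression is contralateral: weaker mu over C3 (left hemisphere)
    suggests an intended right-hand movement, and vice versa.
    """
    p_c3 = bandpower(c3, fs, 8, 12)
    p_c4 = bandpower(c4, fs, 8, 12)
    return "right" if p_c3 < p_c4 else "left"

# Synthetic demo: suppress the mu rhythm on C3 to mimic right-hand intent.
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 250)
mu = np.sin(2 * np.pi * 10 * t)
c3 = 0.2 * mu + 0.1 * rng.standard_normal(t.size)  # suppressed channel
c4 = 1.0 * mu + 0.1 * rng.standard_normal(t.size)  # intact channel
print(classify_intent(c3, c4))  # → right
```

A real system would of course learn its decision boundary from calibration data rather than compare two channels directly, but the feature (band power over sensorimotor cortex) is the same.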

 

 

Press release: Brain2Robot

Project page: Brain2Robot

Quasi-movements: A novel motor-cognitive phenomenon

Quasi-movements: A novel motor-cognitive phenomenon.

Neuropsychologia. 2007 Oct 22;

Authors: Nikulin VV, Hohlefeld FU, Jacobs AM, Curio G

We introduce quasi-movements and define them as volitional movements which are minimized by the subject to such an extent that they finally become undetectable by objective measures. They are intended as overt movements, but the absence of the measurable motor responses and the subjective experience make quasi-movements similar to motor imagery. We used the amplitude dynamics of electroencephalographic alpha oscillations as a marker of the regional involvement of cortical areas in three experimental tasks: movement execution, kinesthetic motor imagery, and quasi-movements. All three conditions were associated with a significant suppression of alpha oscillations over the sensorimotor hand area of the contralateral hemisphere. This suppression was strongest for executed movements, and stronger for quasi-movements than for motor imagery. The topography of alpha suppression was similar in all three conditions. Proprioceptive sensations related to quasi-movements contribute to the assumption that the "sense of movement" can originate from central efferent processes. Quasi-movements are also congruent with the postulated continuity between motor imagery and movement preparation/execution. We also show that in healthy subjects quasi-movements can be effectively used in brain-computer interface research, leading to a significantly smaller classification error (approximately 47% relative decrease) in comparison to the errors obtained with conventionally used motor imagery strategies.
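The alpha-suppression marker used in this study can be sketched numerically. The following toy example (synthetic signals; the band limits, sampling rate, and amplitudes are illustrative assumptions) computes the percentage drop in alpha-band power from rest to task, and reproduces the ordering reported above, execution > quasi-movements > imagery:

```python
import numpy as np

def alpha_power(x, fs, band=(8.0, 13.0)):
    """Alpha-band power of signal x via a simple periodogram."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return psd[sel].sum()

def erd_percent(baseline, task, fs=250):
    """Event-related desynchronization: % drop in alpha power, rest vs task."""
    p0, p1 = alpha_power(baseline, fs), alpha_power(task, fs)
    return 100.0 * (p0 - p1) / p0

rng = np.random.default_rng(1)
t = np.arange(0, 2, 1 / 250)
alpha = np.sin(2 * np.pi * 10 * t)
rest = alpha + 0.1 * rng.standard_normal(t.size)
# Deeper suppression for executed movement than for quasi-movements,
# and deeper for quasi-movements than for imagery, as in the abstract.
exec_mv = 0.2 * alpha + 0.1 * rng.standard_normal(t.size)
quasi = 0.5 * alpha + 0.1 * rng.standard_normal(t.size)
imagery = 0.8 * alpha + 0.1 * rng.standard_normal(t.size)
for name, x in [("execution", exec_mv), ("quasi", quasi), ("imagery", imagery)]:
    print(f"{name}: {erd_percent(rest, x):.0f}% alpha suppression")
```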

Cognitive enhancement report from the BMA

 
The British Medical Association has just released a report on the ethical implications of using medical technology to enhance cognitive function and optimise the brain.

Virtual Eve

 
Researchers from Massey University have created a virtual teacher called Eve that can ask questions, give feedback, discuss solutions, and express emotions. To develop the software for this system, the Massey team observed children interacting with teachers, capturing thousands of images. From these images of facial expressions, gestures and body movements, they developed programs to capture and recognise facial expression, body movement, and significant bio-signals such as heart rate and skin resistance.


 


(Massey University)

 

Video

Nov 19, 2007

Microsoft ESP Debuts as a Platform for Visual Simulation

Via CNN 

Microsoft announced plans for a new visual simulation platform, Microsoft ESP, which uses gaming technology to enable the use of simulation for learning and decision-making.

From the CNN article:

As a platform technology, Microsoft ESP provides a PC-based simulation engine, a comprehensive set of tools, applications programming interfaces, documentation to support code development, content integration and scenario-building capabilities, along with an extensive base of world content that can be tailored for custom solutions. Partners and developers can add structured experiences or missions, content such as terrain and scenery, scenarios, and hardware devices to augment existing solutions, or they can build and deploy new solutions that address the mission-critical requirements of their customers.

To support high-fidelity, dynamic, 3-D immersive experiences, Microsoft ESP includes geographical, cultural, environmental and rich scenery data along with tools for placing objects, scenery and terrain customization, object activation, special effects, and environmental controls including adjustable weather.

Nov 18, 2007

Smart Phone Suggests Things to Do

Researchers at the Palo Alto Research Center (PARC) have developed a mobile application, called Magitti, that uses a combination of cues - including the time of day, a person's location, her past behaviors, and even her text messages - to infer her interests. It then shows a helpful list of suggestions, including concerts, movies, bookstores, and restaurants.
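A toy sketch of how such context cues might be combined into ranked suggestions; the venue list, field names, and scoring weights below are invented for illustration and are not PARC's actual Magitti model:

```python
# Toy venue "database"; names, opening hours, and distances are invented.
VENUES = [
    {"name": "Cafe Brioche", "kind": "restaurant", "open": (7, 22), "km": 0.4},
    {"name": "Aquarius Theatre", "kind": "movie", "open": (12, 23), "km": 1.2},
    {"name": "Bell's Books", "kind": "bookstore", "open": (10, 18), "km": 0.8},
]

def score(venue, hour, prefs):
    """Higher is better: open now, close by, and matching inferred interests."""
    lo, hi = venue["open"]
    if not (lo <= hour < hi):
        return 0.0                               # closed venues score zero
    proximity = 1.0 / (1.0 + venue["km"])        # nearer venues rank higher
    interest = prefs.get(venue["kind"], 0.1)     # inferred from past behavior
    return proximity * interest

def suggest(hour, prefs, top=2):
    """Return the `top` open venues, best score first."""
    ranked = sorted(VENUES, key=lambda v: score(v, hour, prefs), reverse=True)
    return [v["name"] for v in ranked if score(v, hour, prefs) > 0][:top]

# At 8 pm, with a strong inferred interest in food and films:
print(suggest(20, {"restaurant": 0.9, "movie": 0.6}))
# → ['Cafe Brioche', 'Aquarius Theatre']
```

The real system learns the interest weights from observed behavior and text; the point here is just the shape of the inference: context in, ranked activity list out.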

http://www.news.com/2300-1039_3-6210534-1.html

The role of psychophysiology in forensic assessments: Deception detection, ERPs, and virtual reality mock crime scenarios

The role of psychophysiology in forensic assessments: Deception detection, ERPs, and virtual reality mock crime scenarios.

Psychophysiology. 2007 Nov 7;

Authors: Mertens R, Allen JJ

Few data are available to address whether ERP-based deception detection alternatives have sufficient validity for applied use. The present study was designed to replicate and extend J. P. Rosenfeld, M. Soskins, G. Bosh, and A. Ryan's (2004) study, using a virtual reality crime scenario and multiple countermeasures to determine whether ERP-based procedures, including brain fingerprinting, can be rendered less effective by participant manipulation. Bayesian and bootstrapping analytic approaches were used to classify individuals as guilty or innocent. Guilty subjects were detected significantly less frequently than in previous studies; countermeasures further reduced the overall hit rates. Innocent participants remained protected from being falsely accused. Reaction times did not prove suitable for accurate classification. Results suggested that guilty verdicts from ERP-based deception detection approaches are likely to be accurate, but that innocent (or indeterminate) verdicts yield no useful interpretation in an applied setting.
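A minimal sketch of the bootstrapped amplitude-difference test that underlies this kind of ERP-based classification, run on synthetic "probe" and "irrelevant" response amplitudes; the distributions, resample count, and decision threshold are illustrative assumptions, not the authors' actual procedure:

```python
import numpy as np

def bootstrap_guilty(probe, irrelevant, n_boot=2000, threshold=0.9, seed=0):
    """Classify 'guilty' if, across bootstrap resamples, the mean probe ERP
    amplitude exceeds the mean irrelevant amplitude in at least `threshold`
    of resamples - a toy analogue of the bootstrapped amplitude-difference
    test used in P300 deception-detection studies."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_boot):
        p = rng.choice(probe, size=probe.size, replace=True).mean()
        i = rng.choice(irrelevant, size=irrelevant.size, replace=True).mean()
        wins += p > i
    return wins / n_boot >= threshold

rng = np.random.default_rng(42)
probe = rng.normal(6.0, 2.0, 40)       # enhanced response to recognized items
irrelevant = rng.normal(3.0, 2.0, 40)  # smaller response to irrelevant items
print(bootstrap_guilty(probe, irrelevant))  # → True
```

Countermeasures work against this logic by artificially boosting responses to irrelevant items, shrinking the probe/irrelevant gap the bootstrap relies on.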

Virtual reality hardware and graphic display options for brain-machine interfaces

Virtual reality hardware and graphic display options for brain-machine interfaces.

J Neurosci Methods. 2007 Sep 29;

Authors: Marathe AR, Carey HL, Taylor DM

Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target-matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing.

Nov 17, 2007

Consciousness Reframed 9

 
Consciousness Reframed 9: Vienna: July 3-5, 2008
 
Call for Papers - Consciousness Reframed is an international research conference that was first convened in 1997, and is now in its 9th incarnation. It is a forum for transdisciplinary inquiry into art, science, technology and consciousness, drawing upon the expertise and insights of artists, architects, performers, musicians, writers, scientists, and scholars, usually from at least 20 countries. Recent past conferences were convened in Beijing and Perth, Western Australia. This year, the conference will be held on the main campus of the University of Applied Arts Vienna, Austria. The conference will include researchers associated with the Planetary Collegium, which has its CAiiA-hub at Plymouth and nodes at the Nuova Accademia di Belle Arti, Milan, and the Zürcher Hochschule der Künste, Zurich.

Call for Papers: New Realities: Being Syncretic - We cordially invite submissions from artists, theorists and researchers engaged in exploring the most urgent issues in the realm of hybrid inquiries into the fields of art, science, technology and society through theory and practice alike. We specifically encourage submissions that re-frame the concept of innovation in its relationship to progress and change within the context of perception and its transformation.

The Conference will be accompanied by a Book of Abstracts and the Conference Proceedings, including full papers and a DVD, due to be released in autumn 2008 by the renowned scientific publisher SpringerWienNewYork.

Cisco Experimenting with an On-Stage Telepresence Experience

Via Human Productivity Lab

  

 

Cisco demonstrated an "On-Stage" Telepresence experience at the launch of its Globalization Center East in Bangalore, India. During a presentation to the media in Bangalore, Cisco CEO John Chambers "beamed up" Marthin De Beer, Senior Vice President of the Emerging Technology Group at Cisco, and Chuck Stucki, General Manager of the Telepresence Business Unit, from San Jose. The photorealistic, life-size virtual duo from San Jose then interacted with the Cisco CEO and presented to the audience in India. You can check out a video of the launch, including the on-stage telepresence experience, on the Cisco video website here:
http://tools.cisco.com/cmn/jsp/index.jsp?id=67656

Nov 04, 2007

Beat wartime empathy device

Via Pasta & Vinegar

Designer Dominic Muren has created a device that allows a civilian to feel the heartbeat of a soldier:

I think we can all agree that war has become too impersonal. Media coverage emphasizes our distance, and most decision makers in congress don't have children who fight. Beat connects you very directly to a single soldier by thumping their recorded heartbeat against your chest. If they are calm, or worried, or under stress, you feel it. If they die, the heartbeat you feel dies too. If we are going to continue to fight wars, we need better methods of feedback like this one so the costs are more visceral and real for us. Imagine if all politicians were required to wear one of these!!

Time Magazine names Apple iPhone `Invention of the Year'


TIME magazine has named the iPhone "Invention of the Year."


Tactile Video Displays

Via Medgadget


Researchers at the National Institute of Standards and Technology (NIST) have developed a tactile graphic display to help visually impaired people perceive images.

From the NIST press release:

ELIA Life Technology Inc. of New York, N.Y., licensed for commercialization both the tactile graphic display device and fingertip graphic reader developed by NIST researchers. The former, first introduced as a prototype in 2002, allows a person to feel a succession of images on a reusable surface by raising some 3,600 small pins (actuator points) into a pattern that can be locked in place, read by touch and then reset to display the next graphic in line. Each image (from scanned illustrations, Web pages, electronic books or other sources) is sent electronically to the reader, where special software determines how to create a matching tactile display. (For more information, see "NIST 'Pins' Down Imaging System for the Blind".)

An array of about 100 small, very closely spaced (1/10 of a millimeter apart) actuator points set against a user's fingertip is the key to the more recently created "tactile graphic display for localized sensory stimulation." To "view" a computer graphic with this technology, a blind or visually impaired person moves the device-tipped finger across a surface like a computer mouse to scan an image in computer memory. The computer sends a signal to the display device and moves the actuators against the skin to "translate" the pattern, replicating the sensation of the finger moving over the pattern being displayed. With further development, the technology could possibly be used to make fingertip tactile graphics practical for virtual reality systems or give a detailed sense of touch to robotic control (teleoperation) and space suit gloves.
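The core mapping such a device performs - taking the image patch under the fingertip and turning bright pixels into raised pins - can be sketched in a few lines; the window size and brightness threshold here are illustrative assumptions, not NIST's specifications:

```python
import numpy as np

def pin_pattern(image, x, y, size=10, threshold=0.5):
    """Raise the pins under a `size` x `size` fingertip window at (x, y):
    bright pixels (>= threshold) become raised pins (1), dark ones stay flat.
    A toy analogue of scanning an in-memory image with the fingertip device."""
    patch = image[y:y + size, x:x + size]
    return (patch >= threshold).astype(int)

# A synthetic image: a bright vertical bar on a dark background.
img = np.zeros((32, 32))
img[:, 12:16] = 1.0

# As the finger "moves" to (10, 10), the bar falls under the window.
pins = pin_pattern(img, x=10, y=10)
print(pins.sum(), "of", pins.size, "pins raised")  # → 40 of 100 pins raised
```

Re-sampling this pattern as the finger position changes is what replicates the sensation of stroking the displayed shape.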

 

Press release: NIST Licenses Systems to Help the Blind 'See' Images

As soon as the bat met the ball, I knew it was gone

"As soon as the bat met the ball, I knew it was gone": outcome prediction, hindsight bias, and the representation and control of action in expert and novice baseball players.

Psychon Bull Rev. 2007 Aug;14(4):669-75

Authors: Gray R, Beilock SL, Carr TH

A virtual-reality batting task compared novice and expert baseball players' ability to predict the outcomes of their swings as well as the susceptibility of these outcome predictions to hindsight bias--a measure of strength and resistance to distortion of memory for predicted action outcomes. During each swing the simulation stopped when the bat met the ball. Batters marked where on the field they thought the ball would land. Correct feedback was then displayed, after which batters attempted to re-mark the location they had indicated prior to feedback. Expert batters were more accurate than less-skilled individuals in the initial marking and showed less hindsight bias in the postfeedback marking. Furthermore, experts' number of hits in the previous block of trials was positively correlated with prediction accuracy and negatively correlated with hindsight bias. The reverse was true for novices. Thus the ability to predict the outcome of one's performance before such information is available in the environment is based not only on one's overall skill level, but also on how one is performing at a given moment.
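The hindsight-bias measure can be made concrete with a small sketch: how far the recalled mark drifted toward the actual landing spot, as a fraction of the initial prediction error. This exact formula is an illustrative assumption, not necessarily the authors' scoring method:

```python
def hindsight_bias(predicted, recalled, actual):
    """Fraction of the initial prediction error erased in memory after
    feedback: 0 means the original mark was recalled perfectly, 1 means
    the recalled mark has migrated all the way to the actual outcome."""
    initial_error = abs(actual - predicted)
    if initial_error == 0:
        return 0.0  # a perfect prediction leaves no room for bias
    return (initial_error - abs(actual - recalled)) / initial_error

# A batter marks 90 m, the ball lands at 100 m; after feedback they
# "remember" having marked 96 m - 60% of the error has been erased.
print(hindsight_bias(90.0, 96.0, 100.0))  # → 0.6
```

On this scale, the experts' smaller postfeedback drift corresponds to values nearer 0.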

High loads induce differences between actual and imagined movement duration

High loads induce differences between actual and imagined movement duration.

Exp Brain Res. 2007 Nov 1;

Authors: Slifkin AB

Actual and imagined action may be governed by common information and neural processes. This hypothesis has found strong support from a range of chronometric studies showing that it takes the same amount of time to actually move and to imagine moving. However, exceptions have been observed when actual and imagined movements were made under conditions of inertial loading: sometimes the equivalency of actual and imagined movement durations (MDs) has been preserved, and other times it has been disrupted. The purpose of the current study was to test the hypothesis that the appearance and magnitude of actual-imagined MD differences in those studies was dependent on the level of load relative to the maximum loading capacity of the involved effector system [the maximum voluntary load (MVL)]. The experiment required 12 young, healthy humans to actually produce, and to imagine producing, single degree of freedom index finger movements under a range of loads (0, 5, 10, 20, 40, and 80% MVL). As predicted, statistically significant actual-imagined MD differences were absent at lower loads (0-20% MVL), but differences appeared and increased in magnitude with further increases in %MVL (40 and 80% MVL). That pattern of results may relate to the common, everyday experience individuals have in interacting with loads. Participants are likely to have extensive experience interacting with very low loads, but not high loads. It follows that the control of low inertial loads should be governed by complete central representations of action, while representations should be less complete for high loads. A consequence may be increases in the uncertainty of predicting motor output with increases in load. Compensation for the increased uncertainty may appear as increases in the MD values selected during both the preparation and imagery of action, according to a speed-uncertainty trade-off. Then, during actual action, MD may be reduced if movement-related feedback indicates that a faster movement would succeed.

Using movement imagery and electromyography-triggered feedback in stroke rehabilitation

Effects of movement imagery and electromyography-triggered feedback on arm hand function in stroke patients in the subacute phase.

Clin Rehabil. 2007 Jul;21(7):587-94

Authors: Hemmen B, Seelen HA

OBJECTIVE: To investigate the effects of movement imagery-assisted electromyography (EMG)-triggered feedback (focused on paretic wrist dorsiflexors) on the arm-hand function of stroke patients. DESIGN: Single-blinded, longitudinal, multicentre randomized controlled trial. Measurements were performed (on average) 54 days post stroke (baseline), three months later (post training) and at 12 months post baseline. SETTING: Two rehabilitation centres. SUBJECTS: Twenty-seven patients with a first-ever, ischaemic, subacute stroke. INTERVENTIONS: A reference group received conventional electrostimulation, while the experimental group received arm-hand function training based on EMG-triggered feedback combined with movement imagery. Both groups were trained for three months, 5 days/week, 30 minutes/day, in addition to their therapy as usual. MAIN MEASURES: Arm-hand function was evaluated using the upper extremity-related part of the Brunnstrom Fugl-Meyer test and the Action Research Arm test. RESULTS: During training, Brunnstrom Fugl-Meyer scores improved by 8.7 points and Action Research Arm scores by 19.4 points (P < 0.0001) in both groups relative to baseline results, rising to 13.3 and 28.4 points respectively at one-year follow-up (P < 0.0001). No between-group differences were found at any time. CONCLUSIONS: EMG-triggered feedback stimulation did not lead to more arm-hand function improvement relative to conventional electrostimulation. However, in contrast to many clinical reports, a significant improvement was still observed in both groups nine months after treatment ceased.

The use of videotape feedback after stroke

Motor learning and the use of videotape feedback after stroke.

Top Stroke Rehabil. 2007 Sep-Oct;14(5):28-36

Authors: Gilmore PE, Spaulding SJ

BACKGROUND: Efforts have been made to apply motor learning theories to the rehabilitation of individuals following stroke. Motor learning poststroke has not been well investigated in the literature. This research attempted to fill the gap regarding motor learning applied to practice. PURPOSE: This two-group research study attempted to determine the effectiveness of an experimental therapy combining videotape feedback with occupational therapy, compared to occupational therapy alone, in learning the motor skill of donning socks and shoes after stroke. METHOD: Ten participants were randomly assigned to one of the two groups, and all participants were videotaped during pretest and up to 10 treatment sessions aimed at donning socks and shoes. Only one group viewed their videotape replay. The acquisition of donning socks and shoes was measured using the socks and shoes subtests of the Klein-Bell Activities of Daily Living Scale and scores on the Canadian Occupational Performance Measure. RESULTS: There was no significant difference between the two groups and both groups improved. However, the group that received videotape feedback thought they performed better and were more satisfied with their ability to don shoes, lending support for the use of videotape feedback poststroke to improve satisfaction with performance.