
Nov 04, 2007

As soon as the bat met the ball, I knew it was gone

"As soon as the bat met the ball, I knew it was gone": outcome prediction, hindsight bias, and the representation and control of action in expert and novice baseball players.

Psychon Bull Rev. 2007 Aug;14(4):669-75

Authors: Gray R, Beilock SL, Carr TH

A virtual-reality batting task compared novice and expert baseball players' ability to predict the outcomes of their swings as well as the susceptibility of these outcome predictions to hindsight bias--a measure of strength and resistance to distortion of memory for predicted action outcomes. During each swing the simulation stopped when the bat met the ball. Batters marked where on the field they thought the ball would land. Correct feedback was then displayed, after which batters attempted to re-mark the location they had indicated prior to feedback. Expert batters were more accurate than less-skilled individuals in the initial marking and showed less hindsight bias in the postfeedback marking. Furthermore, experts' number of hits in the previous block of trials was positively correlated with prediction accuracy and negatively correlated with hindsight bias. The reverse was true for novices. Thus the ability to predict the outcome of one's performance before such information is available in the environment is based not only on one's overall skill level, but also on how one is performing at a given moment.

High loads induce differences between actual and imagined movement duration

High loads induce differences between actual and imagined movement duration.

Exp Brain Res. 2007 Nov 1;

Authors: Slifkin AB

Actual and imagined action may be governed by common information and neural processes. This hypothesis has found strong support from a range of chronometric studies showing that it takes the same amount of time to actually move and to imagine moving. However, exceptions have been observed when actual and imagined movements were made under conditions of inertial loading: sometimes the equivalency of actual and imagined movement durations (MDs) has been preserved, and other times it has been disrupted. The purpose of the current study was to test the hypothesis that the appearance and magnitude of actual-imagined MD differences in those studies was dependent on the level of load relative to the maximum loading capacity of the involved effector system [the maximum voluntary load (MVL)]. The experiment required 12 young, healthy humans to actually produce, and to imagine producing, single degree of freedom index finger movements under a range of loads (0, 5, 10, 20, 40, and 80% MVL). As predicted, statistically significant actual-imagined MD differences were absent at lower loads (0-20% MVL), but differences appeared and increased in magnitude with further increases in %MVL (40 and 80% MVL). That pattern of results may relate to the common, everyday experience individuals have in interacting with loads. Participants are likely to have extensive experience interacting with very low loads, but not high loads. It follows that the control of low inertial loads should be governed by complete central representations of action, while representations should be less complete for high loads. A consequence may be increases in the uncertainty of predicting motor output with increases in load. Compensation for the increased uncertainty may appear as increases in the MD values selected during both the preparation and imagery of action, according to a speed-uncertainty trade-off. Then, during actual action, MD may be reduced if movement-related feedback indicates that a faster movement would succeed.

Using movement imagery and electromyography-triggered feedback in stroke rehabilitation

Effects of movement imagery and electromyography-triggered feedback on arm hand function in stroke patients in the subacute phase.

Clin Rehabil. 2007 Jul;21(7):587-94

Authors: Hemmen B, Seelen HA

OBJECTIVE: To investigate the effects of movement imagery-assisted electromyography (EMG)-triggered feedback (focused on paretic wrist dorsiflexors) on the arm-hand function of stroke patients. DESIGN: Single-blinded, longitudinal, multicentre randomized controlled trial. Measurements were performed (on average) 54 days post stroke (baseline), three months later (post training) and at 12 months post baseline. SETTING: Two rehabilitation centres. SUBJECTS: Twenty-seven patients with a first-ever, ischaemic, subacute stroke. INTERVENTIONS: A reference group received conventional electrostimulation, while the experimental group received arm-hand function training based on EMG-triggered feedback combined with movement imagery. Both groups were trained for three months, 5 days/week, 30 minutes/day, in addition to their therapy as usual. MAIN MEASURES: Arm-hand function was evaluated using the upper extremity-related part of the Brunnstrom Fugl-Meyer test and the Action Research Arm test. RESULTS: During training, Brunnstrom Fugl-Meyer scores improved by 8.7 points and Action Research Arm scores by 19.4 points (P < 0.0001) in both groups relative to baseline results, rising to 13.3 and 28.4 points respectively at one year follow-up (P < 0.0001). No between-group differences were found at any time. CONCLUSIONS: EMG-triggered feedback stimulation did not lead to more arm-hand function improvement relative to conventional electrostimulation. However, in contrast to many clinical reports, a significant improvement was still observed in both groups nine months after treatment ceased.

The use of videotape feedback after stroke

Motor learning and the use of videotape feedback after stroke.

Top Stroke Rehabil. 2007 Sep-Oct;14(5):28-36

Authors: Gilmore PE, Spaulding SJ

BACKGROUND: Efforts have been made to apply motor learning theories to the rehabilitation of individuals following stroke. Motor learning poststroke has not been well investigated in the literature. This research attempted to fill the gap regarding motor learning applied to practice. PURPOSE: This two-group research study attempted to determine the effectiveness of an experimental therapy combining videotape feedback with occupational therapy, compared with occupational therapy alone, in learning the motor skill of donning socks and shoes after stroke. METHOD: Ten participants were randomly assigned to one of the two groups, and all participants were videotaped during pretest and up to 10 treatment sessions aimed at donning socks and shoes. Only one group viewed their videotape replay. The acquisition of donning socks and shoes was measured using the socks and shoes subtests of the Klein-Bell Activities of Daily Living Scale and the Canadian Occupational Performance Measure. RESULTS: There was no significant difference between the two groups, and both groups improved. However, the group that received videotape feedback thought they performed better and were more satisfied with their ability to don shoes, lending support for the use of videotape feedback poststroke to improve satisfaction with performance.

Oct 27, 2007

LevelHead

Re-blogged from Networked Performance

 


 

levelHead is an interactive game that uses a cube, a webcam, and pattern recognition. When the cube is rotated or tilted in front of the camera, the user can see ‘inside’ the cube and guide a small avatar through six different rooms.

Pattern recognition has already been used in several other projects, but this is a new way of using it, and a new way of thinking about the technology. The idea behind the game itself is rather simple: when the cube is tilted, the avatar moves in the corresponding direction. The goal of the game is to guide him through a maze of rooms connected by doors and lead him to the outside world.
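The control mapping just described can be sketched in a few lines. The snippet below is a hypothetical simplification in Python: it assumes a marker-tracking library has already recovered the cube's tilt angles from the webcam frames, and shows only how those angles might drive the avatar (all names and constants are illustrative, not taken from the actual game):

```python
# Sketch: turn an estimated cube tilt into a per-frame avatar step.
# Assumes tilt_x / tilt_y (in degrees) come from an external
# pattern-recognition step; everything here is illustrative.

DEAD_ZONE = 5.0   # degrees of tilt to ignore, so the avatar can stand still
SPEED = 0.1       # distance moved per frame at maximum tilt

def avatar_step(tilt_x, tilt_y, max_tilt=45.0):
    """Map cube tilt (degrees) to a (dx, dy) avatar step."""
    def axis(t):
        if abs(t) < DEAD_ZONE:
            return 0.0
        t = max(-max_tilt, min(max_tilt, t))  # clamp to the usable range
        return SPEED * t / max_tilt           # linear scaling
    return axis(tilt_x), axis(tilt_y)
```

Tilting the cube fully on one axis would then move the avatar one full step per frame on that axis, while tilts inside the dead zone leave it stationary.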

According to the creator, Julian Oliver, the game is currently in development but will be released as open source soon.

Check out the explanatory video.

 

Effects of natural stress relief meditation on trait anxiety

Effects of natural stress relief meditation on trait anxiety: a pilot study.

Psychol Rep. 2007 Aug;101(1):130-4

Authors: Coppola F

Natural Stress Relief meditation, a mental technique which is practiced for 15 minutes twice a day, aims to reduce stress and anxiety by eliciting a specific state of physiological rest along with mental alertness. The meditation is taught in a self-administered program, requiring one hour of training during the first three days, followed by the regular twice daily practice. Each 15-min. session consists of sitting quietly with closed eyes while applying a specific mental procedure. To test the effectiveness of meditation in reducing trait anxiety, Spielberger's State-Trait Anxiety Inventory was administered to 25 participants four times over a 3-wk. period: one week before starting to practice the meditation, a few hours before starting, 1 wk. after, and 2 wk. after. The difference in Trait Anxiety score between pretreatment and before starting the practice was not significant, while it was significant both after the first week of practice (Cohen d=.46) and after the first 2 wk. of practice (d=.67).
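The d values quoted are Cohen's d effect sizes. For a pre/post design like this one, a common convention is to divide the mean change by the standard deviation of the change scores; the sketch below uses fabricated scores, not the study's data:

```python
from statistics import mean, stdev

def cohens_d(pre, post):
    """Cohen's d for paired scores: mean reduction / SD of the reductions."""
    diffs = [a - b for a, b in zip(pre, post)]  # positive = anxiety dropped
    return mean(diffs) / stdev(diffs)

# Illustrative (fabricated) trait-anxiety scores for five participants
pre = [48, 52, 45, 50, 55]
post = [44, 47, 43, 46, 49]
d = cohens_d(pre, post)
```

Note that other conventions exist (e.g. dividing by the pooled pretest SD), and they can give noticeably different values for the same data.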

Pocket Supercomputer

 
Researchers at Accenture Technology Labs in France have developed the "Pocket Supercomputer," which automatically identifies objects in a video, using any ordinary 3G cellphone equipped with a video camera.

Oct 25, 2007

Darkness-enhanced startle responses in ecologically valid environments

Darkness-enhanced startle responses in ecologically valid environments: A virtual tunnel driving experiment.

Biol Psychol. 2007 Sep 14;

Authors: Mühlberger A, Wieser MJ, Pauli P

Using the startle reflex methodology, researchers have shown that darkness, a phylogenetically relevant aversive context for humans, elicits fear responses. The present study replicated these findings in an ecologically valid situation, a virtual tunnel drive. Furthermore, the study focused on whether the darkness-enhanced startle response is modulated by an additional task involvement of the participants. Startle responses were assessed during virtual tunnel drives with darker and brighter sections. Participants once actively drove the virtual car and once passively sat in the car as a passenger. We found more negative feelings during darker parts of the virtual tunnel and during active driving. However, facilitated startle reactions in darkness were restricted to passive drives. Furthermore, correlation analyses revealed that darkness-enhanced startle modulation was more pronounced in participants with lower state anxiety. These results extend earlier findings to an experimental paradigm using ecologically valid virtual environments. Further research should use virtual reality paradigms to address context-dependent research questions.

Japanese android recognizes and uses body language

Via Pink Tentacle

NICT develops robot with nonverbal communication skills --

 

Japan’s National Institute of Information and Communications Technology (NICT) researchers have developed an autonomous humanoid robot that can recognize and use body language. According to the press release, the android can use nonverbal communication skills such as gestures and touch to facilitate natural interaction with humans. NICT researchers envision future applications of this technology in robots that can work in the home or assist with rescue operations when disaster strikes.

 

NICT press release (in Japanese)

Oct 24, 2007

Movement imagery increases pain in people with neuropathic pain following complete thoracic spinal cord injury

Movement imagery increases pain in people with neuropathic pain following complete thoracic spinal cord injury.

Pain. 2007 Oct 15;

Authors: Gustin SM, Wrigley PJ, Gandevia SC, Middleton JW, Henderson LA, Siddall PJ

Spinal cord injury (SCI) results in deafferentation and the onset of neuropathic pain in a substantial proportion of people. Based on evidence suggesting that motor cortex activation results in attenuation of neuropathic pain, we sought to determine whether neuropathic SCI pain could be modified by imagined movements of the foot. Fifteen subjects with a complete thoracic SCI (7 with below-level neuropathic pain and 8 without pain) were instructed in the use of movement imagery. Movement imagery was practiced three times daily for 7 days. On the eighth day, subjects performed the movement imagery in the laboratory and recorded pain ratings during the period of imagined movement. Six out of 7 subjects with neuropathic pain reported an increase in pain during imagined movements, from 2.9+/-0.7 at baseline to 5.0+/-1.0 during movement imagery (p<0.01). In SCI subjects without neuropathic pain, movement imagery evoked an increase in non-painful sensation intensity from a baseline of 1.9+/-0.7 to 4.8+/-1.3 during the movement imagery (p<0.01). Two subjects without a history of pain or non-painful phantom sensations had onset of dysesthesia while performing imagined movements. This study reports an exacerbation of pain in response to imagined movements, which contrasts with reports of pain reduction in people with peripheral neuropathic pain. The potential mechanisms underlying this sensory enhancement with movement imagery are discussed.

Oct 22, 2007

DARPA next generation prosthetic arm

The Boston Globe has an article by Scott Kirsner about the next generation of prosthetic limbs under development at DARPA



From the article: 

Without any covering to emulate human skin - what those in the prosthetics field call a "cosmesis" - the arm is distinctly robotic, all metal cylinders and dark gray carbon fiber. I held out my index finger, and Van Der Merwe manipulated the arm so that the index finger and thumb grabbed my fingertip and squeezed lightly. Suddenly, there was a buzzing sound. "That's a sensor in the fingers letting me know how hard I'm squeezing," Van Der Merwe explained. A few minutes later, when I shook hands, the grip was firm (if not warm), and Van Der Merwe chided me for not shaking more vigorously. I didn't want to break the hand and get on the Pentagon's bad side.

 

Check out the video.

Oct 21, 2007

AjaxLife

A new application called AjaxLife allows browser-based interaction with the online virtual world SecondLife.

Using AjaxLife, users do not need a video card and a graphical client to connect to the SecondLife world. However, the AjaxLife client supports only limited functions, such as in-game chatting, teleporting to various locations and checking the status of the user's inventory, friends and Linden Dollars, and there is no representation of in-game avatars.

The web application was developed by Katharine Barry, a fifteen-year-old (!) English schoolgirl.

 


Oct 20, 2007

fMRI Analysis of Neural Mechanisms Underlying Rehabilitation in VR

fMRI Analysis of Neural Mechanisms Underlying Rehabilitation in Virtual Reality: Activating Secondary Motor Areas.

Conf Proc IEEE Eng Med Biol Soc. 2006;1:3692-3695

Authors: August K, Lewis JA, Chandar G, Merians A, Biswal B, Adamovich S

A pilot functional MRI study on a control subject investigated the possibility of inducing increased neural activations in primary, as well as secondary motor areas through virtual reality-based exercises of the hand. These areas are known to be important in effective motor output in stroke patients with impaired corticospinal systems. We found increased activations in these brain areas during hand exercises in VR when compared to vision of non-anthropomorphic shapes. Further studies are needed to investigate the potential of virtual reality-based rehabilitation for tapping into the properties of the mirror neuron system to stimulate plasticity in sensorimotor areas.

A Low Cost Human Computer Interface based on Eye Tracking

A Low Cost Human Computer Interface based on Eye Tracking.

Conf Proc IEEE Eng Med Biol Soc. 2006;1:3226-3229

Authors: Hiley JB, Redekopp AH, Fazel-Rezai R

This paper describes the implementation of a human computer interface based on eye tracking. Commercial systems exist, but their high cost limits their use. The system described in this paper was designed to be low cost and unobtrusive. The technique was video-oculography assisted by corneal reflections. An off-the-shelf CCD webcam was used to capture images. The images were analyzed in software to extract key features of the eye. The user's gaze point was then calculated based on the relative position of these features. The system is capable of calculating eye gaze in real time to provide a responsive interaction. A throughput of eight gaze points per second was achieved. The accuracy of the fixations based on the calculated eye gazes was within 1 cm of the on-screen gaze location. By developing a low-cost system, this technology is made accessible to a wider range of applications.
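The gaze computation itself isn't detailed in the abstract, but the core idea of video-oculography with corneal reflections (mapping the pupil-to-reflection offset onto screen coordinates) can be sketched with a two-point linear calibration per axis. This is a hypothetical simplification; real systems usually fit polynomial models over many calibration targets:

```python
def calibrate_axis(v1, s1, v2, s2):
    """Fit screen = gain * vector + offset from two calibration samples."""
    gain = (s2 - s1) / (v2 - v1)
    return gain, s1 - gain * v1

def make_gaze_mapper(cal_x, cal_y):
    """Build a mapper from pupil-minus-reflection offsets (camera pixels)
    to on-screen coordinates, one linear fit per axis."""
    gx, ox = calibrate_axis(*cal_x)
    gy, oy = calibrate_axis(*cal_y)
    def gaze(dx, dy):
        return gx * dx + ox, gy * dy + oy
    return gaze

# Calibration: the user fixates two known points per axis while the
# pupil-reflection offset is recorded. These values are made up.
gaze = make_gaze_mapper(cal_x=(-10, 0, 10, 1024), cal_y=(-8, 0, 8, 768))
```

With the made-up calibration above, an offset of (0, 0) maps to the screen centre (512, 384), and the extreme calibrated offsets map to the screen edges.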

My Map: Email visualization

Re-blogged from Information Aesthetics


My Map is a data visualization application that renders the relationships between the user and the individuals in the address book by examining the To:, From:, and Cc: fields of every email in the 60,000-message email archive.

The intensity of each relationship is indicated by the colour intensity of the line. My Map lets the user explore different relational groupings and periods of time, revealing the temporal ebbs and flows in various relationships. My Map thus becomes a veritable self-portrait, a visual reflection of personal associations.
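The header-scanning step such a visualization needs can be sketched with Python's standard library (illustrative only; the actual application's pipeline is not described here):

```python
from collections import Counter
from email import message_from_string
from email.utils import getaddresses

def tally_contacts(raw_messages, me):
    """Count how often each address co-occurs with `me` in To/From/Cc."""
    counts = Counter()
    for raw in raw_messages:
        msg = message_from_string(raw)
        headers = (msg.get_all('To', []) + msg.get_all('From', [])
                   + msg.get_all('Cc', []))
        for _name, addr in getaddresses(headers):
            if addr and addr != me:
                counts[addr] += 1
    return counts
```

The per-address counts could then drive the colour intensity of each line in the visualization.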

 

link: christopherbaker.net

Computational Arts and Creative Informatics

Via Networked Performance

CALL FOR CHAPTERS
 
Proposals Deadline: November 15, 2007
Full Articles Due: February 28, 2008.

At the core of the fundamental questions of “what is art” and “what is technology,” we focus The Handbook of Research on Computational Arts and Creative Informatics on the convergence of computer science and artistic creativity. We seek to discover new ways to conceptualize art and to investigate new methods of self-expression through the use of technology. We are inviting experts in a wide range of disciplines to discuss the emergence of expression and art through science, information technology, computer science, artificial intelligence and many other related fields. We see this book as a comprehensive resource where artists and scientists can collaborate on ideas, practices and experimentation on these topics. As technology becomes meshed further into our culture and everyday lives, new mediums and outlets for creative expression and innovation abound. We are emphasizing the creative nature of technology and science itself. How does the human side of technological achievement influence our creative abilities, given that technology is a creation in itself? Has the ontology of the information age influenced society at the level of both the human and the non-human? Through this handbook we address novel concepts spanning the creation, interaction, communication, interpretation and emergence of art through various technological means and mediums.

The Handbook of Research on Computational Arts and Creative Informatics will cover a comprehensive range of topics regarding the interaction of the sciences and the arts. Key concepts, theories and innovations involving the computational side of the arts and other creative technologies will be discussed. This volume will feature chapters (8,000-10,000 words) written by experts and leading innovators in their respective fields.

Recommended topics include, but are not limited to, the following:

+Essays and Discussion on Art and Technology
+Art and web design
+Fractals, tessellations and Creativity through mathematical expression
+Interactive and computational sculptures and artworks
+Kinetic sculptures
+Creativity as an emergent property of art and science
+Digital art and creative expressions
+The creative process in IT education
+Art created by Artificial Life and Intelligent Agents
+Creativity in computer interface and web design
+Creativity from emergent properties
+Art expressed or created by multi-agent systems
+Virtual spaces and Art of synthetic/ virtual worlds
+Art and expression through information visualization
+Animation, simulation and modeling
+3-D artwork
+Art for the blind and visually impaired- Universal Creativity
+Human expression through cybernetics
+Robotics and Art
+Future trends in Art as influenced by emerging technologies

Submissions: Interested individuals are invited to submit a 1-2 page manuscript on their proposed chapter by November 15, 2007. We also encourage proposals on topics not mentioned above that relate to both the theme of the handbook and your particular research area or expertise. Upon acceptance of your proposal, you will have until February 28, 2008 to submit a completed draft of your chapter.

A set of guidelines will also be sent to you upon acceptance. Each potential author will receive notification on their acceptance status by November 30, 2007.


A wrist-mounted instrument for measuring psychosocial stress

Via AATP Interactive 

Researchers from Carnegie Mellon University and the University of Pittsburgh are investigating psychosocial stress exposure during the course of daily life using an instrument called eWatch, a multisensor package about the size of a large wristwatch. The eWatch can sense sound, motion, ambient light, skin temperature and other factors that provide clues about the wearer’s location, health status and current activity. 

From the news release:

Every 45 minutes over the course of five days, the eWatch will prompt wearers to take part in a 2-to-3-minute interview. The instrument will record their response to questions about their current activities, such as “Working hard?” and “Working fast?” By the end of the study, several hundred people will have tested the eWatch.

Previous research has shown that responses to such interviews help predict who will show higher rates of plaque development in the arteries, a risk factor for heart attack or stroke. Using interviews in real time allows researchers to quantify how stressors affect one’s daily life, as well as to pinpoint when these effects begin and when they end.

Use of the eWatch technology should assist researchers in finding the optimal method for responding to such interviews during daily activities, whether by pressing a button, moving the wrist or speaking into a wireless ear bud device. Environmental data collected by the eWatch also may assist the researchers in characterizing the types of environments people find most stressful, so that their location, such as home or work, may be recorded automatically.

“We want to capture a slice of life in people’s daily routine,” says Kamarck. “We hope that these new tools will allow us to do so while minimizing disruptions imposed by the act of measurement.”

 

Short-term meditation training improves attention and self-regulation

Short-term meditation training improves attention and self-regulation.

Proc Natl Acad Sci U S A. 2007 Oct 11;

Authors: Tang YY, Ma Y, Wang J, Fan Y, Feng S, Lu Q, Yu Q, Sui D, Rothbart MK, Fan M, Posner MI

Recent studies suggest that months to years of intensive and systematic meditation training can improve attention. However, the lengthy training required has made it difficult to use random assignment of participants to conditions to confirm these findings. This article shows that a group randomly assigned to 5 days of meditation practice with the integrative body-mind training method shows significantly better attention and control of stress than a similarly chosen control group given relaxation training. The training method comes from traditional Chinese medicine and incorporates aspects of other meditation and mindfulness training. Compared with the control group, the experimental group of 40 undergraduate Chinese students given 5 days of 20-min integrative training showed greater improvement in conflict scores on the Attention Network Test, lower anxiety, depression, anger, and fatigue, and higher vigor on the Profile of Mood States scale, a significant decrease in stress-related cortisol, and an increase in immunoreactivity. These results provide a convenient method for studying the influence of meditation training by using experimental and control methods similar to those used to test drugs or other interventions.

A brain-computer interface with vibrotactile biofeedback for haptic information

A brain-computer interface with vibrotactile biofeedback for haptic information.

J Neuroengineering Rehabil. 2007 Oct 17;4(1):40

Authors: Chatterjee A, Aggarwal V, Ramos A, Acharya S, Thakor NV

ABSTRACT: BACKGROUND: It has been suggested that Brain-Computer Interfaces (BCI) may one day be suitable for controlling a neuroprosthesis. For closed-loop operation of BCI, a tactile feedback channel that is compatible with neuroprosthetic applications is desired. Operation of an EEG-based BCI using only vibrotactile feedback, a commonly used method to convey haptic senses of contact and pressure, is demonstrated with a high level of accuracy. METHODS: A Mu-rhythm based BCI using a motor imagery paradigm was used to control the position of a virtual cursor. The cursor position was shown visually as well as transmitted haptically by modulating the intensity of a vibrotactile stimulus to the upper limb. A total of six subjects operated the BCI in a two-stage targeting task, receiving only vibrotactile biofeedback of performance. The location of the vibration was also systematically varied between the left and right arms to investigate location-dependent effects on performance. RESULTS AND CONCLUSIONS: Subjects are able to control the BCI using only vibrotactile feedback with an average accuracy of 56% and as high as 72%. These accuracies are significantly higher than the 15% predicted by random chance if the subject had no voluntary control of their Mu-rhythm. The results of this study demonstrate that vibrotactile feedback is an effective biofeedback modality to operate a BCI using motor imagery. In addition, the study shows that placement of the vibrotactile stimulation on the biceps ipsilateral or contralateral to the motor imagery introduces a significant bias in the BCI accuracy. This bias is consistent with a drop in performance generated by stimulation of the contralateral limb. Users demonstrated the capability to overcome this bias with training.

Interactive Multimodal Biofeedback System for Neurorehabilitation

Novel Design of Interactive Multimodal Biofeedback System for Neurorehabilitation.

Conf Proc IEEE Eng Med Biol Soc. 2006;1:4925-4928

Authors: Huang H, Chen Y, Xu W, Sundaram H, Olson L, Ingalls T, Rikakis T, He J

A previous design of a biofeedback system for neurorehabilitation in an interactive multimodal environment demonstrated the potential of engaging stroke patients in task-oriented neuromotor rehabilitation. This report explores a new concept and alternative designs for multimedia-based biofeedback systems. In this system, the new interactive multimodal environment was constructed with an abstract presentation of movement parameters: scenery images or pictures, and their clarity and orientation, are used to reflect the arm movement and its position relative to the target, instead of an animated arm. The multiple biofeedback parameters were classified into hierarchical levels with respect to the importance of each movement parameter to performance. A new quantified measurement for these parameters was developed to assess the patient's performance both in real time and offline. The parameters were represented by combined visual and auditory presentations using various distinct musical instruments. Overall, the objective of the newly designed system is to explore what information to feed back, and how, in an interactive virtual environment so as to enhance the sensorimotor integration that may facilitate the efficient design and application of virtual environment based therapeutic intervention.