
Sep 20, 2010

XWave: Control your iPhone with your brain

The XWave is a new device that uses a single electrode placed on the wearer’s forehead to measure electroencephalography (EEG) data and converts the analog signals into digital form so they can be used to control an external device. The XWave comes bundled with software that includes a number of brain-training exercises, such as levitating a ball on the iDevice’s screen, changing a color based on your brain’s relaxation level, and training your brain to maximize its attention span.


In the company’s own words:

XWave, powered by NeuroSky eSense patented technologies, senses the faintest electrical impulses transmitted through your skull to the surface of your forehead and converts these analogue signals into digital. With XWave, you will be able to detect attention and meditation levels, as well as train your mind to control things. Objects in a game can be controlled, lights in your living room can change colour depending on your mood; the possibilities are limited to only the power of your imagination.

The interesting feature is that the company is even opening up its API so developers can design and build apps for the XWave. The company reports that apps already in development include games in which objects are controlled by the wearer’s mind, and another that lets the wearer control the lights in their home or select music based on their mood. You can order an XWave for US$100; it ships on November 1.
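
For developers curious what building on that API might involve: NeuroSky-based devices such as the XWave expose their computed eSense attention and meditation values as a stream of JSON messages (NeuroSky's ThinkGear Socket Protocol serves them over TCP, conventionally on localhost port 13854). The Python sketch below is a minimal, hypothetical reader under those assumptions; the handshake and field names should be checked against NeuroSky's own documentation.

import json
import socket

# Hypothetical sketch: read eSense attention/meditation values from a
# ThinkGear-style JSON-over-TCP stream. Host, port, handshake and field
# names are assumptions modeled on NeuroSky's ThinkGear Socket Protocol.
HOST, PORT = "127.0.0.1", 13854

with socket.create_connection((HOST, PORT)) as sock:
    # Ask the connector for JSON output without raw EEG samples.
    sock.sendall(json.dumps({"enableRawOutput": False, "format": "Json"}).encode())
    buf = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buf += data
        while b"\r" in buf:  # messages arrive as \r-delimited JSON objects
            line, buf = buf.split(b"\r", 1)
            if not line.strip():
                continue
            esense = json.loads(line).get("eSense", {})
            if esense:
                print("attention:", esense.get("attention"),
                      "meditation:", esense.get("meditation"))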


Dec 20, 2009

Head Chaise: Couching One's Thoughts into a Brain Wave Sofa

From Scientific American

Two European designers, Dries Verbruggen and Lucas Maassen, used their alpha waves as a source of inspiration for their design work, resulting in a piece of furniture: the Brainwave Sofa.

“The process is a wink to a rather futuristic design process,” the couch creators wrote in a press release, “for which a designer merely has to close his or her eyes, or merely rest, to have the brain do all the work, and create the data needed to have the CNC machine cut the shape of the sofa.”

The x-axis of the couch represents Maassen’s brain waves in hertz, while the y-axis shows the amount of alpha activity as a percentage, and the z-axis is the time in milliseconds. Once the foam core of the sofa was completed, the designers covered it by hand in soft gray felt and decorated the valleys of the brain waves with buttons.
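
The alpha percentage on the y-axis is straightforward to compute from raw EEG. Here is a minimal Python sketch of the kind of time series the designers could have fed to the CNC machine; the band limits and window length are illustrative assumptions, not their actual settings.

import numpy as np

def alpha_percentage(eeg, fs, win_sec=1.0):
    # Relative alpha-band (8-12 Hz) power per window, as a percentage of
    # 1-40 Hz power. eeg is a 1-D array of samples, fs the sampling rate.
    win = int(win_sec * fs)
    out = []
    for start in range(0, len(eeg) - win + 1, win):
        seg = eeg[start:start + win]
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        power = np.abs(np.fft.rfft(seg - seg.mean())) ** 2
        alpha = power[(freqs >= 8) & (freqs <= 12)].sum()
        total = power[(freqs >= 1) & (freqs <= 40)].sum()
        out.append(100.0 * alpha / total)
    return np.array(out)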

The Brainwave Sofa was presented at the Bits ‘n Pieces Exhibition in New York.


Nov 17, 2009

Post-doc position in Brain-Computer Interface research

A post-doc position (funded for 3+ years) is available immediately in the field of Brain-Computer Interface/Neural Engineering research.

The successful candidate will be part of the Brain-Computer Interface (BCI) Research and Development Program at the Wadsworth Center in Albany, New York. The research will primarily involve the use of signals recorded from the surface of the brain in humans (electrocorticography, ECoG) to decode specific aspects of human cognition or behavior, and to use these decoded signals for communication or control. The goal of this project is to build a system that extracts and uses these signals in real time.

This real-time implementation will be based on our BCI2000 system (http://www.bci2000.org), which has become the standard in the field of BCI research and has already been provided to about 500 laboratories around the world.

Required expertise includes a solid background in signal processing (in particular time-series/spectral analyses), classification, and machine learning, as well as excellent programming skills in Matlab. Please do not apply if you do not have this expertise. Additional desired expertise includes a solid understanding of neuroscience, in particular of ECoG signals related to attentional/intentional/motor systems, and programming experience in C/C++.

Applicants should send a CV, a brief statement of background and goals, and two reference letters to Dr. Gerwin Schalk (http://www.wadsworth.org/resnres/bios/schalk.htm) at schalk@wadsworth.org. Review of applications will start immediately and continue until the position is filled.

The BCI program at the Wadsworth Center is recognized worldwide for its EEG- and ECoG-based BCI research. The Wadsworth Center has been named one of the "Best Places to Work for Postdocs" and one of the "Best Places to Work in Academia" by The Scientist magazine.

Contacts:

Gerwin Schalk, Ph.D.
Research Scientist V
Wadsworth Center, NYS Dept. of Health
Dept. of Neurology, Albany Medical College
Dept. of Neurosurgery, Washington Univ. in St. Louis
Dept. of Biomed. Eng., Rensselaer Polytechnic Institute
Dept. of Biomed. Sci., State Univ. of New York at Albany
C650 Empire State Plaza
Albany, New York 12201
phone (518) 486-2559
fax (518) 486-4910
e-mail schalk@wadsworth.org

Oct 02, 2009

Natural wheelchair control

Have a look at this demo of an electric wheelchair controlled with an Emotiv EEG/EMG headset. The control system, developed by Cuitech, detects when the user winks or smiles and translates these signals into commands for the wheelchair.

Jul 06, 2009

Thought-controlled wheelchairs

Via Sentient Development

The BSI-Toyota Collaboration Center (BTCC) is developing a wheelchair that can be navigated in real time with brain waves. The brain-controlled device adjusts itself to the characteristics of each individual user, improving the efficiency with which it senses the driver's commands. This way, the driver can get the system to learn his or her commands (forward/right/left) quickly and efficiently; the system boasts an accuracy rate of 95%.

Jun 24, 2009

Neurofeedback-based motor imagery training for brain-computer interface

Neurofeedback-based motor imagery training for brain-computer interface (BCI).

J Neurosci Methods. 2009 Apr 30;179(1):150-6

Authors: Hwang HJ, Kwon K, Im CH

In the present study, we propose a neurofeedback-based motor imagery training system for EEG-based brain-computer interface (BCI). The proposed system can help individuals get the feel of motor imagery by presenting them with real-time brain activation maps on their cortex. Ten healthy participants took part in our experiment, half of whom were trained by the suggested training system and the others did not use any training. All participants in the trained group succeeded in performing motor imagery after a series of trials to activate their motor cortex without any physical movements of their limbs. To confirm the effect of the suggested system, we recorded EEG signals for the trained group around sensorimotor cortex while they were imagining either left or right hand movements according to our experimental design, before and after the motor imagery training. For the control group, we also recorded EEG signals twice without any training sessions. The participants' intentions were then classified using a time-frequency analysis technique, and the results of the trained group showed significant differences in the sensorimotor rhythms between the signals recorded before and after training. Classification accuracy was also enhanced considerably in all participants after motor imagery training, compared to the accuracy before training. On the other hand, the analysis results for the control EEG data set did not show consistent increment in both the number of meaningful time-frequency combinations and the classification accuracy, demonstrating that the suggested system can be used as a tool for training motor imagery tasks in BCI applications. Further, we expect that the motor imagery training system will be useful not only for BCI applications, but for functional brain mapping studies that utilize motor imagery tasks as well.
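
The standard recipe behind this kind of left/right motor-imagery classification is to band-pass the sensorimotor rhythm and feed per-channel log-variance features to a linear classifier. The Python sketch below illustrates that generic pipeline; it is not the authors' time-frequency method, and the band, channels, and classifier are illustrative assumptions.

import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def mu_logvar_features(trials, fs, band=(8.0, 13.0)):
    # Log-variance of mu-band-filtered EEG, one feature per channel.
    # trials: (n_trials, n_channels, n_samples), e.g. C3/C4 recordings
    # over sensorimotor cortex. Lower variance = stronger ERD.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.log(filtered.var(axis=-1))

# Usage with simulated data (real trials would come from the recordings):
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2, 500))   # 40 trials, C3/C4, 2 s at 250 Hz
y = np.repeat([0, 1], 20)               # 0 = left hand, 1 = right hand
clf = LinearDiscriminantAnalysis().fit(mu_logvar_features(X, fs=250), y)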

May 25, 2009

4th XVR Workshop & Joint PRESENCCIA and SKILLS PhD Symposium

PRESENCCIA and SKILLS are two integrated projects that both aim to advance Virtual Reality technology. These projects are highly interdisciplinary, encompassing, among others, computer science, robotics, engineering, interaction design, cognitive science, neuroscience, psychology and philosophy. All these fields, however diverse their interests, come together in the goal of integrating human interaction in mixed and virtual reality environments in order to enhance the user’s experience and enable them to act and interact in a natural and familiar way by means of enactive paradigms.

The most interesting, challenging and useful digital environments are social, focusing on supporting (group) interaction between real people and other remote people, or between real people and virtual people. The aim is to understand, track and give appropriate feedback in verbal, non-verbal and implicit interactions, while also making digital content more believable and intelligent.

Likewise, a number of methods need to be developed allowing users of virtual environments to not only perform actions effectively in a variety of different scenarios but also be able to choose from a repertoire of suitable actions. This requires adequate digital representations of human skills and also techniques to capture, interpret and deliver them by means of multimodal interfaces, robotics and virtual environments within enactive interaction paradigms.

At the low-level end of the spectrum, we also aim to understand the neural basis of presence and the brain's response to it; its enhancement and application are the fundamental object of study from many different points of view, including visual, haptic and auditory modalities.

To participate in the Workshop, please register online:
http://www.percro.org/registrationXVR2009/

Keynote Speakers

Salvatore Maria Aglioti, Psychology Department, Università di Roma “La Sapienza” and IRCCS Fondazione Santa Lucia, Roma (http://w3.uniroma1.it/aglioti/)


Flesh made Soul: Bodies in the Brain.

Talking about the body implies talking about the very “special object” that allows a deep interconnection between the ability to have self-consciousness and the ability to experience a world of objects. My talk will be based on the studies of healthy and brain-damaged subjects that we have performed over the past fifteen years on the neural representation of the body. I will put forward the idea that, although trivially made of flesh, blood and bones, the body can be considered the “psychic object” par excellence, which mediates and implements a variety of complex functions, ranging from the notion of self to social interactions and negotiations.

Jan Peters, Dept. Empirical Inference, Max-Planck-Institute for Biological Cybernetics, Tuebingen, Germany (http://www-clmc.usc.edu/~jrpeters/)

Towards Motor Skill Learning in Robotics.

Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only few methods manage to scale to high-dimensional manipulator or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches. We propose new, task-appropriate architectures, such as the Natural Actor-Critic and the PoWER algorithm.

Feb 16, 2009

Improving the performance of brain-computer interface through meditation

Improving the performance of brain-computer interface through meditation practicing.

Conf Proc IEEE Eng Med Biol Soc. 2008;1:662-5

Authors: Eskandari P, Erfanian A

Cognitive tasks using motor imagery have been used for generating and controlling EEG activity in most brain-computer interface (BCI). Nevertheless, during the performance of a particular mental task, different factors such as concentration, attention, level of consciousness and the difficulty of the task, may be affecting the changes in the EEG activity. Accordingly, training the subject to consistently and reliably produce and control the changes in the EEG signals is a critical issue in developing a BCI system. In this work, we used meditation practice to enhance the mind controllability during the performance of a mental task in a BCI system. The mental states to be discriminated are the imaginative hand movement and the idle state. The experiments were conducted on two groups of subject, meditation group and control group. The time-frequency analysis of EEG signals for meditation practitioners showed an event-related desynchronization (ERD) of beta rhythm before imagination during resting state. In addition, a strong event-related synchronization (ERS) of beta rhythm was induced in frequency around 25 Hz during hand motor imagery. The results demonstrated that the meditation practice can improve the classification accuracy of EEG patterns. The average classification accuracy was 88.73% in the meditation group, while it was 70.28% in the control group. An accuracy as high as 98.0% was achieved in the meditation group.
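
For reference, the ERD/ERS values the abstract reports are conventionally quantified as the percentage band-power change of an event interval relative to a resting reference (Pfurtscheller's classic measure). A minimal sketch, with illustrative numbers rather than the paper's data:

def erd_percent(power_ref, power_event):
    # Classic ERD/ERS measure: percentage band-power change relative to a
    # resting reference interval. Negative = desynchronization (ERD),
    # positive = synchronization (ERS).
    return 100.0 * (power_event - power_ref) / power_ref

# e.g. beta power dropping from 4.0 to 2.5 (arbitrary units) is -37.5% (ERD)
print(erd_percent(4.0, 2.5))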

Jan 20, 2009

Functional network reorganization during learning in a brain-computer interface paradigm

Functional network reorganization during learning in a brain-computer interface paradigm.

Proc Natl Acad Sci U S A. 2008 Dec 1;

Authors: Jarosiewicz B, Chase SM, Fraser GW, Velliste M, Kass RE, Schwartz AB

Efforts to study the neural correlates of learning are hampered by the size of the network in which learning occurs. To understand the importance of learning-related changes in a network of neurons, it is necessary to understand how the network acts as a whole to generate behavior. Here we introduce a paradigm in which the output of a cortical network can be perturbed directly and the neural basis of the compensatory changes studied in detail. Using a brain-computer interface, dozens of simultaneously recorded neurons in the motor cortex of awake, behaving monkeys are used to control the movement of a cursor in a three-dimensional virtual-reality environment. This device creates a precise, well-defined mapping between the firing of the recorded neurons and an expressed behavior (cursor movement). In a series of experiments, we force the animal to relearn the association between neural firing and cursor movement in a subset of neurons and assess how the network changes to compensate. We find that changes in neural activity reflect not only an alteration of behavioral strategy but also the relative contributions of individual neurons to the population error signal.
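
The "precise, well-defined mapping" between firing rates and cursor movement in such experiments is commonly a population-vector-style decoder: each neuron votes for its preferred direction, weighted by its normalized firing rate. A minimal Python sketch of that idea follows; the authors' calibrated decoder differs in detail.

import numpy as np

def population_vector(rates, baselines, mod_depths, pref_dirs):
    # rates, baselines, mod_depths: (n,) per-neuron firing statistics;
    # pref_dirs: (n, 3) unit preferred-direction vectors. Returns a 3-D
    # cursor velocity. Rotating the pref_dirs attributed to a subset of
    # neurons perturbs the mapping, which the animal must then relearn.
    w = (rates - baselines) / mod_depths       # normalized firing rates
    return (w[:, None] * pref_dirs).sum(axis=0) / len(rates)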

Nov 04, 2008

Brain Controlled Cell Phones

Via Textually.org

NeuroSky Inc., a venture company based in San Jose, Calif., has prototyped a system that reads brain waves with a sensor and uses them for mobile phone applications.

Software algorithms try to deduce from your brainwaves what you are thinking and pass on the appropriate commands to the cell phone.


Jul 08, 2008

New BCI system for gaming applications

Emotiv Systems has developed a new brain-computer interface headset for video games and other uses. Emotiv president Tan Le says the headset will go on sale around the end of this year for $299.

Mar 03, 2008

Brain-computer interfaces in the continuum of consciousness

Brain-computer interfaces in the continuum of consciousness.

Curr Opin Neurol. 2007 Dec;20(6):643-9

Authors: Kübler A, Kotchoubey B

PURPOSE OF REVIEW: To summarize recent developments and look at important future aspects of brain-computer interfaces. RECENT FINDINGS: Recent brain-computer interface studies are largely targeted at helping severely or even completely paralysed patients. The former are only able to communicate yes or no via a single muscle twitch, and the latter are totally nonresponsive. Such patients can control brain-computer interfaces and use them to select letters, words or items on a computer screen, for neuroprosthesis control or for surfing the Internet. This condition of motor paralysis, in which cognition and consciousness appear to be unaffected, is traditionally opposed to nonresponsiveness due to disorders of consciousness. Although these groups of patients may appear to be very alike, numerous transition states between them are demonstrated by recent studies. SUMMARY: All nonresponsive patients can be regarded on a continuum of consciousness which may vary even within short time periods. As overt behaviour is lacking, cognitive functions in such patients can only be investigated using neurophysiological methods. We suggest that brain-computer interfaces may provide a new tool to investigate cognition in disorders of consciousness, and propose a hierarchical procedure entailing passive stimulation, active instructions, volitional paradigms, and brain-computer interface operation.

Jan 23, 2008

Using brain-computer communication to navigate virtual environments

Brain-computer communication: motivation, aim, and impact of exploring a virtual apartment.

IEEE Trans Neural Syst Rehabil Eng. 2007 Dec;15(4):473-82

Authors: Leeb R, Lee F, Keinrath C, Scherer R, Bischof H, Pfurtscheller G

The step away from a synchronized or cue-based brain-computer interface (BCI) and from laboratory conditions towards real world applications is very important and crucial in BCI research. This work shows that ten naive subjects can be trained in a synchronous paradigm within three sessions to navigate freely through a virtual apartment, whereby at every junction the subjects could decide by their own, how they wanted to explore the virtual environment (VE). This virtual apartment was designed similar to a real world application, with a goal-oriented task, a high mental workload, and a variable decision period for the subject. All subjects were able to perform long and stable motor imagery over a minimum time of 2 s. Using only three electroencephalogram (EEG) channels to analyze these imaginations, we were able to convert them into navigation commands. Additionally, it could be demonstrated that motivation is a very crucial factor in BCI research; motivated subjects perform much better than unmotivated ones.

Dec 22, 2007

Towards an independent brain-computer interface using steady state visual evoked potentials

Towards an independent brain-computer interface using steady state visual evoked potentials.

Clin Neurophysiol. 2007 Dec 10;

Authors: Allison BZ, McFarland DJ, Schalk G, Zheng SD, Jackson MM, Wolpaw JR

OBJECTIVE: Brain-computer interface (BCI) systems using steady state visual evoked potentials (SSVEPs) have allowed healthy subjects to communicate. However, these systems may not work in severely disabled users because they may depend on gaze shifting. This study evaluates the hypothesis that overlapping stimuli can evoke changes in SSVEP activity sufficient to control a BCI. This would provide evidence that SSVEP BCIs could be used without shifting gaze. METHODS: Subjects viewed a display containing two images that each oscillated at a different frequency. Different conditions used overlapping or non-overlapping images to explore dependence on gaze function. Subjects were asked to direct attention to one or the other of these images during each of 12 one-minute runs. RESULTS: Half of the subjects produced differences in SSVEP activity elicited by overlapping stimuli that could support BCI control. In all remaining users, differences did exist at corresponding frequencies but were not strong enough to allow effective control. CONCLUSIONS: The data demonstrate that SSVEP differences sufficient for BCI control may be elicited by selective attention to one of two overlapping stimuli. Thus, some SSVEP-based BCI approaches may not depend on gaze control. The nature and extent of any BCI's dependence on muscle activity is a function of many factors, including the display, task, environment, and user. SIGNIFICANCE: SSVEP BCIs might function in severely disabled users unable to reliably control gaze. Further research with these users is necessary to explore the optimal parameters of such a system and validate online performance in a home environment.
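
A common way to score the attended stimulus in an SSVEP design like this is canonical correlation against sine/cosine references at each flicker frequency and its harmonics; whichever frequency yields the higher correlation is taken as the attended one. A generic Python sketch of that detector (not the analysis used in the paper):

import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_score(eeg, fs, freq, n_harmonics=2):
    # Canonical correlation between a multichannel EEG window
    # (n_samples, n_channels) and sine/cosine references at a stimulus
    # frequency and its harmonics.
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, n_harmonics + 1)
                            for f in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit(eeg, refs).transform(eeg, refs)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# Pick the attended stimulus by comparing scores at the two frequencies:
# attended = max((7.5, 12.0), key=lambda f: ssvep_score(win, 250, f))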

Dec 08, 2007

Self-initiation of EEG-based brain-computer communication using the heart rate response

Self-initiation of EEG-based brain-computer communication using the heart rate response.

J Neural Eng. 2007 Dec;4(4):L23-9

Authors: Scherer R, Müller-Putz GR, Pfurtscheller G

Self-initiation, that is the ability of a brain-computer interface (BCI) user to autonomously switch on and off the system, is a very important issue. In this work we analyze whether the respiratory heart rate response, induced by brisk inspiration, can be used as an additional communication channel. After only 20 min of feedback training, ten healthy subjects were able to self-initiate and operate a 4-class steady-state visual evoked potential-based (SSVEP) BCI by using one bipolar ECG and one bipolar EEG channel only. Threshold detection was used to measure a beat-to-beat heart rate increase. Despite this simple method, during a 30 min evaluation period on average only 2.9 non-intentional switches (heart rate changes) were detected.
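
The threshold detection described here reduces to watching the beat-to-beat heart rate, derived from successive RR intervals, for a sudden relative increase. A minimal Python sketch, where the 15% threshold is an illustrative assumption rather than the paper's calibrated value:

import numpy as np

def heart_rate_switch(rr_intervals_s, rel_increase=0.15):
    # Flag beats where the instantaneous heart rate (from successive RR
    # intervals, in seconds) jumps by more than rel_increase, e.g. after
    # a brisk inspiration. Returns the indices of triggering beats.
    hr = 60.0 / np.asarray(rr_intervals_s)     # beats per minute
    jumps = (hr[1:] - hr[:-1]) / hr[:-1]
    return np.flatnonzero(jumps > rel_increase) + 1

# RR intervals shortening sharply at beat 3 trigger the switch:
print(heart_rate_switch([0.85, 0.84, 0.86, 0.65, 0.66]))   # -> [3]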

Nov 25, 2007

Brain2Robot

Researchers at the Fraunhofer Institute for Computer Architecture and Software Technology FIRST and the Charité hospital in Berlin have developed a new EEG-controlled robot arm, which might one day bring help to people with paralysis.

Electrodes attached to the patient's scalp measure the brain's electrical signals, which are amplified and transmitted to a computer. Highly efficient algorithms analyze these signals using a self-learning technique. The software is capable of detecting changes in brain activity that take place even before a movement is carried out. It can recognize and distinguish between the patterns of signals that correspond to an intention to raise the left or right hand, and extract them from the pulses being fired by millions of other neurons in the brain. These neural signal patterns are then converted into control instructions for the computer.

"The aim of the project is to help people with severe motor disabilities to carry out everyday tasks. The advantage of our technology is that it is capable of translating an intended action directly into instructions for the computer," says team leader Florin Popescu.

The Brain2Robot project has been granted around 1.3 million euros in research funding under the EU's Sixth Framework Programme (FP6). Its focus lies on developing medical applications, in particular control systems for prosthetics, personal robots and wheelchairs. The researchers have also developed a "thought-controlled typewriter", a communication device that enables severely paralyzed patients to pick out letters of the alphabet and write texts. The robot arm could be ready for commercialization in just a few years' time.

Press release: Brain2Robot

Project page: Brain2Robot

Nov 18, 2007

Virtual reality hardware and graphic display options for brain-machine interfaces

Virtual reality hardware and graphic display options for brain-machine interfaces.

J Neurosci Methods. 2007 Sep 29;

Authors: Marathe AR, Carey HL, Taylor DM

Virtual reality hardware and graphic displays are reviewed here as a development environment for brain-machine interfaces (BMIs). Two desktop stereoscopic monitors and one 2D monitor were compared in a visual depth discrimination task and in a 3D target-matching task where able-bodied individuals used actual hand movements to match a virtual hand to different target hands. Three graphic representations of the hand were compared: a plain sphere, a sphere attached to the fingertip of a realistic hand and arm, and a stylized pacman-like hand. Several subjects had great difficulty using either stereo monitor for depth perception when perspective size cues were removed. A mismatch in stereo and size cues generated inappropriate depth illusions. This phenomenon has implications for choosing target and virtual hand sizes in BMI experiments. Target-matching accuracy was about as good with the 2D monitor as with either 3D monitor. However, users achieved this accuracy by exploring the boundaries of the hand in the target with carefully controlled movements. This method of determining relative depth may not be possible in BMI experiments if movement control is more limited. Intuitive depth cues, such as including a virtual arm, can significantly improve depth perception accuracy with or without stereo viewing.
