
May 07, 2012

CFP – Brain Computer Interfaces Grand Challenge 2012


(From the CFP website)

Sensors, such as wireless EEG caps, that provide information about brain activity are becoming available for use outside the medical domain. As with other physiological sensors, the information derived from these devices can be used as a source for interpreting the user’s activity and intentions. For example, a user can issue commands through his or her brain activity by using motor imagery. But this control-oriented interaction is unreliable and inefficient compared to other available interaction modalities. Moreover, a user needs to behave as if almost paralyzed (sit completely still) to generate artifact-free brain activity that can be recognized by the Brain-Computer Interface (BCI).

Of course, BCI systems are improving in various ways: improved sensors, better recognition techniques, software that is more usable, natural, and context-aware, and hybridization with physiological sensors and other communication systems. New applications, such as motor recovery and entertainment, appear on the horizon and are being explored. Testing and validation with target users in home settings is becoming more common. These and other developments are making BCIs increasingly practical for their target users (persons with severe motor disabilities) as well as for non-disabled users. But despite this progress, BCIs remain, as a control interface, quite limited in real-world settings. BCIs are slow and unreliable, particularly over extended periods with target users. BCIs require expert assistance in many ways; a typical end user today needs help to identify, buy, set up, configure, maintain, repair and upgrade the BCI. User-centered design is underappreciated, with BCIs meeting the goals and abilities of the designer rather than the user. Integration into people’s daily lives is just beginning. One reason this integration is problematic is the prevailing view of the BCI as a control device, which stems largely from the origin of BCI as a control mechanism for severely physically disabled people.

In this challenge (organised within the framework of the Call for Challenges at ICMI 2012), we propose to change this viewpoint and instead consider the BCI as an intelligent sensor, similar to a microphone or camera, which can be used in multimodal interaction. A typical example of using a BCI for the sonification of brain signals is Staalhemel, an exposition created by Christoph de Boeck. Staalhemel is an interactive installation with 80 steel segments suspended over the visitor’s head as he walks through the space. Tiny hammers tap rhythmic patterns on the steel plates, activated by the brainwaves of the visitor, who wears a portable BCI (EEG scanner). Thus, visitors directly interact with their surroundings, in this case an artistic installation.

The main challenges to research and develop BCIs as intelligent sensors include but are not limited to:

  • How could BCIs as intelligent sensors be integrated in multimodal HCI, HRI and HHI applications alongside other modes of input control?
  • What constitutes appropriate categories of adaptation (to challenge, to help, to promote positive emotion) in response to physiological data?
  • What are the added benefits of this approach for the user experience of HCI, HRI and HHI, in terms of performance, safety and health?
  • How to present the state of the user in the context of HCI or HRI (representation to a machine) compared to HHI (representation to the self or another person)?
  • How to design systems that promote trust in the system and protect the privacy of the user?
  • What constitutes opportune support for BCIs as intelligent sensors? In other words, how can the interface adapt to information about the user such that the user feels supported rather than distracted?
  • What is the user experience of HCI, HRI and HHI enhanced through BCIs as intelligent sensors?
  • What are the ethical, legal and societal implications of such technologies? And how can we address these issues in a timely manner?

We solicit papers, demonstrators, videos or design descriptions of possible demonstrators that address the above challenges. Demonstrators and videos should be accompanied by a paper explaining the design. Descriptions of possible demonstrators can be presented through a poster.
Accepted papers will be included in the ICMI conference proceedings, which will be published by ACM as part of its series of International Conference Proceedings. As such, the ICMI proceedings will have an ISBN, and each paper will be assigned a unique DOI and URL. Moreover, all accepted papers will be included in the ACM Digital Library.

Important dates

Deadline for submission: June 15, 2012
Notification of acceptance: July 7, 2012
Final paper: August 15, 2012

Grand Challenge Website:




Mind-controlled robot allows a quadriplegic patient to move virtually in space

Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) have successfully demonstrated a robot controlled by the mind of a partially quadriplegic patient in a hospital 62 miles away. The EPFL brain-computer interface system does not require invasive neural implants, since it is based on a special EEG cap fitted with electrodes that record the patient’s neural signals. The patient’s task is to imagine moving his paralyzed fingers; this input is then translated by the BCI system into commands for the robot.

Apr 20, 2012

Mental workload during brain-computer interface training

Mental workload during brain-computer interface training.

Ergonomics. 2012 Apr 16;

Authors: Felton EA, Williams JC, Vanderheiden GC, Radwin RG

Abstract. It is not well understood how people perceive the difficulty of performing brain-computer interface (BCI) tasks, which specific aspects of mental workload contribute the most, and whether there is a difference in perceived workload between participants who are able-bodied and disabled. This study evaluated mental workload using the NASA Task Load Index (TLX), a multi-dimensional rating procedure with six subscales: Mental Demands, Physical Demands, Temporal Demands, Performance, Effort, and Frustration. Able-bodied and motor disabled participants completed the survey after performing EEG-based BCI Fitts' law target acquisition and phrase spelling tasks. The NASA-TLX scores were similar for able-bodied and disabled participants. For example, overall workload scores (range 0-100) for 1D horizontal tasks were 48.5 (SD = 17.7) and 46.6 (SD = 10.3), respectively. The TLX can be used to inform the design of BCIs that will have greater usability by evaluating subjective workload between BCI tasks, participant groups, and control modalities. Practitioner Summary: Mental workload of brain-computer interfaces (BCI) can be evaluated with the NASA Task Load Index (TLX). The TLX is an effective tool for comparing subjective workload between BCI tasks, participant groups (able-bodied and disabled), and control modalities. The data can inform the design of BCIs that will have greater usability.
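For readers unfamiliar with the instrument: the overall TLX workload score quoted above is a weighted average of the six subscale ratings, with weights derived from pairwise comparisons between subscales. A minimal sketch of that standard scoring procedure (the ratings and weights below are illustrative, not data from the study):

```python
# NASA-TLX overall workload: weighted average of six subscale ratings.
# Ratings are on a 0-100 scale; weights come from 15 pairwise comparisons
# (each subscale can be chosen 0-5 times, so the weights sum to 15).

SUBSCALES = ["Mental Demands", "Physical Demands", "Temporal Demands",
             "Performance", "Effort", "Frustration"]

def tlx_overall(ratings, weights):
    """Weighted overall TLX score in the range [0, 100]."""
    if set(ratings) != set(SUBSCALES) or set(weights) != set(SUBSCALES):
        raise ValueError("ratings and weights must cover all six subscales")
    total_weight = sum(weights.values())  # 15 in the standard procedure
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / total_weight

# Illustrative values only (not taken from the Felton et al. study):
ratings = {"Mental Demands": 70, "Physical Demands": 10, "Temporal Demands": 40,
           "Performance": 30, "Effort": 60, "Frustration": 50}
weights = {"Mental Demands": 5, "Physical Demands": 0, "Temporal Demands": 2,
           "Performance": 3, "Effort": 4, "Frustration": 1}
print(round(tlx_overall(ratings, weights), 1))  # prints 54.0
```

Comparing such scores between BCI tasks or participant groups is exactly the kind of analysis the abstract describes.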

Mar 31, 2012

Barriers to and mediators of brain-computer interface user acceptance

Barriers to and mediators of brain-computer interface user acceptance: focus group findings.

Ergonomics. 2012 Mar 29

Authors: Blain-Moraes S, Schaff R, Gruis KL, Huggins JE, Wren PA

Abstract. Brain-computer interfaces (BCI) are designed to enable individuals with severe motor impairments such as amyotrophic lateral sclerosis (ALS) to communicate and control their environment. A focus group was conducted with individuals with ALS (n=8) and their caregivers (n=9) to determine the barriers to and mediators of BCI acceptance in this population. Two key categories emerged: personal factors and relational factors. Personal factors, which included physical, physiological and psychological concerns, were less important to participants than relational factors, which included corporeal, technological and social relations with the BCI. The importance of these relational factors was analysed with respect to published literature on actor-network theory (ANT) and disability, and concepts of voicelessness and personhood. Future directions for BCI research are recommended based on the emergent focus group themes. Practitioner Summary: This manuscript explores human factor issues involved in designing and evaluating brain-computer interface (BCI) systems for users with severe motor disabilities. Using participatory research paradigms and qualitative methods, this work draws attention to personal and relational factors that act as barriers to, or mediators of, user acceptance of this technology.

Mar 11, 2012

Augmenting cognition: old concept, new tools

The increasing miniaturization and computing power of information technology devices allow new ways of interaction between human brains and computers, progressively blurring the boundaries between man and machine. An example is provided by brain-computer interface systems, which allow users to control the behavior of a computer, or of an external device such as a robotic arm, with their brain (in the latter case, we speak of “neuroprosthetics”).


The idea of using information technologies to augment cognition, however, is not new, dating back to the 1950s and 1960s. One of the first to write about this concept was the British psychiatrist William Ross Ashby.

In his Introduction to Cybernetics (1956), he described intelligence as the “power of appropriate selection,” which could be amplified by means of technology in the same way that physical power is amplified. A second major conceptual contribution towards the development of cognitive augmentation was provided a few years later by computer scientist and Internet pioneer Joseph Licklider, in a paper entitled Man-Computer Symbiosis (1960).

In this article, Licklider envisions the development of computer technologies that will enable users “to think in interaction with a computer in the same way that you think with a colleague whose competence supplements your own.” According to his vision, the rise of computer networks would make it possible to connect millions of human minds within a “'thinking center' that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval.” This view represents a departure from the prevailing Artificial Intelligence approach of that time: instead of creating an artificial brain, Licklider focused on the possibility of developing new forms of interaction between humans and information technologies, with the aim of extending human intelligence.

A similar view was proposed in the same years by another computer visionary, Douglas Engelbart, in his famous 1962 article entitled Augmenting Human Intellect: A Conceptual Framework.

In this report, Engelbart defines the goal of intelligence augmentation as “increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble (…) We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human ‘feel for a situation’ usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.”

These “electronic aids” nowadays include all kinds of hardware and software computing devices, used for example to store information in external memories, to process complex data, to perform routine tasks and to support decision making. However, today the concept of cognitive augmentation is not limited to the amplification of human intellectual abilities through external hardware. As recently noted by Nick Bostrom and Anders Sandberg (Sci Eng Ethics 15:311–341, 2009), “What is new is the growing interest in creating intimate links between the external systems and the human user through better interaction. The software becomes less an external tool and more of a mediating ‘‘exoself’’. This can be achieved through mediation, embedding the human within an augmenting ‘‘shell’’ such as wearable computers (…) or virtual reality, or through smart environments in which objects are given extended capabilities” (p. 320).

At the forefront of this trend is neurotechnology, an emerging research and development field which includes technologies that are specifically designed to improve brain function. Examples of neurotechnologies include brain-training games such as BrainAge and programs like Fast ForWord, but also neurodevices used to monitor or regulate brain activity, such as deep brain stimulators (DBS), and smart prosthetics for the replacement of impaired sensory systems (e.g., cochlear or retinal implants).

Clearly, the vision of neurotechnology is not free of issues. The more powerful and sophisticated these technologies become, the more attention should be dedicated to understanding the socio-economic, legal and ethical implications of their applications in various fields, from medicine to neuromarketing.


Jun 05, 2011

Human Computer Confluence

Human Computer Confluence (HC-CO) is an ambitious initiative recently launched by the European Commission under the Future and Emerging Technologies (FET) program, which fosters projects that investigate and demonstrate new possibilities “emerging at the confluence between the human and technological realms” (source: HC-CO website, EU Commission).

Such projects will examine new modalities for individual and group perception, actions and experience in augmented, virtual spaces. In particular, such virtual spaces would span the virtual reality continuum, also extending to purely synthetic but believable representation of massive, complex and dynamic data. HC-CO also fosters inter-disciplinary research (such as Presence, neuroscience, psychophysics, prosthetics, machine learning, computer science and engineering) towards delivering unified experiences and inventing radically new forms of perception/action.

HC-CO brings together ideas stemming from two series of Presence projects (the complete list is available here) with a vision of new forms of interaction and of new types of information spaces to interact with. It will develop the science and technologies necessary to ensure an effective, even transparent, bidirectional communication between humans and computers, which will in turn deliver a huge set of applications: from today's Presence concepts to new senses, to new perceptive capabilities dealing with more abstract information spaces to the social impact of such communication enabling technologies. Inevitably, these technologies question the notion of interface between the human and the technological realm, and thus, also in a fundamental way, put into question the nature of both.

The long-term implications can be profound and need to be considered from an ethical/societal point of view. HC-CO is, however, not a programme on human augmentation. It does not aim to create a super-human. The idea of confluence is to study what can be done by bringing new types of technologically enabled interaction modalities in between the human and a range of virtual (not necessarily naturalistic) realms. Its ambition is to bring our best understanding from human sciences into future and emerging technologies for a new and purposeful human computer symbiosis.

HC-CO is conceptually broken down into the following themes:

  • HC-CO Data. On-line perception and interaction with massive volumes of data: new methods to stimulate and use human sensory perception and cognition to interpret massive volumes of data in real time to enable assimilation, understanding and interaction with informational spaces. Research should find new ways to exploit human factors (sensory, perceptual and cognitive aspects), including the selection of the most effective sensory modalities, for data exploration. Although not explicitly mentioned, non-sensorial pathways, i.e., direct brain to computer and computer to brain communication could be explored.
  • HC-CO Transit. Unified experience, emerging from the unnoticeable transition from physical to augmented or virtual reality: new methods and concepts towards unobtrusive mixed or virtual reality environment (multi-modal displays, tracking systems, virtual representations...), and scenarios to support entirely unobtrusive interaction. Unobtrusiveness also applies to virtual representations, their dynamics, and the feedback received. Here the challenge is both technological and scientific, spanning human cognition, human machine interaction and machine intelligence disciplines.
  • HC-CO Sense. New forms of perception and action: invent and demonstrate new forms of interaction with the real world, virtual models or abstract information by provoking a mapping from an artificial medium to appropriate sensory modalities or brain regions. This research should reinforce data perception and unified experience by augmenting the human interaction capabilities and awareness in virtual spaces.

In sum, HC-CO is an emerging R&D field that holds the potential to revolutionize the way we interact with computers. Standing at the crossroads between cognitive science, computer science and artificial intelligence, HC-CO can provide the cyberpsychology and cybertherapy community with fresh concepts and interesting new tools to apply in both research and clinical domains.

More to explore:

  • HC-CO initiative: The official EU website of the HC-CO initiative, which describes the broad objectives of this emerging research field.
  • HC2 Project: The horizontal character of HC-CO makes it a fascinating and fertile interdisciplinary field, but it can also compromise its growth, with researchers scattered across disciplines and groups worldwide. For this reason, a coordination activity has been created to promote disciplinary connections, identity building and integration, while defining future research, education and policy directions at the regional, national, European and international levels. This project is HC2, a three-year Coordination Action funded by the FP7 FET Proactive scheme. The consortium will draw on a wide network of researchers and stakeholders to achieve four key objectives: a) stimulate, structure and support the research community, promoting identity building; b) consolidate research agendas, with special attention to the interdisciplinary aspects of HC-CO; c) enhance the public understanding of HC-CO and foster early contact between researchers and high-tech SMEs and other industry players; and d) establish guidelines for the definition of new educational curricula to prepare the next generation of HC-CO researchers.
  • CEED Project: Funded by the HC-CO initiative, the Collective Experience of Empathic Data Systems (CEEDs) project aims to develop “novel, integrated technologies to support human experience, analysis and understanding of very large datasets”. CEEDS will develop innovative tools to exploit theories showing that discovery is the identification of patterns in complex data sets by the implicit information processing capabilities of the human brain. Implicit human responses will be identified by the CEEDs system’s analysis of its sensing systems, tuned to users’ bio-signals and non-verbal behaviours. By associating these implicit responses with different features of massive datasets, the CEEDs system will guide users’ discovery of patterns and meaning within the datasets.
  • VERE Project: VERE - Virtual Embodiment and Robotic Re-Embodiment – is another large project funded by the HC-CO initiative, which aims at “dissolving the boundary between the human body and surrogate representations in immersive virtual reality and physical reality”. Dissolving the boundary means that people have the illusion that their surrogate representation is their own body, and act and have thoughts that correspond to this. The work in VERE may be thought of as applied presence research and applied cognitive neuroscience.

May 21, 2011

Brain-controlled bionic hand for ‘elective amputation’ patient

Source: BBC News — May 18, 2011

An Austrian man has voluntarily had his hand amputated so he can be fitted with a bionic hand, which will be controlled by nerve signals in his own arm. The bionic hands, manufactured by the German prosthetics company Otto Bock, can pinch and grasp in response to signals from the brain. The wrist of the prosthesis can be rotated manually using the patient’s other functioning hand.

The patient will control the hand using the same brain signals that previously powered similar movements in the real hand and that will now be picked up by two sensors placed over the skin above nerves in the forearm.

Dec 27, 2010

Brain-computer interface research comes of age: traditional assumptions meet emerging realities

Brain-computer interface research comes of age: traditional assumptions meet emerging realities.

J Mot Behav. 2010 Nov;42(6):351-3

Authors: Wolpaw JR

Brain-computer interfaces (BCIs) could provide important new communication and control options for people with severe motor disabilities. Most BCI research to date has been based on 4 assumptions that: (a) intended actions are fully represented in the cerebral cortex; (b) neuronal action potentials can provide the best picture of an intended action; (c) the best BCI is one that records action potentials and decodes them; and (d) ongoing mutual adaptation by the BCI user and the BCI system is not very important. In reality, none of these assumptions is presently defensible. Intended actions are the products of many areas, from the cortex to the spinal cord, and the contributions of each area change continually as the CNS adapts to optimize performance. BCIs must track and guide these adaptations if they are to achieve and maintain good performance. Furthermore, it is not yet clear which category of brain signals will prove most effective for BCI applications. In human studies to date, low-resolution electroencephalography-based BCIs perform as well as high-resolution cortical neuron-based BCIs. In sum, BCIs allow their users to develop new skills in which the users control brain signals rather than muscles. Thus, the central task of BCI research is to determine which brain signals users can best control, to maximize that control, and to translate it accurately and reliably into actions that accomplish the users' intentions.

Sep 26, 2010

Change in brain activity through virtual reality-based brain-machine communication in a chronic tetraplegic subject with muscular dystrophy

Change in brain activity through virtual reality-based brain-machine communication in a chronic tetraplegic subject with muscular dystrophy.

BMC Neurosci. 2010 Sep 16;11(1):117

Authors: Hashimoto Y, Ushiba J, Kimura A, Liu M, Tomita Y

ABSTRACT: BACKGROUND: For severely paralyzed people, a brain-computer interface (BCI) provides a way of re-establishing communication. Although subjects with muscular dystrophy (MD) appear to be potential BCI users, the actual long-term effects of BCI use on brain activities in MD subjects have yet to be clarified. To investigate these effects, we followed BCI use by a chronic tetraplegic subject with MD over 5 months. The topographic changes in an electroencephalogram (EEG) after long-term use of the virtual reality (VR)-based BCI were also assessed. Our originally developed BCI system was used to classify an EEG recorded over the sensorimotor cortex in real time and estimate the user's motor intention (MI) in 3 different limb movements: feet, left hand, and right hand. An avatar in the internet-based VR was controlled in accordance with the results of the EEG classification by the BCI. The subject was trained to control his avatar via the BCI by strolling in the VR for 1 hour a day and then continued the same training twice a month at his home. RESULTS: After the training, the error rate of the EEG classification decreased from 40% to 28%. The subject successfully walked around in the VR using only his MI and chatted with other users through a voice-chat function embedded in the internet-based VR. With this improvement in BCI control, event-related desynchronization (ERD) following MI was significantly enhanced (p < 0.01) for feet MI (from -29% to -55%), left-hand MI (from -23% to -42%), and right-hand MI (from -22% to -51%). CONCLUSIONS: These results show that our subject with severe MD was able to learn to control his EEG signal and communicate with other users through use of VR navigation and suggest that an internet-based VR has the potential to provide paralyzed people with the opportunity for easy communication.
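The ERD percentages quoted in this abstract (e.g., feet motor imagery going from -29% to -55%) follow the standard band-power convention: ERD% = (A - R) / R × 100, where A is the band power during the task and R the band power during a reference (rest) interval, so more negative values mean stronger desynchronization. A minimal sketch with synthetic power values (not the study's data):

```python
def erd_percent(power_task, power_rest):
    """Event-related desynchronization relative to rest, in percent.
    Negative values indicate a power decrease (desynchronization)."""
    return (power_task - power_rest) / power_rest * 100.0

# Synthetic mu-band power values (arbitrary units, illustrative only):
rest_power = 10.0
before_training = erd_percent(7.1, rest_power)   # about -29 %
after_training = erd_percent(4.5, rest_power)    # about -55 %
print(before_training, after_training)
```

A training-related change like the one reported is then simply a shift of this percentage toward more negative values for the same motor imagery condition.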

Sep 20, 2010

XWave: Control your iPhone with your brain

The XWave is a new device that uses a single electrode placed on the wearer’s forehead to measure electroencephalography (EEG) data and converts these analog signals into digital form so they can be used to control an external device. The XWave comes bundled with software that includes a number of brain-training exercises, such as levitating a ball on the iDevice’s screen, changing a color based on the relaxation level of your brain, and training your brain to maximize its attention span.


In the company’s own words:

XWave, powered by NeuroSky eSense patented technologies, senses the faintest electrical impulses transmitted through your skull to the surface of your forehead and converts these analogue signals into digital. With XWave, you will be able to detect attention and meditation levels, as well as train your mind to control things. Objects in a game can be controlled, lights in your living room can change colour depending on your mood; the possibilities are limited to only the power of your imagination.

The interesting feature is that the company is even opening up its APIs so developers can design and develop apps using the XWave device. The company reports that some apps already in development include games in which objects are controlled by the wearer’s mind, and another that allows the wearer to control the lights in their home or select music based on their mood. You can order an XWave for US$100; it ships on November 1.

Dec 13, 2009

Be a Junior Jedi

USA Today reports about a new device that uses brain waves to allow players to manipulate a sphere within a clear 10-inch-tall training tower, analogous to Yoda and Luke Skywalker's abilities in the Star Wars films. The Force Trainer is expected to be priced at $90 to $100.

(Image: Jedi mind training toy, from the USA Today article.)


Oct 02, 2009

Natural wheelchair control

Have a look at this demo of an electric wheelchair under the control of an Emotiv EEG/EMG headset. The control system, developed by Cuitech, detects when the user winks or smiles and translates these signals into commands to control the wheelchair.

Sep 21, 2009

Neurofeedback-based motor imagery training for brain-computer interface

Neurofeedback-based motor imagery training for brain-computer interface (BCI).

J Neurosci Methods. 2009 Apr 30;179(1):150-6

Authors: Hwang HJ, Kwon K, Im CH

In the present study, we propose a neurofeedback-based motor imagery training system for EEG-based brain-computer interface (BCI). The proposed system can help individuals get the feel of motor imagery by presenting them with real-time brain activation maps on their cortex. Ten healthy participants took part in our experiment, half of whom were trained by the suggested training system and the others did not use any training. All participants in the trained group succeeded in performing motor imagery after a series of trials to activate their motor cortex without any physical movements of their limbs. To confirm the effect of the suggested system, we recorded EEG signals for the trained group around sensorimotor cortex while they were imagining either left or right hand movements according to our experimental design, before and after the motor imagery training. For the control group, we also recorded EEG signals twice without any training sessions. The participants' intentions were then classified using a time-frequency analysis technique, and the results of the trained group showed significant differences in the sensorimotor rhythms between the signals recorded before and after training. Classification accuracy was also enhanced considerably in all participants after motor imagery training, compared to the accuracy before training. On the other hand, the analysis results for the control EEG data set did not show consistent increment in both the number of meaningful time-frequency combinations and the classification accuracy, demonstrating that the suggested system can be used as a tool for training motor imagery tasks in BCI applications. Further, we expect that the motor imagery training system will be useful not only for BCI applications, but for functional brain mapping studies that utilize motor imagery tasks as well.
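The left- vs right-hand discrimination described in this abstract typically rests on sensorimotor-rhythm lateralization: imagining one hand suppresses mu/beta band power over the contralateral motor cortex (electrode C3 for the right hand, C4 for the left). A toy sketch of that decision rule on synthetic signals; this is a generic illustration of the principle, not the authors' actual time-frequency pipeline, and the electrode names, sampling rate and band limits are assumptions:

```python
import numpy as np

def bandpower(signal, fs, f_lo, f_hi):
    """Mean power of `signal` in the [f_lo, f_hi] Hz band via the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

def classify_hand(c3, c4, fs=250, band=(8, 12)):
    """Return 'left' or 'right' from mu-band lateralization:
    right-hand imagery suppresses mu power over C3 (left hemisphere)."""
    p3 = bandpower(c3, fs, *band)
    p4 = bandpower(c4, fs, *band)
    return "right" if p3 < p4 else "left"

# Synthetic demo: a 10 Hz mu rhythm, attenuated over C3 as during
# right-hand motor imagery, intact over C4.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
mu = np.sin(2 * np.pi * 10 * t)
c3 = 0.3 * mu + 0.1 * rng.standard_normal(t.size)  # suppressed mu
c4 = 1.0 * mu + 0.1 * rng.standard_normal(t.size)  # intact mu
print(classify_hand(c3, c4, fs))
```

Real BCI classifiers replace the simple power comparison with a trained model (e.g., LDA on band-power features), but the lateralized mu suppression they exploit is the same phenomenon the neurofeedback training above helps participants learn to produce.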

Neurofeedback and brain-computer interface clinical applications

Neurofeedback and brain-computer interface clinical applications.

Int Rev Neurobiol. 2009;86:107-17

Authors: Birbaumer N, Ramos Murguialday A, Weber C, Montoya P

Most of the research devoted to BMI development consists of methodological studies comparing different online mathematical algorithms, ranging from simple linear discriminant analysis (LDA) (Dornhege et al., 2007) to nonlinear artificial neural networks (ANNs) or support vector machine (SVM) classification. Single cell spiking for the reconstruction of hand movements requires different statistical solutions than electroencephalography (EEG)-rhythm classification for communication. In general, the algorithms for BMI applications are computationally simple, and differences in classification accuracy between algorithms used for a particular purpose are small. Only a very limited number of clinical studies with neurological patients are available, most of them single case studies. The clinical target populations for BMI treatment consist primarily of patients with amyotrophic lateral sclerosis (ALS) and severe CNS damage, including spinal cord injuries and stroke, resulting in substantial deficits in communication and motor function. However, an extensive body of literature using neurofeedback training started in the 1970s. Such training, implemented to control various EEG measures, provided solid evidence of positive effects in patients with otherwise pharmacologically intractable epilepsy, attention deficit disorder, and attention deficit hyperactivity disorder (ADHD). More recently, the successful introduction and testing of real-time fMRI and a NIRS-BMI opened an exciting field of interest in patients with psychopathological conditions.

Jul 06, 2009

Thought-controlled wheelchairs

Via Sentient Development

The BSI-Toyota Collaboration Center (BTCC) is developing a wheelchair that can be navigated in real-time with brain waves. The brain-controlled device can adjust itself to the characteristics of each individual user, thereby improving the efficiency with which it senses the driver's commands. That way, the driver is able to get the system to learn his/her commands (forward/right/left) quickly and efficiently; the system boasts an accuracy rate of 95%.

Feb 16, 2009

Improving the performance of brain-computer interface through meditation

Improving the performance of brain-computer interface through meditation practicing.

Conf Proc IEEE Eng Med Biol Soc. 2008;1:662-5

Authors: Eskandari P, Erfanian A

Cognitive tasks using motor imagery have been used for generating and controlling EEG activity in most brain-computer interfaces (BCIs). Nevertheless, during the performance of a particular mental task, factors such as concentration, attention, level of consciousness, and the difficulty of the task may affect the changes in EEG activity. Accordingly, training the subject to consistently and reliably produce and control changes in the EEG signals is a critical issue in developing a BCI system. In this work, we used meditation practice to enhance mind controllability during the performance of a mental task in a BCI system. The mental states to be discriminated are imagined hand movement and the idle state. The experiments were conducted on two groups of subjects: a meditation group and a control group. Time-frequency analysis of the EEG signals of meditation practitioners showed an event-related desynchronization (ERD) of the beta rhythm before imagination, during the resting state. In addition, a strong event-related synchronization (ERS) of the beta rhythm was induced at frequencies around 25 Hz during hand motor imagery. The results demonstrated that meditation practice can improve the classification accuracy of EEG patterns. The average classification accuracy was 88.73% in the meditation group, versus 70.28% in the control group. An accuracy as high as 98.0% was achieved in the meditation group.
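The ERD/ERS measure the abstract relies on is conventionally computed as the percent change of band power during the task relative to a rest baseline. A minimal sketch, with a synthetic 25 Hz beta component and invented amplitudes standing in for real EEG:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` within [f_lo, f_hi] Hz, via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

def erd_percent(task, rest, fs, f_lo=18.0, f_hi=30.0):
    """ERD/ERS as percent band-power change relative to the rest baseline.
    Negative values indicate desynchronization (ERD), positive values
    indicate synchronization (ERS)."""
    p_task = band_power(task, fs, f_lo, f_hi)
    p_rest = band_power(rest, fs, f_lo, f_hi)
    return 100.0 * (p_task - p_rest) / p_rest

# Synthetic example: a 25 Hz beta component that grows during "imagery" (ERS)
fs = 256
t = np.arange(fs) / fs
rest = 0.5 * np.sin(2 * np.pi * 25 * t)
task = 1.5 * np.sin(2 * np.pi * 25 * t)
change = erd_percent(task, rest, fs)   # positive: beta ERS during imagery
```

Tripling the amplitude multiplies band power by nine, so `change` comes out strongly positive, matching the beta ERS the authors report during motor imagery.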

Jan 20, 2009

Functional network reorganization during learning in a brain-computer interface paradigm

Functional network reorganization during learning in a brain-computer interface paradigm.

Proc Natl Acad Sci U S A. 2008 Dec 1;

Authors: Jarosiewicz B, Chase SM, Fraser GW, Velliste M, Kass RE, Schwartz AB

Efforts to study the neural correlates of learning are hampered by the size of the network in which learning occurs. To understand the importance of learning-related changes in a network of neurons, it is necessary to understand how the network acts as a whole to generate behavior. Here we introduce a paradigm in which the output of a cortical network can be perturbed directly and the neural basis of the compensatory changes studied in detail. Using a brain-computer interface, dozens of simultaneously recorded neurons in the motor cortex of awake, behaving monkeys are used to control the movement of a cursor in a three-dimensional virtual-reality environment. This device creates a precise, well-defined mapping between the firing of the recorded neurons and an expressed behavior (cursor movement). In a series of experiments, we force the animal to relearn the association between neural firing and cursor movement in a subset of neurons and assess how the network changes to compensate. We find that changes in neural activity reflect not only an alteration of behavioral strategy but also the relative contributions of individual neurons to the population error signal.
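The mapping the paradigm depends on, from recorded firing rates to cursor movement, can be caricatured with a classic population-vector readout. The cosine-tuning model, unit count, and firing rates below are invented for illustration and are not the decoder the authors actually used:

```python
import numpy as np

def population_vector(rates, baselines, pref_dirs):
    """Decode a 2-D movement vector as the modulation-weighted sum of each
    neuron's preferred direction (a simple population-vector readout)."""
    return (rates - baselines) @ pref_dirs   # (n,) @ (n, 2) -> (2,)

# Invented example: 40 cosine-tuned units with evenly spaced preferred directions
n = 40
angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
pref_dirs = np.column_stack([np.cos(angles), np.sin(angles)])
baselines = np.full(n, 10.0)                       # spikes/s at rest

intended = np.array([1.0, 0.0])                    # intended rightward movement
rates = baselines + 5.0 * (pref_dirs @ intended)   # cosine tuning around baseline

decoded = population_vector(rates, baselines, pref_dirs)
decoded /= np.linalg.norm(decoded)                 # direction only
```

Because the decoder is an explicit, fixed mapping like this, the experimenters can perturb it for a chosen subset of units and then measure exactly how the population's activity reorganizes to compensate.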

Dec 01, 2008

Brain-machine interface via real-time fMRI

Brain-machine interface via real-time fMRI: Preliminary study on thought-controlled robotic arm.

Neurosci Lett. 2008 Nov 18;

Authors: Lee JH, Ryu J, Jolesz FA, Cho ZH, Yoo SS

Real-time functional MRI (rtfMRI) has been used as a basis for brain-computer interfaces (BCIs) due to its ability to characterize region-specific brain activity in real time. As an extension of BCI, we present an rtfMRI-based brain-machine interface (BMI) whereby two-dimensional movement of a robotic arm was controlled by the regulation (and concurrent detection) of regional cortical activations in the primary motor areas. To do so, the subjects were engaged in right- and/or left-hand motor imagery tasks. The blood oxygenation level dependent (BOLD) signal originating from the corresponding hand motor areas was then translated into horizontal or vertical robotic arm movement. The movement was broadcast visually back to the subject as feedback. We demonstrated that real-time control of the robotic arm solely through the subjects' thought processes was possible in the rtfMRI-based BMI trials.
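The control scheme described, one motor ROI per movement axis, amounts to thresholding two activation levels into discrete arm steps. A hypothetical sketch; the ROI-to-axis assignment, the function name, and the threshold value are assumptions for illustration, not the authors' published parameters:

```python
def bold_to_command(left_m1_psc, right_m1_psc, threshold=1.0):
    """Map percent-signal-change in the left/right primary motor ROIs to a
    (horizontal, vertical) robot-arm step. The axis assignment here is an
    assumption: right-hand imagery (left M1) -> horizontal, left-hand
    imagery (right M1) -> vertical."""
    dx = 1 if left_m1_psc > threshold else 0
    dy = 1 if right_m1_psc > threshold else 0
    return dx, dy
```

Imagining both hands at once would then drive a diagonal step, which is why concurrent detection of the two ROIs matters in such a scheme.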

Nov 04, 2008

Brain Controlled Cell Phones

Via Textually.org

NeuroSky Inc., a venture company based in San Jose, Calif., has prototyped a system that reads brain waves with a sensor and uses them for mobile-phone applications.

Software algorithms try to deduce from your brainwaves what you are thinking and pass on the appropriate commands to the cell phone.

Jul 09, 2008

Brain motor system function in a patient with complete spinal cord injury

Brain motor system function in a patient with complete spinal cord injury following extensive brain-computer interface training.

Exp Brain Res. 2008 Jul 1;

Authors: Enzinger C, Ropele S, Fazekas F, Loitfelder M, Gorani F, Seifert T, Reiter G, Neuper C, Pfurtscheller G, Müller-Putz G

Although several features of brain motor function appear to be preserved even in chronic complete SCI, previous functional MRI (fMRI) studies have also identified significant derangements, such as a strongly reduced volume of activation, poor modulation of function, and abnormal activation patterns. It might be speculated that extensive motor imagery training may serve to prevent such abnormalities. We here report on a unique patient with a complete traumatic SCI below C5 who learned to elicit electroencephalographic beta-bursts in the midline region upon imagination of foot movements. This enabled him to use a neuroprosthesis and to "walk from thought" in a virtual environment via a brain-computer interface (BCI). We used fMRI at 3T during imagined hand and foot movements to investigate the effects of motor imagery via persistent BCI training over 8 years on brain motor function, and compared these findings to a group of five untrained healthy age-matched volunteers during executed and imagined movements. We observed robust primary sensorimotor cortex (SMC) activity in the expected somatotopy in the tetraplegic patient upon movement imagination, while such activation was absent in healthy untrained controls. Sensorimotor network activation with motor imagery in the patient (including the SMC contralateral to, and the cerebellum ipsilateral to, the imagined side of movement, as well as supplementary motor areas) was very similar to the pattern observed with actual movement in the controls. We interpret our findings as evidence that BCI training, as a conduit of motor imagery training, may assist in maintaining access to the SMC in largely preserved somatotopy despite complete deafferentation.