Apr 15, 2014

Brain-computer interfaces: a powerful tool for scientific inquiry

Brain-computer interfaces: a powerful tool for scientific inquiry.

Curr Opin Neurobiol. 2014 Apr;25C:70-75

Authors: Wander JD, Rao RP

Abstract. Brain-computer interfaces (BCIs) are devices that record from the nervous system, provide input directly to the nervous system, or do both. Sensory BCIs such as cochlear implants have already had notable clinical success and motor BCIs have shown great promise for helping patients with severe motor deficits. Clinical and engineering outcomes aside, BCIs can also be tremendously powerful tools for scientific inquiry into the workings of the nervous system. They allow researchers to inject and record information at various stages of the system, permitting investigation of the brain in vivo and facilitating the reverse engineering of brain function. Most notably, BCIs are emerging as a novel experimental tool for investigating the tremendous adaptive capacity of the nervous system.

A Hybrid Brain Computer Interface System Based on the Neurophysiological Protocol and Brain-actuated Switch for Wheelchair Control

A Hybrid Brain Computer Interface System Based on the Neurophysiological Protocol and Brain-actuated Switch for Wheelchair Control.

J Neurosci Methods. 2014 Apr 5;

Authors: Cao L, Li J, Ji H, Jiang C

BACKGROUND: Brain Computer Interfaces (BCIs) are developed to translate brain waves into machine instructions for the control of external devices. Recently, hybrid BCI systems have been proposed for multi-degree control of a real wheelchair, to improve the overall efficiency of traditional BCIs. However, it is difficult for existing hybrid BCIs to implement multi-dimensional control in a single command cycle.
NEW METHOD: This paper proposes a novel hybrid BCI system that combines motor imagery (MI)-based bio-signals and steady-state visual evoked potentials (SSVEPs) to control the speed and direction of a real wheelchair synchronously. Furthermore, a switch based on the hybrid modalities is designed, for the first time, to turn the wheelchair's control system on and off.
RESULTS: Two experiments were performed to assess the proposed BCI system. One was used for training; the other was a wheelchair-control task in a real environment. All subjects completed these tasks successfully, and no collisions occurred in the real wheelchair-control experiment.
COMPARISON WITH EXISTING METHOD(S): The protocol of our BCI provided many more control commands than those of previous MI- and SSVEP-based BCIs. Compared with other BCI wheelchair systems, the superiority reflected by the path-length optimality ratio validated the high efficiency of our control strategy.
CONCLUSIONS: The results validated the efficiency of our hybrid BCI system for controlling the direction and speed of a real wheelchair, as well as the reliability of the hybrid-signal switch control.
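
To make the control scheme concrete, here is a minimal Python sketch of how an MI decision and an SSVEP decision could be fused into a single wheelchair command per cycle. The class labels, target frequencies and speed steps are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): fusing a motor-imagery
# decision and an SSVEP decision into one wheelchair command per command cycle.
# Class labels, SSVEP target frequencies, and speed steps are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WheelchairCommand:
    direction: str   # "left", "right", or "straight"
    speed: float     # m/s

def fuse_hybrid_decisions(mi_label: str, ssvep_freq_hz: Optional[float],
                          current_speed: float) -> WheelchairCommand:
    """Map one cycle's MI and SSVEP classifications to a wheelchair command.

    mi_label: output of an MI classifier, e.g. "left_hand", "right_hand", "rest".
    ssvep_freq_hz: dominant SSVEP target frequency, or None if no target detected.
    """
    # Hypothetical mapping: SSVEP targets at 10 Hz / 15 Hz change speed,
    # while motor imagery chooses the steering direction.
    speed = current_speed
    if ssvep_freq_hz is not None:
        if abs(ssvep_freq_hz - 10.0) < 0.5:
            speed = min(current_speed + 0.1, 1.0)   # accelerate
        elif abs(ssvep_freq_hz - 15.0) < 0.5:
            speed = max(current_speed - 0.1, 0.0)   # decelerate

    direction = {"left_hand": "left", "right_hand": "right"}.get(mi_label, "straight")
    return WheelchairCommand(direction=direction, speed=speed)

if __name__ == "__main__":
    cmd = fuse_hybrid_decisions("left_hand", 10.0, current_speed=0.3)
    print(cmd)   # WheelchairCommand(direction='left', speed=0.4)
```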

Mar 02, 2014

3D Thought controlled environment via Interaxon

In this demo video, artist Alex McLeod shows an environment he designed for Interaxon to use at CES in 2011 (interaxon.ca/CES#).

The glasses display the scene in 3D, and attached sensors read users' brain states, which control elements of the scene.

3D Thought controlled environment via Interaxon from Alex McLeod on Vimeo.

Dec 24, 2013

Speaking and cognitive distractions during EEG-based brain control of a virtual neuroprosthesis-arm

Speaking and cognitive distractions during EEG-based brain control of a virtual neuroprosthesis-arm.

J Neuroeng Rehabil. 2013 Dec 21;10(1):116

Authors: Foldes ST, Taylor DM

BACKGROUND: Brain-computer interface (BCI) systems have been developed to provide paralyzed individuals the ability to command the movements of an assistive device using only their brain activity. BCI systems are typically tested in a controlled laboratory environment where the user is focused solely on the brain-control task. However, for practical use in everyday life, people must be able to use their brain-controlled device while mentally engaged with the cognitive responsibilities of daily activities and while compensating for any inherent dynamics of the device itself. BCIs that use electroencephalography (EEG) for movement control are often assumed to require significant mental effort, thus preventing users from thinking about anything else while using their BCI. This study tested the impact of cognitive load as well as speaking on the ability to use an EEG-based BCI. FINDINGS: Six participants controlled the two-dimensional (2D) movements of a simulated neuroprosthesis-arm under three different levels of cognitive distraction. The two higher cognitive load conditions also required speaking simultaneously during BCI use. On average, movement performance declined during higher levels of cognitive distraction, but only by a limited amount. Movement completion time increased by 7.2%, the percentage of targets successfully acquired declined by 11%, and path efficiency declined by 8.6%. Only the declines in percentage of targets acquired and in path efficiency were statistically significant (p < 0.05). CONCLUSION: People who have relatively good movement control of an EEG-based BCI may be able to speak and perform other cognitively engaging activities with only a minor drop in BCI-control performance.
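
For readers unfamiliar with these performance measures, the snippet below shows one conventional way to compute completion time, percentage of targets acquired and path efficiency from cursor trajectories. The exact definitions used by Foldes and Taylor may differ; this is only an illustration.

```python
# A minimal sketch of the kind of movement metrics reported above (completion
# time, percentage of targets acquired, path efficiency), computed from
# simulated trial data.
import numpy as np

def path_efficiency(trajectory: np.ndarray) -> float:
    """Straight-line distance from start to end divided by the path length travelled."""
    straight = np.linalg.norm(trajectory[-1] - trajectory[0])
    travelled = np.sum(np.linalg.norm(np.diff(trajectory, axis=0), axis=1))
    return float(straight / travelled) if travelled > 0 else 0.0

def summarize_trials(trials):
    """trials: list of dicts with keys 'trajectory' (N x 2 array), 'time_s', 'hit' (bool)."""
    return {
        "mean_completion_time_s": float(np.mean([t["time_s"] for t in trials])),
        "percent_targets_acquired": 100.0 * float(np.mean([t["hit"] for t in trials])),
        "mean_path_efficiency": float(np.mean([path_efficiency(t["trajectory"]) for t in trials])),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = [{"trajectory": np.cumsum(rng.normal(size=(50, 2)), axis=0),
             "time_s": 4.0 + rng.random(), "hit": bool(rng.random() > 0.2)}
            for _ in range(20)]
    print(summarize_trials(demo))
```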

Dec 08, 2013

How to use mind-controlled robots in manufacturing, medicine

via KurzweilAI

University at Buffalo researchers are developing brain-computer interface (BCI) devices to mentally control robots.

“The technology has practical applications that we’re only beginning to explore,” said Thenkurussi “Kesh” Kesavadas, PhD, UB professor of mechanical and aerospace engineering and director of UB’s Virtual Reality Laboratory. “For example, it could help paraplegic patients to control assistive devices, or it could help factory workers perform advanced manufacturing tasks.”

Most BCI research has involved expensive, invasive devices that are implanted in the brain and used mostly to help disabled people.

UB research relies on a relatively inexpensive ($750), non-invasive external device (Emotiv EPOC). It reads EEG brain activity with 14 sensors and transmits the signal wirelessly to a computer, which then sends signals to the robot to control its movements.
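
Conceptually, the pipeline is simple: acquire a window of multi-channel EEG, extract a feature, classify it, and forward a discrete command to the robot controller. The sketch below illustrates this flow with a placeholder acquisition function and a toy mu-band-power rule; it is not the Emotiv SDK nor the UB lab's actual code, and the robot endpoint is an assumption.

```python
# Hedged sketch of the described pipeline: 14-channel EEG arrives from a headset,
# a feature is extracted, and a discrete command is forwarded to a robot
# controller. Acquisition, threshold and endpoint are placeholders.
import socket
import numpy as np

FS = 128            # EPOC sampling rate, Hz
N_CHANNELS = 14
WINDOW_S = 1.0

def get_eeg_window() -> np.ndarray:
    """Placeholder: return a (channels x samples) window from the headset driver."""
    return np.random.randn(N_CHANNELS, int(FS * WINDOW_S))

def mu_band_power(window: np.ndarray, fs: int = FS) -> float:
    """Average 8-12 Hz power across channels, via an FFT periodogram."""
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(window, axis=1)) ** 2
    band = (freqs >= 8) & (freqs <= 12)
    return float(psd[:, band].mean())

def classify(power: float, threshold: float = 50.0) -> str:
    """Toy rule: a drop in mu-band power (event-related desynchronization) means 'move'."""
    return "MOVE_ARM" if power < threshold else "HOLD"

def send_command(cmd: str, host: str = "127.0.0.1", port: int = 9000) -> None:
    """Forward the decision to a (hypothetical) robot controller listening on a socket."""
    with socket.create_connection((host, port), timeout=1.0) as s:
        s.sendall(cmd.encode() + b"\n")

if __name__ == "__main__":
    window = get_eeg_window()
    print(classify(mu_band_power(window)))
```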

Kesavadas recently demonstrated the technology with Pramod Chembrammel, a doctoral student in his lab.  Chembrammel trained with the instrument for a few days, then used the device to control a robotic arm.

He used the arm to insert a wood peg into a hole and rotate the peg. “It was incredible to see the robot respond to my thoughts,” Chembrammel said. “It wasn’t even that difficult to learn how to use the device.”

The video (below) shows that a simple set of instructions can be combined to execute more complex robotic actions, Kesavadas said. Such robots could be used by factory workers to perform hands-free assembly of products, or carry out tasks like drilling or welding.

The potential advantage, Kesavadas said, is that BCI-controlled devices could reduce the tedium of performing repetitious tasks and improve worker safety and productivity. The devices can also leverage the worker’s decision-making skills, such as identifying a faulty part in an automated assembly line.

Nov 29, 2013

Real-time Neurofeedback Using Functional MRI Could Improve Down-Regulation of Amygdala Activity During Emotional Stimulation

Real-time Neurofeedback Using Functional MRI Could Improve Down-Regulation of Amygdala Activity During Emotional Stimulation: A Proof-of-Concept Study.

Brain Topogr. 2013 Nov 16;

Authors: Brühl AB, Scherpiet S, Sulzer J, Stämpfli P, Seifritz E, Herwig U

Abstract. The amygdala is a central target of emotion regulation. It is overactive and dysregulated in affective and anxiety disorders, and amygdala activity normalizes with successful therapy of the symptoms. However, a considerable percentage of patients do not reach remission within an acceptable duration of treatment. The amygdala could therefore represent a promising target for real-time functional magnetic resonance imaging (rtfMRI) neurofeedback. rtfMRI neurofeedback directly improves the voluntary regulation of localized brain activity. At present, most rtfMRI neurofeedback studies have trained participants to increase activity of a target, i.e. up-regulation. However, in the case of the amygdala, down-regulation is supposedly more clinically relevant. Therefore, we developed a task that trained participants to down-regulate activity of the right amygdala while being confronted with amygdala stimulation, i.e. negative emotional faces. The activity in the functionally-defined region was used as online visual feedback in six healthy subjects instructed to minimize this signal using reality checking as an emotion regulation strategy. Over a period of four training sessions, participants significantly increased down-regulation of the right amygdala compared to a passive viewing condition used to control for habituation effects. This result supports the concept of using rtfMRI neurofeedback training to control brain activity during relevant stimulation, specifically in the case of emotion, and has implications for the clinical treatment of emotional disorders.
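
The feedback loop itself can be pictured as follows: the mean BOLD signal of the functionally defined amygdala ROI is converted to a percent-signal-change value and then to a visual feedback level that the participant tries to push down. The sketch below is a simplified illustration with invented parameters, not the authors' pipeline.

```python
# Conceptual sketch of an rtfMRI neurofeedback loop: the mean signal of a
# functionally defined ROI is converted to a feedback value that the participant
# tries to drive down. ROI definition, baseline handling and scaling are
# simplified placeholders.
import numpy as np

def roi_mean(volume: np.ndarray, roi_mask: np.ndarray) -> float:
    """Mean BOLD signal inside the ROI for one acquired volume."""
    return float(volume[roi_mask].mean())

def percent_signal_change(current: float, baseline: float) -> float:
    return 100.0 * (current - baseline) / baseline

def feedback_level(psc: float, max_abs_psc: float = 2.0, n_levels: int = 10) -> int:
    """Map percent signal change to a 0..n_levels-1 thermometer bar
    (lower is better for down-regulation)."""
    clipped = np.clip(psc, -max_abs_psc, max_abs_psc)
    return int(round((clipped + max_abs_psc) / (2 * max_abs_psc) * (n_levels - 1)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    roi = np.zeros((4, 4, 4), dtype=bool)
    roi[1:3, 1:3, 1:3] = True                            # toy ROI mask
    baseline = 1000.0
    vol = baseline + rng.normal(0, 5, size=(4, 4, 4))    # one simulated EPI volume
    print(feedback_level(percent_signal_change(roi_mean(vol, roi), baseline)))
```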

Nov 28, 2013

Effect of mindfulness meditation on brain-computer interface performance

Effect of mindfulness meditation on brain-computer interface performance.

Conscious Cogn. 2013 Nov 22;23C:12-21

Authors: Tan LF, Dienes Z, Jansari A, Goh SY

Abstract. Electroencephalogram-based Brain-Computer Interfaces (BCIs) enable stroke and motor neuron disease patients to communicate and control devices. Mindfulness meditation has been claimed to enhance metacognitive regulation. The current study explores whether mindfulness meditation training can thus improve the performance of BCI users. To eliminate the possibility of expectation of improvement influencing the results, we introduced a music training condition. A norming study found that both meditation and music interventions elicited clear expectations for improvement on the BCI task, with the strength of expectation being closely matched. In the main 12-week intervention study, seventy-six healthy volunteers were randomly assigned to three groups: a meditation training group; a music training group; and a no-treatment control group. The mindfulness meditation training group obtained a significantly higher BCI accuracy compared to both the music training and no-treatment control groups after the intervention, indicating effects of meditation above and beyond expectancy effects.

Nov 16, 2013

Monkeys Control Avatar’s Arms Through Brain-Machine Interface

Via Medgadget

Researchers at Duke University have reported in the journal Science Translational Medicine that they were able to train monkeys to control two virtual limbs through a brain-computer interface (BCI). The rhesus monkeys initially used joysticks to become comfortable moving the avatar’s arms, but later the brain-computer interfaces implanted in their brains were activated to allow the monkeys to drive the avatar using only their minds. Two years ago the same team was able to train monkeys to control one arm, but the complexity of controlling two arms required the development of a new algorithm for reading and filtering the signals. Moreover, the monkey brains themselves showed great adaptation to the training with the BCI, building new neural pathways to help improve how the monkeys moved the virtual arms. As the authors of the study note in the abstract, “These findings should help in the design of more sophisticated BMIs capable of enabling bimanual motor control in human patients.”

Here’s a video of one of the avatars being controlled to tap on the white balls:

Nov 03, 2013

Neurocam wearable camera reads your brainwaves and records what interests you

Via KurzweilAI.net

The neurocam is the world’s first wearable camera system that automatically records what interests you, based on brainwaves, DigInfo TV reports.

It consists of a headset with a brain-wave sensor and uses the iPhone’s camera to record a 5-second GIF animation. It could also be useful for life-logging.
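
The trigger logic such a system implies is roughly: maintain a running "interest" score from the EEG and start a short recording when it stays above a threshold. The sketch below illustrates the idea with a made-up scoring function and threshold; the actual algorithm developed at Keio University is not described in this post.

```python
# Rough illustration only: a running "interest" score derived from EEG triggers a
# short recording when it crosses a threshold, as the neurocam is described to do.
# The scoring function and threshold are invented placeholders.
from collections import deque

CLIP_SECONDS = 5
THRESHOLD = 0.7          # hypothetical interest threshold (0..1)

def interest_score(eeg_window) -> float:
    """Placeholder for the proprietary brain-wave scoring algorithm."""
    return sum(eeg_window) / len(eeg_window)

def monitor(stream, camera):
    """stream yields 1-second EEG windows; camera.record(s) captures s seconds."""
    recent = deque(maxlen=3)
    for window in stream:
        recent.append(interest_score(window))
        if len(recent) == recent.maxlen and min(recent) > THRESHOLD:
            camera.record(CLIP_SECONDS)
            recent.clear()          # avoid immediately re-triggering

class FakeCamera:
    def record(self, seconds):
        print(f"recording {seconds}s clip")

if __name__ == "__main__":
    fake_stream = [[0.2] * 10, [0.8] * 10, [0.9] * 10, [0.95] * 10, [0.3] * 10]
    monitor(iter(fake_stream), FakeCamera())
```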

The algorithm for quantifying brain waves was co-developed by Associate Professor Mitsukura at Keio University.

The project team plans to create an emotional interface.

Aug 07, 2013

Pupil responses allow communication in locked-in syndrome patients

Pupil responses allow communication in locked-in syndrome patients.

Josef Stoll et al., Current Biology, Volume 23, Issue 15, R647-R648, 5 August 2013

For patients with severe motor disabilities, a robust means of communication is a crucial factor for their well-being. We report here that pupil size measured by a bedside camera can be used to communicate with patients with locked-in syndrome. With the same protocol we demonstrate command-following for a patient in a minimally conscious state, suggesting its potential as a diagnostic tool for patients whose state of consciousness is in question. Importantly, neither training nor individual adjustment of our system’s decoding parameters was required for successful decoding of patients’ responses.
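
The decoding principle can be illustrated very simply: mental effort dilates the pupil, so the average pupil diameter in the response window assigned to "yes" can be compared with the window assigned to "no". The sketch below uses illustrative window timings and a simulated trace; it is not the paper's exact protocol.

```python
# Minimal sketch of the decoding idea: effortful mental activity dilates the pupil,
# so comparing mean pupil size in the "yes" window vs. the "no" window indicates
# the intended answer. Window timings and the simulated trace are assumptions.
import numpy as np

def decode_answer(pupil_trace: np.ndarray, fs: float,
                  yes_window=(2.0, 7.0), no_window=(9.0, 14.0)) -> str:
    """pupil_trace: pupil diameter samples; windows in seconds from trial onset."""
    def mean_in(window):
        i0, i1 = int(window[0] * fs), int(window[1] * fs)
        return pupil_trace[i0:i1].mean()
    return "yes" if mean_in(yes_window) > mean_in(no_window) else "no"

if __name__ == "__main__":
    fs = 60.0                                  # eye-tracker sampling rate, Hz
    t = np.arange(0, 16, 1 / fs)
    trace = 3.0 + 0.05 * np.random.randn(t.size)
    trace[(t >= 2) & (t < 7)] += 0.4           # simulated dilation during the "yes" window
    print(decode_answer(trace, fs))            # -> "yes"
```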

Paper full text PDF

Pupils used to communicate with people with locked-in syndrome

Image credit: Flickr user Beth77

May 26, 2013

A Hybrid Brain-Computer Interface-Based Mail Client

A Hybrid Brain-Computer Interface-Based Mail Client.

Comput Math Methods Med. 2013;2013:750934

Authors: Yu T, Li Y, Long J, Li F

Abstract. Brain-computer interface-based communication plays an important role in brain-computer interface (BCI) applications; electronic mail is one of the most common communication tools. In this study, we propose a hybrid BCI-based mail client that implements electronic mail communication by means of real-time classification of multimodal features extracted from scalp electroencephalography (EEG). With this BCI mail client, users can receive, read, write, and attach files to their mail. Using a BCI mouse that utilizes hybrid brain signals, that is, motor imagery and P300 potential, the user can select and activate the function keys and links on the mail client graphical user interface (GUI). An adaptive P300 speller is employed for text input. The system has been tested with 6 subjects, and the experimental results validate the efficacy of the proposed method.
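
The P300 selection principle underlying such spellers can be sketched in a few lines: epochs time-locked to each flashed item are averaged, and the item whose average response shows the largest positivity around 300 ms is selected. The feature and decision rule below are toy stand-ins, not the adaptive classifier actually used in the paper.

```python
# Simplified sketch of P300-based selection: average the epochs time-locked to each
# flashed item and pick the item with the strongest positivity around 300 ms.
import numpy as np

def p300_score(epochs: np.ndarray, fs: float, window=(0.25, 0.45)) -> float:
    """epochs: (n_repetitions x n_samples) single-channel EEG, time-locked to the flash."""
    avg = epochs.mean(axis=0)
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    return float(avg[i0:i1].mean())

def select_item(epochs_per_item: dict, fs: float) -> str:
    """Pick the flashed item with the strongest average P300 response."""
    return max(epochs_per_item, key=lambda item: p300_score(epochs_per_item[item], fs))

if __name__ == "__main__":
    fs, n_samp = 256.0, 180
    rng = np.random.default_rng(2)
    items = {c: rng.normal(0, 1, size=(10, n_samp)) for c in "ABC"}
    items["B"][:, int(0.3 * fs):int(0.4 * fs)] += 2.0   # simulated P300 on target "B"
    print(select_item(items, fs))                        # -> "B"
```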

Mar 03, 2013

Brain-to-brain communication between rats achieved

From Duke Medicine News and Communications

Researchers at Duke University Medical Center in the US report in the February 28, 2013 issue of Scientific Reports the successful wiring together of sensory areas in the brains of two rats. The result of the experiment is that one rat will respond to the experiences to which the other is exposed.

Brain-to-brain interface with rats | Duke

The results of these projects suggest the future potential for linking multiple brains to form what the research team is calling an "organic computer," which could allow sharing of motor and sensory information among groups of animals.

"Our previous studies with brain-machine interfaces had convinced us that the rat brain was much more plastic than we had previously thought," said Miguel Nicolelis, M.D., PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine. "In those experiments, the rat brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we asked was, ‘if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?’"

To test this hypothesis, the researchers first trained pairs of rats to solve a simple problem: to press the correct lever when an indicator light above the lever switched on, which rewarded the rats with a sip of water. They next connected the two animals' brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.

One of the two rodents was designated as the "encoder" animal. This animal received a visual cue that showed it which lever to press in exchange for a water reward. Once this “encoder” rat pressed the right lever, a sample of its brain activity that coded its behavioral decision was translated into a pattern of electrical stimulation that was delivered directly into the brain of the second rat, known as the "decoder" animal.

The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward. Therefore, to press the correct lever and receive the reward it craved, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain interface.

The researchers then conducted trials to determine how well the decoder animal could decipher the brain input from the encoder rat to choose the correct lever. The decoder rat ultimately achieved a maximum success rate of about 70 percent, only slightly below the possible maximum success rate of 78 percent that the researchers had theorized was achievable based on success rates of sending signals directly to the decoder rat’s brain.
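
Schematically, the interface turns the encoder's trial activity into a stimulation pattern that the decoder has learned to associate with one of the two levers. The sketch below illustrates that mapping with invented numbers; it is not the published transform.

```python
# Schematic sketch of the encoder-to-decoder transformation described above: the
# encoder's trial firing is compared to a template and converted into a train of
# intracortical microstimulation pulses for the decoder. Numbers and the mapping
# rule are illustrative, not the study's actual transform.
def encode_to_pulses(trial_spike_count: int, template_mean: float,
                     high_pulses: int = 20, low_pulses: int = 1) -> int:
    """Many pulses signal one lever choice, a single pulse signals the other."""
    return high_pulses if trial_spike_count > template_mean else low_pulses

def decoder_choice(n_pulses: int, threshold: int = 10) -> str:
    """The decoder rat is trained to press 'left' after many pulses, 'right' after few."""
    return "left" if n_pulses >= threshold else "right"

if __name__ == "__main__":
    template = 30.0                         # hypothetical mean spike count on 'right' trials
    for true_choice, spikes in [("left", 45), ("right", 22)]:
        pulses = encode_to_pulses(spikes, template)
        print(true_choice, "->", decoder_choice(pulses))
```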

Importantly, the communication provided by this brain-to-brain interface was two-way. For instance, the encoder rat did not receive a full reward if the decoder rat made a wrong choice. This peculiar contingency, said Nicolelis, led to the establishment of a "behavioral collaboration" between the pair of rats.

"We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behavior to make it easier for its partner to get it right," Nicolelis said. "The encoder improved the signal-to-noise ratio of its brain activity that represented the decision, so the signal became cleaner and easier to detect. And it made a quicker, cleaner decision to choose the correct lever to press. Invariably, when the encoder made those adaptations, the decoder got the right decision more often, so they both got a better reward."

In a second set of experiments, the researchers trained pairs of rats to distinguish between a narrow or wide opening using their whiskers. If the opening was narrow, they were taught to nose-poke a water port on the left side of the chamber to receive a reward; for a wide opening, they had to poke a port on the right side.

The researchers then divided the rats into encoders and decoders. The decoders were trained to associate stimulation pulses with the left reward poke as the correct choice, and an absence of pulses with the right reward poke as correct. During trials in which the encoder detected the opening width and transmitted the choice to the decoder, the decoder had a success rate of about 65 percent, significantly above chance.

To test the transmission limits of the brain-to-brain communication, the researchers placed an encoder rat in Brazil, at the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN), and transmitted its brain signals over the Internet to a decoder rat in Durham, N.C. They found that the two rats could still work together on the tactile discrimination task.

"So, even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate," said Miguel Pais-Vieira, PhD, a postdoctoral fellow and first author of the study. "This tells us that it could be possible to create a workable network of animal brains distributed in many different locations."

Nicolelis added, "These experiments demonstrated the ability to establish a sophisticated, direct communication linkage between rat brains, and that the decoder brain is working as a pattern-recognition device. So basically, we are creating an organic computer that solves a puzzle."

"But in this case, we are not inputting instructions, but rather only a signal that represents a decision made by the encoder, which is transmitted to the decoder’s brain which has to figure out how to solve the puzzle. So, we are creating a single central nervous system made up of two rat brains,” said Nicolelis.  He pointed out that, in theory, such a system is not limited to a pair of brains, but instead could include a network of brains, or “brain-net.” Researchers at Duke and at the ELS-IINN are now working on experiments to link multiple animals cooperatively to solve more complex behavioral tasks.

"We cannot predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves," continued Nicolelis. Such a connection might even mean that one animal would incorporate another's sense of "self," he said.

"In fact, our studies of the sensory cortex of the decoder rats in these experiments showed that the decoder's brain began to represent in its tactile cortex not only its own whiskers, but the encoder rat's whiskers, too. We detected cortical neurons that responded to both sets of whiskers, which means that the rat created a second representation of a second body on top of its own." Basic studies of such adaptations could lead to a new field that Nicolelis calls the "neurophysiology of social interaction."

Such complex experiments will be enabled by the laboratory's ability to record brain signals from almost 2,000 brain cells at once. The researchers hope to record the electrical activity produced simultaneously by 10,000-30,000 cortical neurons in the next five years.

Such massive brain recordings will enable more precise control of motor neuroprostheses—such as those being developed by the Walk Again Project—to restore motor control to paralyzed people, Nicolelis said.

More to explore:

Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J. & Nicolelis, M. A. L. Sci. Rep. 3, 1319 (2013). PubMed

Jul 14, 2012

Robot avatar body controlled by thought alone

Via: New Scientist

For the first time, a person lying in an fMRI machine has controlled a robot hundreds of kilometers away using thought alone.

"The ultimate goal is to create a surrogate, like in Avatar, although that’s a long way off yet,” says Abderrahmane Kheddar, director of the joint robotics laboratory at the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan.

Teleoperated robots, those that can be remotely controlled by a human, have been around for decades. Kheddar and his colleagues are going a step further. “True embodiment goes far beyond classical telepresence, by making you feel that the thing you are embodying is part of you,” says Kheddar. “This is the feeling we want to reach.”

To attempt this feat, researchers with the international Virtual Embodiment and Robotic Re-embodiment project used fMRI to scan the brain of university student Tirosh Shapira as he imagined moving different parts of his body. He attempted to direct a virtual avatar by thinking of moving his left or right hand or his legs.

The scanner works by measuring changes in blood flow to the brain’s primary motor cortex, and using these measurements the team was able to create an algorithm that could distinguish between each imagined movement. The commands were then sent via an internet connection to a small robot at the Béziers Technology Institute in France.

The set-up allowed Shapira to control the robot in near real time with his thoughts, while a camera on the robot’s head allowed him to see from the robot’s perspective. When he thought of moving his left or right hand, the robot moved 30 degrees to the left or right. Imagining moving his legs made the robot walk forward.
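
In software terms, the loop reduces to: classify the imagined movement from the motor-cortex activity pattern, map the label to a discrete robot action (turn 30 degrees or step forward), and send it over the network. The sketch below illustrates this mapping with a toy template-matching classifier and a placeholder message format; it is not the project's actual code.

```python
# Hedged sketch of the command mapping described above: a label derived from
# motor-cortex activity is translated into a discrete robot action and serialized
# for the remote robot. Classifier and message format are placeholders.
import json
import numpy as np

ACTIONS = {
    "left_hand":  {"command": "turn", "degrees": -30},
    "right_hand": {"command": "turn", "degrees": +30},
    "legs":       {"command": "walk", "steps": 1},
}

def classify_imagery(roi_pattern: np.ndarray, templates: dict) -> str:
    """Toy template-matching classifier: pick the class whose mean voxel pattern
    correlates best with the current trial's motor-cortex pattern."""
    return max(templates, key=lambda k: np.corrcoef(roi_pattern, templates[k])[0, 1])

def to_robot_message(label: str) -> bytes:
    """Serialize the chosen action for transmission to the robot controller."""
    return (json.dumps(ACTIONS[label]) + "\n").encode()

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    templates = {k: rng.normal(size=200) for k in ACTIONS}
    trial = templates["legs"] + rng.normal(0, 0.5, size=200)   # simulated 'legs' trial
    label = classify_imagery(trial, templates)
    print(label, to_robot_message(label))
```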

Read further at: http://www.newscientist.com/article/mg21528725.900-robot-avatar-body-controlled-by-thought-alone.html

Jul 05, 2012

A Real-Time fMRI-Based Spelling Device Immediately Enabling Robust Motor-Independent Communication

A Real-Time fMRI-Based Spelling Device Immediately Enabling Robust Motor-Independent Communication.

Curr Biol. 2012 Jun 27

Authors: Sorger B, Reithler J, Dahmen B, Goebel R

Human communication entirely depends on the functional integrity of the neuromuscular system. This is devastatingly illustrated in clinical conditions such as the so-called locked-in syndrome (LIS) [1], in which severely motor-disabled patients become incapable of communicating naturally, while being fully conscious and awake. For the last 20 years, research on motor-independent communication has focused on developing brain-computer interfaces (BCIs) implementing neuroelectric signals for communication (e.g., [2-7]), and BCIs based on electroencephalography (EEG) have already been applied successfully to concerned patients [8-11]. However, not all patients achieve proficiency in EEG-based BCI control [12]. Thus, more recently, hemodynamic brain signals have also been explored for BCI purposes [13-16]. Here, we introduce the first spelling device based on fMRI. By exploiting spatiotemporal characteristics of hemodynamic responses, evoked by performing differently timed mental imagery tasks, our novel letter encoding technique allows translating any freely chosen answer (letter-by-letter) into reliable and differentiable single-trial fMRI signals. Most importantly, automated letter decoding in real time enables back-and-forth communication within a single scanning session. Because the suggested spelling device requires little effort and pretraining, it is immediately operational and possesses high potential for clinical applications, both in terms of diagnostics and establishing short-term communication with nonresponsive and severely motor-impaired patients.
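
The letter-encoding idea can be illustrated as follows: each character is assigned a unique combination of mental-imagery task, onset delay and duration, so that single-trial hemodynamic time courses become distinguishable. The tasks, delays and durations in the sketch below are invented for illustration and do not reproduce the paper's encoding table.

```python
# Illustrative sketch of the letter-encoding idea: each letter maps to a mental
# imagery task plus an onset delay and duration within the trial. The specific
# tasks, delays and durations below are invented, not the paper's parameters.
from itertools import product

TASKS = ["motor_imagery", "mental_calculation", "inner_speech"]
ONSETS_S = [0, 5, 10]          # when to start the task within the encoding period
DURATIONS_S = [10, 15, 20]     # how long to sustain it

def build_letter_code(alphabet: str) -> dict:
    """Assign each letter a unique (task, onset, duration) combination."""
    combos = list(product(TASKS, ONSETS_S, DURATIONS_S))   # 3 x 3 x 3 = 27 combinations
    if len(alphabet) > len(combos):
        raise ValueError("not enough task/onset/duration combinations")
    return dict(zip(alphabet, combos))

if __name__ == "__main__":
    code = build_letter_code("ABCDEFGHIJKLMNOPQRSTUVWXYZ ")
    print(code["H"])   # the instruction the participant follows to encode 'H'
```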

May 07, 2012

CFP – Brain Computer Interfaces Grand Challenge 2012


(From the CFP website)

Sensors, such as wireless EEG caps, that provide us with information about brain activity are becoming available for use outside the medical domain. As in the case of physiological sensors, information derived from these sensors can be used as an information source for interpreting the user’s activity and intentions. For example, a user can use his or her brain activity to issue commands by means of motor imagery. But this control-oriented interaction is unreliable and inefficient compared to other available interaction modalities. Moreover, a user needs to remain almost motionless (sitting completely still) to generate artifact-free brain activity that can be recognized by the Brain-Computer Interface (BCI).

Of course, BCI systems are improving in various ways: improved sensors, better recognition techniques, software that is more usable, natural and context-aware, and hybridization with physiological sensors and other communication systems. New applications appear on the horizon and are being explored, such as motor recovery and entertainment. Testing and validation with target users in home settings are becoming more common. These and other developments are making BCIs increasingly practical for conventional users (persons with severe motor disabilities) as well as non-disabled users. But despite this progress, BCIs remain, as a control interface, quite limited in real-world settings. BCIs are slow and unreliable, particularly over extended periods with target users. BCIs require expert assistance in many ways; a typical end user today needs help to identify, buy, set up, configure, maintain, repair and upgrade the BCI. User-centered design is underappreciated, with BCIs meeting the goals and abilities of the designer rather than the user. Integration into the daily lives of people is just beginning. One reason why this integration is problematic is the view of the BCI as a control device, which stems from the origin of BCIs as control mechanisms for severely physically disabled people.

In this challenge (organised within the framework of the Call for Challenges at ICMI 2012), we propose to change this viewpoint and consider the BCI as an intelligent sensor, similar to a microphone or camera, which can be used in multimodal interaction. A typical example of using a BCI for the sonification of brain signals is the exposition Staalhemel, created by Christoph de Boeck. Staalhemel is an interactive installation with 80 steel segments suspended over the visitor’s head as he walks through the space. Tiny hammers tap rhythmic patterns on the steel plates, activated by the brainwaves of the visitor, who wears a portable BCI (EEG scanner). Thus, visitors directly interact with their surroundings, in this case an artistic installation.

The main challenges to research and develop BCIs as intelligent sensors include but are not limited to:

  • How could BCIs as intelligent sensors be integrated in multimodal HCI, HRI and HHI applications alongside other modes of input control?
  • What constitutes appropriate categories of adaptation (to challenge, to help, to promote positive emotion) in response to physiological data?
  • What are the added benefits of this approach with respect to user experience of HCI, HRI and HHI with respect to performance, safety and health?
  • How to present the state of the user in the context of HCI or HRI (representation to a machine) compared to HHI (representation to the self or another person)?
  • How to design systems that promote trust in the system and protect the privacy of the user?
  • What constitutes opportune support for a BCI-based intelligent sensor? In other words, how can the interface adapt to information about the user such that the user feels supported rather than distracted?
  • What is the user experience of HCI, HRI and HHI enhanced through BCIs as intelligent sensors?
  • What are the ethical, legal and societal implications of such technologies? And how can we address these issues in a timely manner?

We solicit papers, demonstrators, videos or design descriptions of possible demonstrators that address the above challenges. Demonstrators and videos should be accompanied by a paper explaining the design. Descriptions of possible demonstrators can be presented through a poster.
Accepted papers will be included in the ICMI conference proceedings, which will be published by ACM as part of their series of International Conference Proceedings. As such the ICMI proceedings will have an ISBN number assigned to it and all papers will have a unique DOI and URL assigned to them. Moreover, all accepted papers will be included in the ACM digital library.

Important dates

Deadline for submission: June 15, 2012
Notification of acceptance: July 7, 2012
Final paper: August 15, 2012

Grand Challenge Website:

Mind-controlled robot allows a quadriplegic patient to move virtually in space

Researchers at the Federal Institute of Technology in Lausanne, Switzerland (EPFL) have successfully demonstrated a robot controlled by the mind of a partially quadriplegic patient in a hospital 62 miles away. The EPFL brain-computer interface system does not require invasive neural implants, since it is based on a special EEG cap fitted with electrodes that record the patient’s neural signals. The task of the patient is to imagine moving his paralyzed fingers, and this input is then translated by the BCI system into commands for the robot.

Apr 20, 2012

Mental workload during brain-computer interface training

Mental workload during brain-computer interface training.

Ergonomics. 2012 Apr 16;

Authors: Felton EA, Williams JC, Vanderheiden GC, Radwin RG

Abstract. It is not well understood how people perceive the difficulty of performing brain-computer interface (BCI) tasks, which specific aspects of mental workload contribute the most, and whether there is a difference in perceived workload between participants who are able-bodied and disabled. This study evaluated mental workload using the NASA Task Load Index (TLX), a multi-dimensional rating procedure with six subscales: Mental Demands, Physical Demands, Temporal Demands, Performance, Effort, and Frustration. Able-bodied and motor disabled participants completed the survey after performing EEG-based BCI Fitts' law target acquisition and phrase spelling tasks. The NASA-TLX scores were similar for able-bodied and disabled participants. For example, overall workload scores (range 0-100) for 1D horizontal tasks were 48.5 (SD = 17.7) and 46.6 (SD = 10.3), respectively. The TLX can be used to inform the design of BCIs that will have greater usability by evaluating subjective workload between BCI tasks, participant groups, and control modalities. Practitioner Summary: Mental workload of brain-computer interfaces (BCI) can be evaluated with the NASA Task Load Index (TLX). The TLX is an effective tool for comparing subjective workload between BCI tasks, participant groups (able-bodied and disabled), and control modalities. The data can inform the design of BCIs that will have greater usability.
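
For reference, an overall NASA-TLX score is computed from the six subscale ratings either as an unweighted mean ("raw TLX") or as a mean weighted by tallies from the 15 pairwise comparisons of subscales. The example ratings and weights below are made up.

```python
# Small sketch of how an overall NASA-TLX workload score is derived from the six
# subscales named above. Example ratings and pairwise-comparison tallies are invented.
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings: dict) -> float:
    """Unweighted mean of the six 0-100 subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

def weighted_tlx(ratings: dict, tally: dict) -> float:
    """Weighted mean; tally[s] = times s was chosen in the 15 pairwise comparisons."""
    assert sum(tally.values()) == 15, "15 pairwise comparisons expected"
    return sum(ratings[s] * tally[s] for s in SUBSCALES) / 15.0

if __name__ == "__main__":
    ratings = {"mental": 70, "physical": 20, "temporal": 45,
               "performance": 40, "effort": 65, "frustration": 50}
    tally = {"mental": 5, "physical": 0, "temporal": 2,
             "performance": 3, "effort": 4, "frustration": 1}
    print(raw_tlx(ratings), weighted_tlx(ratings, tally))   # 48.33..., 58.0
```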

Mar 31, 2012

Barriers to and mediators of brain-computer interface user acceptance

Barriers to and mediators of brain-computer interface user acceptance: focus group findings.

Ergonomics. 2012 Mar 29

Authors: Blain-Moraes S, Schaff R, Gruis KL, Huggins JE, Wren PA

Abstract. Brain-computer interfaces (BCI) are designed to enable individuals with severe motor impairments such as amyotrophic lateral sclerosis (ALS) to communicate and control their environment. A focus group was conducted with individuals with ALS (n=8) and their caregivers (n=9) to determine the barriers to and mediators of BCI acceptance in this population. Two key categories emerged: personal factors and relational factors. Personal factors, which included physical, physiological and psychological concerns, were less important to participants than relational factors, which included corporeal, technological and social relations with the BCI. The importance of these relational factors was analysed with respect to published literature on actor-network theory (ANT) and disability, and concepts of voicelessness and personhood. Future directions for BCI research are recommended based on the emergent focus group themes. Practitioner Summary: This manuscript explores human factor issues involved in designing and evaluating brain-computer interface (BCI) systems for users with severe motor disabilities. Using participatory research paradigms and qualitative methods, this work draws attention to personal and relational factors that act as barriers to, or mediators of, user acceptance of this technology.

Mar 11, 2012

Augmenting cognition: old concept, new tools

The increasing miniaturization and computing power of information technology devices allow new ways of interaction between human brains and computers, progressively blurring the boundaries between man and machine. An example is provided by brain-computer interface systems, which allow users to use their brain to control the behavior of a computer or of an external device such as a robotic arm (in this latter case, we speak of “neuroprosthetics”).

 

The idea of using information technologies to augment cognition, however, is not new, dating back to the 1950s and 1960s. One of the first to write about this concept was the British psychiatrist William Ross Ashby.

In his Introduction to Cybernetics (1956), he described intelligence as the “power of appropriate selection,” which could be amplified by means of technologies in the same way that physical power is amplified. A second major conceptual contribution towards the development of cognitive augmentation was provided a few years later by computer scientist and Internet pioneer Joseph Licklider, in a paper entitled Man-Computer Symbiosis (1960).

In this article, Licklider envisions the development of computer technologies that will enable users “to think in interaction with a computer in the same way that you think with a colleague whose competence supplements your own.” According to his vision, the rise of computer networks would make it possible to connect millions of human minds within a “‘thinking center’ that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval.” This view represents a departure from the prevailing Artificial Intelligence approach of that time: instead of creating an artificial brain, Licklider focused on the possibility of developing new forms of interaction between humans and information technologies, with the aim of extending human intelligence.

A similar view was proposed in the same years by another computer visionary, Douglas Engelbart, in his famous 1962 report entitled Augmenting Human Intellect: A Conceptual Framework.

In this report, Engelbart defines the goal of intelligence augmentation as “increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble (…) We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human ‘feel for a situation’ usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.”

These “electronic aids” nowadays include all kinds of hardware and software computing devices used, for example, to store information in external memories, to process complex data, to perform routine tasks and to support decision making. However, today the concept of cognitive augmentation is not limited to the amplification of human intellectual abilities through external hardware. As recently noted by Nick Bostrom and Anders Sandberg (Sci Eng Ethics 15:311–341, 2009), “What is new is the growing interest in creating intimate links between the external systems and the human user through better interaction. The software becomes less an external tool and more of a mediating ‘‘exoself’’. This can be achieved through mediation, embedding the human within an augmenting ‘‘shell’’ such as wearable computers (…) or virtual reality, or through smart environments in which objects are given extended capabilities” (p. 320).

At the forefront of this trend is neurotechnology, an emerging research and development field which includes technologies that are specifically designed with the aim of improving brain function. Examples of neurotechnologies include brain training games such as BrainAge and programs like Fast ForWord, but also neurodevices used to monitor or regulate brain activity, such as deep brain stimulators (DBS), and smart prosthetics for the replacement of impaired sensory systems (e.g., cochlear or retinal implants).

Clearly, the vision of neurotechnology is not free of issues. The more powerful and sophisticated these technologies become, the more attention should be devoted to understanding the socio-economic, legal and ethical implications of their applications in various fields, from medicine to neuromarketing.


 

Jun 05, 2011

Human Computer Confluence

Human Computer Confluence (HC-CO) is an ambitious initiative recently launched by the European Commission under the Future and Emerging Technologies (FET) program, which fosters projects that investigate and demonstrate new possibilities “emerging at the confluence between the human and technological realms” (source: HC-CO website, EU Commission).

Such projects will examine new modalities for individual and group perception, actions and experience in augmented, virtual spaces. In particular, such virtual spaces would span the virtual reality continuum, also extending to purely synthetic but believable representation of massive, complex and dynamic data. HC-CO also fosters inter-disciplinary research (such as Presence, neuroscience, psychophysics, prosthetics, machine learning, computer science and engineering) towards delivering unified experiences and inventing radically new forms of perception/action.

HC-CO brings together ideas stemming from two series of Presence projects (the complete list is available here) with a vision of new forms of interaction and of new types of information spaces to interact with. It will develop the science and technologies necessary to ensure effective, even transparent, bidirectional communication between humans and computers, which will in turn deliver a huge set of applications: from today's Presence concepts to new senses, to new perceptive capabilities dealing with more abstract information spaces, to the social impact of such communication-enabling technologies. Inevitably, these technologies call into question the notion of the interface between the human and the technological realm, and thus, in a fundamental way, the nature of both.

The long-term implications can be profound and need to be considered from an ethical/societal point of view. HC-CO is, however, not a programme on human augmentation. It does not aim to create a super-human. The idea of confluence is to study what can be done by bringing new types of technologically enabled interaction modalities in between the human and a range of virtual (not necessarily naturalistic) realms. Its ambition is to bring our best understanding from human sciences into future and emerging technologies for a new and purposeful human computer symbiosis.

HC-CO is conceptually broken down into the following themes:

  • HC-CO Data. On-line perception and interaction with massive volumes of data: new methods to stimulate and use human sensory perception and cognition to interpret massive volumes of data in real time to enable assimilation, understanding and interaction with informational spaces. Research should find new ways to exploit human factors (sensory, perceptual and cognitive aspects), including the selection of the most effective sensory modalities, for data exploration. Although not explicitly mentioned, non-sensorial pathways, i.e., direct brain to computer and computer to brain communication could be explored.
  • HC-CO Transit. Unified experience, emerging from the unnoticeable transition from physical to augmented or virtual reality: new methods and concepts towards unobtrusive mixed or virtual reality environment (multi-modal displays, tracking systems, virtual representations...), and scenarios to support entirely unobtrusive interaction. Unobtrusiveness also applies to virtual representations, their dynamics, and the feedback received. Here the challenge is both technological and scientific, spanning human cognition, human machine interaction and machine intelligence disciplines.
  • HC-CO Sense. New forms of perception and action: invent and demonstrate new forms of interaction with the real world, virtual models or abstract information by provoking a mapping from an artificial medium to appropriate sensory modalities or brain regions. This research should reinforce data perception and unified experience by augmenting the human interaction capabilities and awareness in virtual spaces.

In sum, HC-CO is an emerging R&D field that holds the potential to revolutionize the way we interact with computers. Standing at the crossroads between cognitive science, computer science and artificial intelligence, HC-CO can provide the cyberpsychology and cybertherapy community with fresh concepts and interesting new tools to apply in both research and clinical domains.

More to explore:

  • HC-CO initiative: The official EU website of the HC-CO initiative, which describes the broad objectives of this emerging research field.
  • HC2 Project: The horizontal character of HC-CO makes it a fascinating and fertile interdisciplinary field, but it can also compromise its growth, with researchers scattered across disciplines and groups worldwide. For this reason, a coordination activity has been created to promote disciplinary connections, identity building and integration, while defining future research, education and policy directions at the regional, national, European and international level. This project is HC2, a three-year Coordination Action funded by the FP7 FET Proactive scheme. The consortium will draw on a wide network of researchers and stakeholders to achieve four key objectives: a) stimulate, structure and support the research community, promoting identity building; b) consolidate research agendas with special attention to the interdisciplinary aspects of HC-CO; c) enhance the public understanding of HC-CO and foster early contact of researchers with high-tech SMEs and other industry players; d) establish guidelines for the definition of new educational curricula to prepare the next generation of HC-CO researchers.
  • CEED Project: Funded by the HC-CO initiative, the Collective Experience of Empathic Data Systems (CEEDs) project aims to develop “novel, integrated technologies to support human experience, analysis and understanding of very large datasets”. CEEDS will develop innovative tools to exploit theories showing that discovery is the identification of patterns in complex data sets by the implicit information processing capabilities of the human brain. Implicit human responses will be identified by the CEEDs system’s analysis of its sensing systems, tuned to users’ bio-signals and non-verbal behaviours. By associating these implicit responses with different features of massive datasets, the CEEDs system will guide users’ discovery of patterns and meaning within the datasets.
  • VERE Project: VERE - Virtual Embodiment and Robotic Re-Embodiment – is another large project funded by the HC-CO initiative, which aims at “dissolving the boundary between the human body and surrogate representations in immersive virtual reality and physical reality”. Dissolving the boundary means that people have the illusion that their surrogate representation is their own body, and act and have thoughts that correspond to this. The work in VERE may be thought of as applied presence research and applied cognitive neuroscience.
