Apr 15, 2014

A Hybrid Brain Computer Interface System Based on the Neurophysiological Protocol and Brain-actuated Switch for Wheelchair Control

A Hybrid Brain Computer Interface System Based on the Neurophysiological Protocol and Brain-actuated Switch for Wheelchair Control.

J Neurosci Methods. 2014 Apr 5;

Authors: Cao L, Li J, Ji H, Jiang C

BACKGROUND: Brain-computer interfaces (BCIs) are developed to translate brain waves into machine instructions for the control of external devices. Recently, hybrid BCI systems have been proposed for multi-degree control of a real wheelchair, to improve the overall efficiency of traditional BCIs. However, it is difficult for existing hybrid BCIs to implement multi-dimensional control in one command cycle.
NEW METHOD: This paper proposes a novel hybrid BCI system that combines motor imagery (MI)-based bio-signals and steady-state visual evoked potentials (SSVEPs) to control the speed and direction of a real wheelchair synchronously. Furthermore, a switch based on hybrid modalities is designed, for the first time, to turn the wheelchair's control system on and off.
RESULTS: Two experiments were performed to assess the proposed BCI system. One was used for training and the other involved a wheelchair control task in a real environment. All subjects completed these tasks successfully, and no collisions occurred in the real wheelchair control experiment.
COMPARISON WITH EXISTING METHOD(S): The protocol of our BCI provided many more control commands than those of previous MI- and SSVEP-based BCIs. Compared with other BCI wheelchair systems, the superior path length optimality ratio validated the high efficiency of our control strategy.
CONCLUSIONS: The results validated the efficiency of our hybrid BCI system in controlling the direction and speed of a real wheelchair, as well as the reliability of the hybrid-signal-based switch control.
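
For readers who want a concrete picture of the protocol, here is a minimal, purely illustrative sketch (not the authors' implementation) of how a motor-imagery decision and an SSVEP decision could be fused into one wheelchair command per control cycle, with MI selecting direction and the attended flicker frequency selecting speed; the class labels, frequencies and mappings below are assumptions.

```python
# Illustrative sketch (not the authors' implementation): fusing a motor-imagery
# decision and an SSVEP decision into one wheelchair command per control cycle.

from dataclasses import dataclass

@dataclass
class WheelchairCommand:
    direction: str   # "left", "right", or "straight"
    speed: str       # "slow", "medium", or "fast"

def fuse_commands(mi_label: str, ssvep_freq_hz: float) -> WheelchairCommand:
    """Map an MI class to direction and an attended SSVEP frequency to speed.

    The class labels and the frequency-to-speed mapping are assumptions made
    for illustration only.
    """
    direction = {"left_hand": "left", "right_hand": "right", "rest": "straight"}[mi_label]
    speed_map = {10.0: "slow", 12.0: "medium", 15.0: "fast"}
    speed = speed_map.get(ssvep_freq_hz, "slow")
    return WheelchairCommand(direction, speed)

if __name__ == "__main__":
    # One command cycle: the MI decoder says "left hand", the SSVEP decoder
    # says the user attended the 12 Hz stimulus.
    print(fuse_commands("left_hand", 12.0))
    # -> WheelchairCommand(direction='left', speed='medium')
```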

Mar 02, 2014

3D Thought controlled environment via Interaxon

In this demo video, artist Alex McLeod shows an environment he designed for Interaxon to use at CES 2011 (interaxon.ca/CES#).

The glasses display the scene in 3D, and attached sensors read users' brain states, which control elements of the scene.

3D Thought controlled environment via Interaxon from Alex McLeod on Vimeo.

Feb 11, 2014

Neurocam

Via KurzweilAI.net

Keio University scientists have developed a “neurocam” — a wearable camera system that detects emotions, based on an analysis of the user’s brainwaves.

The hardware is a combination of NeuroSky's MindWave Mobile and a customized brainwave sensor.

The algorithm is based on measures of “interest” and “like” developed by Professor Mitsukura and the neurowear team.

The user's interest is quantified on a scale of 0 to 100. The camera automatically records five-second clips, with timestamp and location, whenever the interest value exceeds 60; the clips can be replayed later and shared on Facebook.
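
For illustration only (neurowear has not published the neurocam's code), the recording rule described above amounts to a simple threshold trigger; the `record_clip` callback below is a hypothetical stand-in for the camera API.

```python
# Minimal sketch of the reported behaviour, not neurowear's actual code:
# record a five-second clip whenever the "interest" score exceeds 60.

from datetime import datetime

INTEREST_THRESHOLD = 60   # on the reported 0-100 scale
CLIP_SECONDS = 5

def maybe_record(interest: int, location: tuple, record_clip) -> None:
    """Trigger a timestamped, geotagged clip when interest passes the threshold.

    `record_clip` stands in for whatever camera API the device uses; it is a
    hypothetical placeholder, not a documented interface.
    """
    if interest > INTEREST_THRESHOLD:
        record_clip(duration=CLIP_SECONDS,
                    timestamp=datetime.now().isoformat(),
                    location=location)

if __name__ == "__main__":
    maybe_record(72, (35.68, 139.76),
                 lambda **clip: print("recording clip:", clip))
```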

The researchers plan to make the device smaller, more comfortable, and fashionable to wear.

Feb 02, 2014

A low-cost sonification system to assist the blind

Via KurzweilAI.net

An improved assistive technology system for the blind that uses sonification (conveying visual information through sound) has been developed by Universidad Carlos III de Madrid (UC3M) researchers, with the goal of replacing costly, bulky current systems.

Called Assistive Technology for Autonomous Displacement (ATAD), the system includes a stereo vision processor that measures the disparity between images captured by two cameras placed slightly apart (for depth data) and calculates the distance to each point in the scene.

It then transmits the information to the user by means of a sound code that conveys the position of and distance to the different obstacles, using a small stereo audio amplifier and bone-conduction headphones.
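
As a rough sketch of the two stages described above, the snippet below derives distance from stereo disparity with the standard pinhole relation and maps an obstacle's horizontal position and distance to a stereo pan and loudness; the focal length, baseline and audible range are assumptions, not ATAD's actual parameters.

```python
# Hedged sketch of the two steps described above; ATAD's actual sound code and
# camera geometry are not given here, so the numbers are assumptions.

def distance_from_disparity(disparity_px: float,
                            focal_length_px: float = 700.0,
                            baseline_m: float = 0.06) -> float:
    """Standard stereo relation: distance = focal_length * baseline / disparity."""
    return focal_length_px * baseline_m / max(disparity_px, 1e-6)

def sound_code(x_norm: float, distance_m: float) -> dict:
    """Map horizontal position to stereo panning and distance to loudness.

    x_norm: 0.0 (far left of the image) to 1.0 (far right).
    Nearer obstacles are louder; this mapping is illustrative only.
    """
    loudness = max(0.0, min(1.0, 1.0 - distance_m / 5.0))  # silent beyond ~5 m
    return {"pan_left": 1.0 - x_norm, "pan_right": x_norm, "loudness": loudness}

if __name__ == "__main__":
    d = distance_from_disparity(disparity_px=30.0)   # ~1.4 m with the assumed optics
    print(round(d, 2), sound_code(x_norm=0.25, distance_m=d))
```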

Assistive Technology for Autonomous Displacement (ATAD) block diagram (credit: P. Revuelta Sanz et al.)

Jan 25, 2014

Chris Eliasmith – How to Build a Brain

Via Futuristic news

He’s the creator of “Spaun”, the world’s largest functional brain simulation. Can he really make headway in mimicking the human brain?

Chris Eliasmith has cognitive flexibility on the brain. How do people manage to walk, chew gum and listen to music all at the same time? What is our brain doing as it switches between these tasks, and how do we use the same components in our heads to do all those different things? These are questions that Chris and his team’s Semantic Pointer Architecture Unified Network (Spaun) are determined to answer. Spaun is currently the world’s largest functional brain simulation, and is unique because it’s the first model that can actually emulate behaviours while also modeling the physiology that underlies them.

 

This groundbreaking work was published in Science, and has been featured by CNN, BBC, Der Spiegel, Popular Science, The Economist and CBC. He is co-author of Neural Engineering, which describes a framework for building biologically realistic neural models, and his new book, How to Build a Brain, applies those methods to large-scale cognitive brain models.

Chris holds a Canada Research Chair in Theoretical Neuroscience at the University of Waterloo. He is also Director of Waterloo’s Centre for Theoretical Neuroscience, is jointly appointed in the Philosophy and Systems Design Engineering departments, and is cross-appointed to Computer Science.
For more on Chris, visit http://arts.uwaterloo.ca/~celiasmi/
Source: TEDxTalks

Dec 24, 2013

Speaking and cognitive distractions during EEG-based brain control of a virtual neuroprosthesis-arm

Speaking and cognitive distractions during EEG-based brain control of a virtual neuroprosthesis-arm.

J Neuroeng Rehabil. 2013 Dec 21;10(1):116

Authors: Foldes ST, Taylor DM

BACKGROUND: Brain-computer interface (BCI) systems have been developed to provide paralyzed individuals the ability to command the movements of an assistive device using only their brain activity. BCI systems are typically tested in a controlled laboratory environment where the user is focused solely on the brain-control task. However, for practical use in everyday life, people must be able to use their brain-controlled device while mentally engaged with the cognitive responsibilities of daily activities and while compensating for any inherent dynamics of the device itself. BCIs that use electroencephalography (EEG) for movement control are often assumed to require significant mental effort, thus preventing users from thinking about anything else while using their BCI. This study tested the impact of cognitive load as well as speaking on the ability to use an EEG-based BCI.
FINDINGS: Six participants controlled the two-dimensional (2D) movements of a simulated neuroprosthesis-arm under three different levels of cognitive distraction. The two higher cognitive load conditions also required simultaneous speaking during BCI use. On average, movement performance declined during higher levels of cognitive distraction, but only by a limited amount. Movement completion time increased by 7.2%, the percentage of targets successfully acquired declined by 11%, and path efficiency declined by 8.6%. Only the declines in percentage of targets acquired and path efficiency were statistically significant (p < 0.05).
CONCLUSION: People who have relatively good movement control of an EEG-based BCI may be able to speak and perform other cognitively engaging activities with only a minor drop in BCI-control performance.
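
For readers unfamiliar with the path-efficiency metric mentioned in the findings, a common definition (assumed here; the study may use a variant) is the straight-line distance to the target divided by the distance the cursor actually travelled, so lower values mean a more meandering trajectory:

```python
# Illustration of one common definition of path efficiency (an assumption here,
# not necessarily the exact metric used in the study): straight-line distance
# divided by the distance actually travelled toward the target.

import math

def path_efficiency(path: list[tuple[float, float]]) -> float:
    """Return straight-line distance / travelled distance for a 2D trajectory."""
    travelled = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    optimal = math.dist(path[0], path[-1])
    return optimal / travelled if travelled > 0 else 0.0

if __name__ == "__main__":
    direct = [(0, 0), (1, 0), (2, 0)]                     # straight to the target
    wandering = [(0, 0), (1, 0.5), (1.5, -0.4), (2, 0)]   # detours along the way
    print(path_efficiency(direct))      # 1.0
    print(path_efficiency(wandering))   # < 1.0
```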

Case hand prosthesis with sense of touch allows amputees to feel

Via Medgadget

There have been a few attempts at simulating a sense of touch in prosthetic hands, but a recently released video from Case Western Reserve University demonstrates newly developed haptic technology that looks convincingly impressive. In the video, an amputee, blindfolded and wearing headphones that block all hearing, pulls the stems off cherries with a prosthetic hand that has a sensor on the forefinger. The first part of the video shows him doing it with the sensor turned off, and then with it activated.

For a picture of the electrode technology, please visit: http://www.flickr.com/photos/tylerlab/10075384624/

NeuroOn mask improves sleep and helps manage jet lag

Via Medgadget

A group of Polish engineers is working on a smart sleeping mask that they hope will allow people to get more out of their resting time, as well as allow for unusual sleeping schedules that would particularly benefit those who are often on call. The NeuroOn mask will have an embedded EEG for brain wave monitoring, EMG for detecting muscle motion on the face, and sensors that can track whether your eyes are moving and whether you are going through REM sleep. The team is currently raising money on Kickstarter, where you can pre-order your own NeuroOn once it's developed into a final product.

Dec 21, 2013

New Scientist: Mind-reading light helps you stay in the zone

Re-blogged from New Scientist

With a click of a mouse, I set a path through the mountains for drone #4. It's one of five fliers under my control, all now heading to different destinations. Routes set, their automation takes over and my mind eases, bringing a moment of calm. But the machine watching my brain notices the lull, decides I can handle more, and drops a new drone in the south-east corner of the map.

The software is keeping my brain in a state of full focus known as flow, or being "in the zone". Too little work, and the program notices my attention start to flag and gives me more drones to handle. If I start to become a frazzled air traffic controller, the computer takes one of the drones off my plate, usually without me even noticing.

The system monitors the workload by pulsing light into my prefrontal cortex 12 times a second. The amount of light that oxygenated and deoxygenated haemoglobin in the blood there absorbs and reflects gives an indication of how mentally engaged I am. Harder brain work calls for more oxygenated blood, and changes how the light is absorbed. Software interprets the signal from this functional near infrared spectroscopy (fNIRS) and uses it to assign me the right level of work.
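
For context, fNIRS intensity changes are commonly converted to oxygenated (HbO) and deoxygenated (HbR) haemoglobin changes with the modified Beer-Lambert law, and a rise in prefrontal HbO is read as higher engagement. The sketch below illustrates that conversion and a crude workload rule; the extinction coefficients, path length and threshold are placeholder values, not the Tufts system's parameters.

```python
# Sketch of how fNIRS intensity changes are commonly converted into oxygenated
# (HbO) and deoxygenated (HbR) haemoglobin changes via the modified Beer-Lambert
# law. The extinction coefficients, path length and threshold below are
# placeholder values for illustration, not the actual system's parameters.

import numpy as np

# Assumed extinction coefficients [HbO, HbR] at two wavelengths (arbitrary units).
EXTINCTION = np.array([[0.15, 0.80],    # ~690 nm: HbR absorbs more
                       [0.90, 0.20]])   # ~830 nm: HbO absorbs more
PATHLENGTH_CM = 3.0 * 6.0               # source-detector distance x assumed DPF

def hb_changes(intensity: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Return [delta_HbO, delta_HbR] from intensities at the two wavelengths."""
    delta_od = -np.log10(intensity / baseline)          # optical density change
    return np.linalg.solve(EXTINCTION * PATHLENGTH_CM, delta_od)

def workload_is_high(delta_hbo: float, threshold: float = 0.02) -> bool:
    """Crude rule: a rise in HbO above a threshold means 'assign fewer drones'."""
    return delta_hbo > threshold

if __name__ == "__main__":
    d_hbo, d_hbr = hb_changes(np.array([0.93, 0.90]), np.array([1.0, 1.0]))
    print(round(d_hbo, 4), round(d_hbr, 4), workload_is_high(d_hbo))
```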

Dan Afergan, who is running the study at Tufts University in Medford, Massachusetts, points to an on-screen readout as I play. "It's predicting high workload with very high certainty, and, yup, number three just dropped off," he says over my shoulder. Sure enough, I'm now controlling just five drones again.

To achieve this mind-monitoring, I'm hooked up to a bulky rig of fibre-optic cables and have an array of LEDs stuck to my forehead. The cables stream off my head into a box that converts light signals to electrical ones. These fNIRS systems don't have to be this big, though. A team led by Sophie Piper at Charité University of Medicine in Berlin, Germany, tested a portable device on cyclists in Berlin earlier this year – the first time fNIRS has been done during an outdoor activity.

Afergan doesn't plan to be confined to the lab for long either. He's studying ways to integrate brain-activity measuring into the Google Glass wearable computer. A lab down the hall already has a prototype fNIRS system on a chip that could, with a few improvements, be built into a Glass headset. "Glass is already on your forehead. It's really not much of a stretch to imagine building fNIRS into the headband," he says.

Afergan is working on a Glass navigation system for use in cars that responds to a driver's level of focus. When they are concentrating hard, Glass will show only basic instructions, or perhaps just give audio directions. When the driver is focusing less, on a straight stretch of road perhaps, Glass will provide more details of the route. The team also plans to adapt Google Now – the company's digital assistant software – for Glass so that it only gives you notifications when your mind has room for them.

Peering into drivers' minds will become increasingly important, says Erin Solovey, a computer scientist at Drexel University in Philadelphia, Pennsylvania. Many cars have automatic systems for adaptive cruise control, keeping in the right lane and parking. These can help, but they also bring the risk that drivers may not stay focused on the task at hand, because they are relying on the automation.

Systems using fNIRS could monitor a driver's focus and adjust the level of automation to keep drivers safely engaged with what the car is doing, she says.

This article appeared in print under the headline "Stay in the zone"

Dec 08, 2013

Real-Time fMRI Pattern Decoding and Neurofeedback Using FRIEND: An FSL-Integrated BCI Toolbox

Real-Time fMRI Pattern Decoding and Neurofeedback Using FRIEND: An FSL-Integrated BCI Toolbox.

PLoS One. 2013;8(12):e81658

Authors: Sato JR, Basilio R, Paiva FF, Garrido GJ, Bramati IE, Bado P, Tovar-Moll F, Zahn R, Moll J

Abstract. The demonstration that humans can learn to modulate their own brain activity based on feedback of neurophysiological signals opened up exciting opportunities for fundamental and applied neuroscience. Although EEG-based neurofeedback has been long employed both in experimental and clinical investigation, functional MRI (fMRI)-based neurofeedback emerged as a promising method, given its superior spatial resolution and ability to gauge deep cortical and subcortical brain regions. In combination with improved computational approaches, such as pattern recognition analysis (e.g., Support Vector Machines, SVM), fMRI neurofeedback and brain decoding represent key innovations in the field of neuromodulation and functional plasticity. Expansion in this field and its applications critically depend on the existence of freely available, integrated and user-friendly tools for the neuroimaging research community. Here, we introduce FRIEND, a graphic-oriented user-friendly interface package for fMRI neurofeedback and real-time multivoxel pattern decoding. The package integrates routines for image preprocessing in real-time, ROI-based feedback (single-ROI BOLD level and functional connectivity) and brain decoding-based feedback using SVM. FRIEND delivers an intuitive graphic interface with flexible processing pipelines involving optimized procedures embedding widely validated packages, such as FSL and libSVM. In addition, a user-defined visual neurofeedback module allows users to easily design and run fMRI neurofeedback experiments using ROI-based or multivariate classification approaches. FRIEND is open-source and free for non-commercial use. Processing tutorials and extensive documentation are available.
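
As a rough illustration of the decoding-based feedback FRIEND supports, the sketch below trains a linear SVM on labelled volumes and uses the signed distance of each new volume from the decision boundary as a feedback value; scikit-learn stands in for the embedded libSVM routines, and the data shapes and condition names are invented for the example.

```python
# Minimal sketch of the general idea behind real-time multivoxel pattern
# decoding: train an SVM on labelled brain volumes, then classify each newly
# acquired volume and use its distance from the decision boundary as feedback.
# scikit-learn stands in for libSVM; shapes and condition names are assumptions.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_VOXELS = 500

# Training data: 40 volumes per condition (e.g., "regulate" vs. "rest"),
# simulated here with a small mean shift between the two conditions.
regulate = rng.normal(0.3, 1.0, size=(40, N_VOXELS))
rest = rng.normal(0.0, 1.0, size=(40, N_VOXELS))
X_train = np.vstack([regulate, rest])
y_train = np.array([1] * 40 + [0] * 40)

clf = SVC(kernel="linear").fit(X_train, y_train)

def feedback_for_volume(volume: np.ndarray) -> float:
    """Signed distance from the decision boundary, usable as a feedback value."""
    return float(clf.decision_function(volume.reshape(1, -1))[0])

if __name__ == "__main__":
    new_volume = rng.normal(0.3, 1.0, size=N_VOXELS)   # one incoming volume
    print(feedback_for_volume(new_volume))              # > 0 suggests "regulate"
```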

Dec 02, 2013

A genetically engineered weight-loss implant

Via KurzweilAI

ETH-Zurich biotechnologists have constructed an implantable genetic regulatory circuit that monitors blood-fat levels. In response to excessive levels, it produces a messenger substance that signals satiety (fullness) to the body. Tests on obese mice revealed that this helps them lose weight.


Genetically modified cells implanted in the body monitor the blood-fat level. If it is too high, they produce a satiety hormone. The animal stops eating and loses weight. (Credit: Martin Fussenegger / ETH Zurich / Jackson Lab)

 

Nov 16, 2013

NeuroPace Gets FDA Pre-Market Approval for RNS Stimulator

Via Medgadget

NeuroPace has received FDA pre-market approval for the NeuroPace RNS System, used to treat medically refractory partial epilepsy. The battery-powered device is implanted in the cranium and monitors electrical activity in the brain. If abnormal activity is detected, electrical impulses are sent to the seizure focus in the brain via leads, helping to prevent the onset of a seizure. The RNS System also comes with a programmer that lets physicians non-invasively set the detection and stimulation parameters for the implanted device, view the patient's electrocorticogram (ECoG) in real time, and upload previously recorded ECoGs stored on the RNS implant.
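
The detection and stimulation algorithms themselves are proprietary and physician-programmable, but the closed-loop principle can be sketched with a simple "line length" feature, a measure often used in seizure detection; everything below, including the threshold, is illustrative rather than NeuroPace's method.

```python
# Closed-loop principle only, not NeuroPace's proprietary algorithm: a simple
# line-length detector (sum of absolute sample-to-sample differences) triggers
# stimulation when it exceeds a physician-set threshold. Numbers are illustrative.

import numpy as np

def line_length(ecog_window: np.ndarray) -> float:
    """Sum of absolute differences between consecutive ECoG samples."""
    return float(np.sum(np.abs(np.diff(ecog_window))))

def closed_loop_step(ecog_window: np.ndarray, threshold: float, stimulate) -> bool:
    """Detect abnormal activity in one window and stimulate if it is found."""
    if line_length(ecog_window) > threshold:
        stimulate()   # placeholder for delivering pulses through the leads
        return True
    return False

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    quiet = rng.normal(0, 1, 250)                    # low-amplitude background ECoG
    seizure_like = 20 * np.sin(np.linspace(0, 200, 250)) + rng.normal(0, 1, 250)
    for window in (quiet, seizure_like):
        closed_loop_step(window, threshold=400.0,
                         stimulate=lambda: print("stimulation delivered"))
```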

Results from clinical studies show significant benefits for patients, with a 37.9% reduction in seizure frequency for subjects with active implants. Follow-up with patients two years post-implant showed that over half experienced a reduction in seizures of 50% or more.

Oct 31, 2013

Brain Decoding

Via IEET

Neuroscientists are starting to decipher what a person is seeing, remembering and even dreaming just by looking at their brain activity. They call it brain decoding.  

In this Nature Video, we see three different uses of brain decoding, including a virtual reality experiment that could use brain activity to figure out whether someone has been to the scene of a crime.

Smart glasses that help the blind see

Via New Scientist

They look like snazzy sunglasses, but these computerised specs don't block the sun – they make the world a brighter place for people with partial vision.

These specs do more than bring blurry things into focus. This prototype pair of smart glasses translates visual information into images that blind people can see.

Many people who are registered as blind can perceive some light and motion. The glasses, developed by Stephen Hicks of the University of Oxford, are an attempt to make that residual vision as useful as possible.

They use two cameras, or a camera and an infrared projector, that can detect the distance to nearby objects. They also have a gyroscope, a compass and GPS to help orient the wearer.

The collected information can be translated into a variety of images on the transparent OLED displays, depending on what is most useful to the person sporting the shades. For example, objects can be made clearer against the background, or the distance to obstacles can be indicated by the varying brightness of an image.
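
As a hedged sketch of the distance-to-brightness idea (the prototype's actual mapping is not given here), the snippet below renders nearer obstacles brighter on an 8-bit display scale, with an assumed working range of 0.5 to 4 metres.

```python
# Hedged sketch of the distance-to-brightness idea described above, not the
# Oxford prototype's actual mapping: nearer obstacles are rendered brighter.

def brightness_for_distance(distance_m: float,
                            near_m: float = 0.5, far_m: float = 4.0) -> int:
    """Map distance to an 8-bit display brightness: near -> 255, far -> 0.

    The 0.5 m / 4 m range is an assumption chosen for illustration.
    """
    if distance_m <= near_m:
        return 255
    if distance_m >= far_m:
        return 0
    fraction = (far_m - distance_m) / (far_m - near_m)
    return int(round(255 * fraction))

if __name__ == "__main__":
    for d in (0.4, 1.0, 2.0, 3.5, 5.0):
        print(d, "m ->", brightness_for_distance(d))
```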

Hicks has won the Royal Society's Brian Mercer Award for Innovation for his work on the smart glasses. He plans to use the £50,000 prize money to add object and text recognition to the glasses' abilities.

 

Sep 09, 2013

Effortless awareness: using real time neurofeedback to investigate correlates of posterior cingulate cortex activity in meditators' self-report

Effortless awareness: using real time neurofeedback to investigate correlates of posterior cingulate cortex activity in meditators' self-report.

Front Hum Neurosci. 2013;7:440

Authors: Garrison KA, Santoyo JF, Davis JH, Thornhill TA, Kerr CE, Brewer JA

Neurophenomenological studies seek to utilize first-person self-report to elucidate cognitive processes related to physiological data. Grounded theory offers an approach to the qualitative analysis of self-report, whereby theoretical constructs are derived from empirical data. Here we used grounded theory methodology (GTM) to assess how the first-person experience of meditation relates to neural activity in a core region of the default mode network-the posterior cingulate cortex (PCC). We analyzed first-person data consisting of meditators' accounts of their subjective experience during runs of a real time fMRI neurofeedback study of meditation, and third-person data consisting of corresponding feedback graphs of PCC activity during the same runs. We found that for meditators, the subjective experiences of "undistracted awareness" such as "concentration" and "observing sensory experience," and "effortless doing" such as "observing sensory experience," "not efforting," and "contentment," correspond with PCC deactivation. Further, the subjective experiences of "distracted awareness" such as "distraction" and "interpreting," and "controlling" such as "efforting" and "discontentment," correspond with PCC activation. Moreover, we derived several novel hypotheses about how specific qualities of cognitive processes during meditation relate to PCC activity, such as the difference between meditation and "trying to meditate." These findings offer novel insights into the relationship between meditation and mind wandering or self-related thinking and neural activity in the default mode network, driven by first-person reports.

Aug 07, 2013

Pupil responses allow communication in locked-in syndrome patients

Pupil responses allow communication in locked-in syndrome patients.

Josef Stoll et al., Current Biology, Volume 23, Issue 15, R647-R648, 5 August 2013

For patients with severe motor disabilities, a robust means of communication is a crucial factor for their well-being. We report here that pupil size measured by a bedside camera can be used to communicate with patients with locked-in syndrome. With the same protocol we demonstrate command-following for a patient in a minimally conscious state, suggesting its potential as a diagnostic tool for patients whose state of consciousness is in question. Importantly, neither training nor individual adjustment of our system’s decoding parameters were required for successful decoding of patients’ responses.

Paper full text PDF
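
The decoding idea is simple enough to sketch: effortful mental activity, such as mental arithmetic, reliably dilates the pupil, so the intended answer can be read out by comparing mean pupil size between the "yes" and "no" response windows. The window handling and numbers below are simplified assumptions, not the authors' analysis pipeline.

```python
# Sketch of the decoding idea only: compare mean pupil size during the "yes"
# window with that during the "no" window; the patient performs an effortful
# mental task only during the window matching the intended answer, which
# dilates the pupil. Window handling and statistics are simplified away here.

import numpy as np

def decode_answer(pupil_yes_window: np.ndarray,
                  pupil_no_window: np.ndarray) -> str:
    """Return the window with the larger mean pupil diameter."""
    return "yes" if pupil_yes_window.mean() > pupil_no_window.mean() else "no"

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    baseline = rng.normal(3.0, 0.05, 200)   # ~3 mm pupil at rest (assumed values)
    dilated = rng.normal(3.4, 0.05, 200)    # dilation during the effortful task
    print(decode_answer(pupil_yes_window=dilated, pupil_no_window=baseline))  # yes
```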

Image: a dilated pupil

Image credit: Flickr user Beth77

Jul 18, 2013

How to see with your ears

Via: KurzweilAI.net

A participant wearing camera glasses and listening to the soundscape (credit: Alastair Haigh/Frontiers in Psychology)

A device that trains the brain to turn sounds into images could be used as an alternative to invasive treatment for blind and partially-sighted people, researchers at the University of Bath have found.

“The vOICe” is a visual-to-auditory sensory substitution device that encodes images taken by a camera worn by the user into “soundscapes” from which experienced users can extract information about their surroundings.

It helps blind people use sounds to build an image in their minds of the things around them.
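
The general encoding principle behind devices like The vOICe can be sketched briefly: the image is scanned column by column from left to right, each row is given a tone whose pitch rises with height in the image, and pixel brightness sets that tone's loudness. The sample rate, frequency range and scan time below are assumptions, not the device's actual parameters.

```python
# Sketch of the general image-to-sound principle: scan the image column by
# column, assign each row a tone whose pitch rises with height, and let pixel
# brightness set loudness. Parameters are assumptions, not The vOICe's values.

import numpy as np

SAMPLE_RATE = 22050
SCAN_SECONDS = 1.0

def image_to_soundscape(image: np.ndarray) -> np.ndarray:
    """Convert a 2D grayscale image (rows x cols, values 0-1) to a mono waveform."""
    rows, cols = image.shape
    col_samples = int(SAMPLE_RATE * SCAN_SECONDS / cols)
    t = np.arange(col_samples) / SAMPLE_RATE
    # Higher rows (smaller index) get higher frequencies, e.g. 500-5000 Hz.
    freqs = np.linspace(5000, 500, rows)
    columns = []
    for c in range(cols):
        tones = np.sin(2 * np.pi * freqs[:, None] * t)      # one tone per row
        column_sound = (image[:, c][:, None] * tones).sum(axis=0)
        columns.append(column_sound / max(rows, 1))
    return np.concatenate(columns)

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[10, :] = 1.0                    # a bright horizontal line near the top
    wave = image_to_soundscape(img)     # -> a steady high-pitched tone, ~1 s long
    print(wave.shape)
```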

A research team, led by Dr Michael Proulx, from the University’s Department of Psychology, looked at how blindfolded sighted participants would do on an eye test using the device.

Read full story

Mar 03, 2013

Permanently implanted neuromuscular electrodes allow natural control of a robotic prosthesis

Source: Chalmers University of Technology

 
Dr Rickard Brånemark tests the functionality of the world's first muscle and nerve control...
 
A surgical team led by Dr Rickard Brånemark of Sahlgrenska University Hospital has carried out the first operation of its kind, in which neuromuscular electrodes that enable a prosthetic arm and hand to be controlled by thought have been permanently implanted into the nerves and muscles of an amputee.

“The new technology is a major breakthrough that has many advantages over current technology, which provides very limited functionality to patients with missing limbs,” Brånemark says.

Presently, robotic prostheses rely on electrodes placed over the skin to pick up the muscles' electrical activity and drive a few actions by the prosthesis. The problem with this approach is that normally only two functions are regained, out of the tens of different movements an able body is capable of. By using implanted electrodes, more signals can be retrieved, and therefore control of more movements is possible. Furthermore, it is also possible to provide the patient with natural perception, or “feeling”, through neural stimulation.
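
A hedged sketch of why more channels matter: with one or two surface channels, a simple amplitude threshold can really only drive an open/close pair, whereas a pattern classifier across several (for example, implanted) channels can separate many movement classes. The channel count, movement labels and the LDA classifier below are illustrative assumptions, not the Chalmers system.

```python
# Illustrative sketch: a pattern classifier over several EMG channels can
# separate multiple movement classes. Channel counts, labels and the classifier
# are assumptions for this example, not the actual clinical system.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
MOVEMENTS = ["hand_open", "hand_close", "wrist_rotate", "pinch_grip"]
N_CHANNELS = 8          # assumed number of implanted EMG channels

def simulated_trials(movement_idx: int, n: int = 50) -> np.ndarray:
    """Fake mean-absolute-value EMG features with a class-specific pattern."""
    pattern = np.roll(np.linspace(1.0, 0.2, N_CHANNELS), movement_idx * 2)
    return pattern + rng.normal(0, 0.1, size=(n, N_CHANNELS))

X = np.vstack([simulated_trials(i) for i in range(len(MOVEMENTS))])
y = np.repeat(np.arange(len(MOVEMENTS)), 50)

clf = LinearDiscriminantAnalysis().fit(X, y)
new_window = simulated_trials(2, n=1)                 # one incoming EMG window
print(MOVEMENTS[int(clf.predict(new_window)[0])])     # likely "wrist_rotate"
```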

“We believe that implanted electrodes, together with a long-term stable human-machine interface provided by the osseointegrated implant, is a breakthrough that will pave the way for a new era in limb replacement,” says Rickard Brånemark.

Read full story

Feb 08, 2013

The Human Brain Project awarded $1.6 billion by the EU

At the end of January, the European Commission officially announced the selection of the Human Brain Project (HBP) as one of its two FET Flagship projects. Federating more than 80 European and international research institutions, the Human Brain Project is planned to last ten years (2013-2023). The cost is estimated at 1.19 billion euros.


The project is the first attempt to “reconstruct the brain piece by piece and build a virtual brain in a supercomputer”. Led by neuroscientist Henry Markram, the project was launched in 2005 as a joint research initiative between the Brain Mind Institute at the École Polytechnique Fédérale de Lausanne (EPFL) and the information technology giant IBM.


Using the impressive processing power of IBM’s Blue Gene/L supercomputer, the project reached its first milestone in December 2006, simulating a rat cortical column. As of July 2012, Henry Markram’s team had achieved the simulation of mesocircuits containing approximately 1 million neurons and 1 billion synapses (comparable with the number of nerve cells present in a honey bee brain). The next step, planned for 2014, will be the modelling of a cellular rat brain, with 100 mesocircuits totalling a hundred million cells. Finally, the team plans to simulate a full human brain (86 billion neurons) by the year 2023.

Watch the video overview of the Human Brain Project

Nov 11, 2012

Congenitally blind learn to see and read with soundscapes

Via KurzweilAI

Congenitally blind people have learned to “see” and describe objects, and even identify letters and words, by using a visual-to-auditory sensory-substitution algorithm and sensory substitution devices (SSDs), scientists at Hebrew University and in France have found.

SSDs are non-invasive sensory aids that provide visual information to the blind via their existing senses. For example, using a visual-to-auditory SSD in a clinical or everyday setting, users wear a miniature camera connected to a small computer (or smart phone) and stereo headphones.

The images are converted into “soundscapes,” using an algorithm, allowing the user to listen to and then interpret the visual information coming from the camera. The blind participants using this device reach a level of visual acuity technically surpassing the criterion of the World Health Organization (WHO) for blindness.

Read the full story
