Aug 31, 2014
A new Northwestern Medicine study reports that stimulating a particular brain region with Transcranial Magnetic Stimulation, a non-invasive technique that delivers electrical current to the brain using magnetic pulses, improves memory.
Jul 29, 2014
Real-time functional MRI neurofeedback: a tool for psychiatry.
Curr Opin Psychiatry. 2014 Jul 14;
Authors: Kim S, Birbaumer N
Abstract. PURPOSE OF REVIEW: The aim of this review is to provide a critical overview of recent research in the field of neuroscientific and clinical application of real-time functional MRI neurofeedback (rtfMRI-nf).
RECENT FINDINGS: RtfMRI-nf allows self-regulation of activity in circumscribed brain areas and brain systems. Furthermore, the learned regulation of brain activity influences specific behaviors organized by the regulated brain regions. Patients with mental disorders show abnormal activity in certain regions, and control of these regions using rtfMRI-nf may affect the symptoms of related behavioral disorders. SUMMARY: The promising results in clinical application indicate that rtfMRI-nf and other metabolic neurofeedback methods, such as near-infrared spectroscopy, might become therapeutic tools. Further research is still required to examine whether rtfMRI-nf is a useful tool for psychiatry, because there is still a lack of knowledge about the neural function of certain brain systems and about neuronal markers for specific mental illnesses.
MindRDR connects Google Glass with a device to monitor brain activity, allowing users to take pictures and socialise them on Twitter or Facebook.
Once a user has decided to share an image, we analyse their brain data and provide an evaluation of their ability to control the interface with their mind. This information is attached to every shared image.
The current version of MindRDR uses the commercially available Neurosky MindWave Mobile brain monitor to extract core metrics from the mind.
Apr 15, 2014
A Hybrid Brain Computer Interface System Based on the Neurophysiological Protocol and Brain-actuated Switch for Wheelchair Control
J Neurosci Methods. 2014 Apr 5;
Authors: Cao L, Li J, Ji H, Jiang C
BACKGROUND: Brain Computer Interfaces (BCIs) are developed to translate brain waves into machine instructions for controlling external devices. Recently, hybrid BCI systems have been proposed for multi-degree control of a real wheelchair, to improve the overall efficiency of traditional BCIs. However, it is difficult for existing hybrid BCIs to implement multi-dimensional control in one command cycle.
NEW METHOD: This paper proposes a novel hybrid BCI system that combines motor imagery (MI)-based bio-signals and steady-state visual evoked potentials (SSVEPs) to control the speed and direction of a real wheelchair synchronously. Furthermore, a hybrid-modality, brain-actuated switch is designed, for the first time, to turn the wheelchair's control system on and off.
RESULTS: Two experiments were performed to assess the proposed BCI system. One was used for training; the other was a wheelchair-control task in a real environment. All subjects completed these tasks successfully, and no collisions occurred in the real wheelchair control experiment.
COMPARISON WITH EXISTING METHOD(S): The protocol of our BCI provided many more control commands than those of previous MI- and SSVEP-based BCIs. Compared with other BCI wheelchair systems, the superiority reflected by the path-length optimality ratio validated the high efficiency of our control strategy.
CONCLUSIONS: The results validated the efficiency of our hybrid BCI system in controlling the direction and speed of a real wheelchair, as well as the reliability of the hybrid-signal-based switch control.
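The one-cycle hybrid scheme can be sketched in a few lines. The abstract does not say which modality maps to which degree of freedom, so the assignment below (SSVEP flicker frequency to direction, MI class to speed), the frequency table, and all function names are illustrative assumptions, not the authors' protocol:

```python
def fuse_commands(mi_label, ssvep_freq, switch_on):
    """One hypothetical command cycle of a hybrid MI + SSVEP BCI.

    mi_label: 'left' / 'right' / 'rest' from a motor-imagery classifier
    ssvep_freq: detected flicker frequency in Hz, or None if absent
    switch_on: state of the brain-actuated on/off switch
    Returns a (direction, speed) command for the wheelchair.
    """
    if not switch_on:
        # The hybrid switch gates the whole control system.
        return ("idle", 0.0)
    # Hypothetical mapping from stimulus frequency to direction.
    directions = {10.0: "forward", 12.0: "backward"}
    direction = directions.get(ssvep_freq, "hold")
    # Motor imagery modulates speed in the same cycle (illustrative values).
    speeds = {"left": 0.3, "right": 0.6, "rest": 0.0}
    return (direction, speeds.get(mi_label, 0.0))
```

The point of the sketch is that both decoders contribute to a single command per cycle, rather than alternating between modalities.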
Mar 02, 2014
In this demo video, artist Alex McLeod shows an environment he designed for Interaxon to use at CES in 2011 interaxon.ca/CES#.
The glasses display the scene in 3D, and attached sensors read users' brain states, which control elements of the scene.
Feb 11, 2014
Keio University scientists have developed a “neurocam” — a wearable camera system that detects emotions, based on an analysis of the user’s brainwaves.
The hardware is a combination of Neurosky’s Mind Wave Mobile and a customized brainwave sensor.
The user's interest is quantified on a scale of 0 to 100. When the interest value exceeds 60, the camera automatically records a five-second clip of the scene, with timestamp and location; clips can be replayed later and shared socially on Facebook.
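The trigger logic described is simple enough to sketch directly; the function names are hypothetical, but the threshold (60) and clip length (five seconds) come from the article:

```python
from datetime import datetime, timedelta

def should_record(interest, threshold=60):
    """Trigger a clip when the quantified interest value (0-100)
    exceeds the threshold."""
    return interest > threshold

def clip_window(trigger_time, duration_s=5):
    """Return the (start, end) timestamps of the recorded clip."""
    return (trigger_time, trigger_time + timedelta(seconds=duration_s))
```

In a real device the interest value would be re-estimated continuously from the brainwave sensor, so some debouncing would be needed to avoid recording overlapping clips.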
The researchers plan to make the device smaller, more comfortable, and fashionable to wear.
Feb 02, 2014
An improved assistive technology system for the blind that uses sonification (visualization using sounds) has been developed by Universidad Carlos III de Madrid (UC3M) researchers, with the goal of replacing costly, bulky current systems.
Called Assistive Technology for Autonomous Displacement (ATAD), the system includes a stereo vision processor that measures the disparity between images captured by two slightly separated cameras (for image depth data) and calculates the distance to each point in the scene.
Then it transmits the information to the user by means of a sound code that gives information regarding the position and distance to the different obstacles, using a small audio stereo amplifier and bone-conduction headphones.
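One plausible shape for such a sound code is to make nearer obstacles louder and to pan the cue toward the obstacle's direction. The mapping below is only an illustration of the idea, not UC3M's actual encoding:

```python
def sonify(distance_m, azimuth_deg, max_range_m=5.0):
    """Map one obstacle to a stereo sound cue (illustrative encoding).

    Nearer obstacles produce louder cues; azimuth pans the cue between
    the left and right channels. Returns (left_gain, right_gain) in [0, 1].
    """
    # Loudness falls off linearly with distance and is zero beyond range.
    loudness = max(0.0, 1.0 - distance_m / max_range_m)
    # Map azimuth from [-90, +90] degrees to a pan position in [0, 1].
    pan = min(1.0, max(0.0, (azimuth_deg + 90.0) / 180.0))
    return (loudness * (1.0 - pan), loudness * pan)
```

Gains like these would then drive the stereo amplifier and bone-conduction headphones the article mentions.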
Jan 25, 2014
Via Futuristic news
He’s the creator of “Spaun” the world’s largest brain simulation. Can he really make headway into mimicking the human brain?
Chris Eliasmith has cognitive flexibility on the brain. How do people manage to walk, chew gum and listen to music all at the same time? What is our brain doing as it switches between these tasks, and how do we use the same components in our heads to do all those different things? These are questions that Chris and his team's Semantic Pointer Architecture Unified Network (Spaun) are determined to answer. Spaun is currently the world's largest functional brain simulation, and is unique because it's the first model that can actually emulate behaviours while also modeling the physiology that underlies them.
This groundbreaking work was published in Science, and has been featured by CNN, BBC, Der Spiegel, Popular Science, The Economist and CBC. He is co-author of Neural Engineering, which describes a framework for building biologically realistic neural models, and his new book, How to Build a Brain, applies those methods to large-scale cognitive brain models.
Chris holds a Canada Research Chair in Theoretical Neuroscience at the University of Waterloo. He is also Director of Waterloo's Centre for Theoretical Neuroscience, is jointly appointed in the Philosophy and Systems Design Engineering departments, and is cross-appointed to Computer Science.
For more on Chris, visit http://arts.uwaterloo.ca/~celiasmi/
Dec 24, 2013
Speaking and cognitive distractions during EEG-based brain control of a virtual neuroprosthesis-arm.
J Neuroeng Rehabil. 2013 Dec 21;10(1):116
Authors: Foldes ST, Taylor DM
BACKGROUND: Brain-computer interface (BCI) systems have been developed to provide paralyzed individuals the ability to command the movements of an assistive device using only their brain activity. BCI systems are typically tested in a controlled laboratory environment where the user is focused solely on the brain-control task. However, for practical use in everyday life, people must be able to use their brain-controlled device while mentally engaged with the cognitive responsibilities of daily activities and while compensating for any inherent dynamics of the device itself. BCIs that use electroencephalography (EEG) for movement control are often assumed to require significant mental effort, thus preventing users from thinking about anything else while using their BCI. This study tested the impact of cognitive load, as well as speaking, on the ability to use an EEG-based BCI. FINDINGS: Six participants controlled the two-dimensional (2D) movements of a simulated neuroprosthesis-arm under three different levels of cognitive distraction. The two higher cognitive load conditions also required speaking simultaneously during BCI use. On average, movement performance declined during higher levels of cognitive distraction, but only by a limited amount. Movement completion time increased by 7.2%, the percentage of targets successfully acquired declined by 11%, and path efficiency declined by 8.6%. Only the declines in percentage of targets acquired and path efficiency were statistically significant (p < 0.05). CONCLUSION: People who have relatively good movement control of an EEG-based BCI may be able to speak and perform other cognitively engaging activities with only a minor drop in BCI-control performance.
There have been a few attempts at simulating a sense of touch in prosthetic hands, but a recently released video from Case Western Reserve University demonstrates newly developed haptic technology that looks convincingly impressive. The video shows an amputee, blindfolded and wearing headphones that block all hearing, using a prosthetic hand with a sensor on the forefinger to pull stems off cherries. The first part of the video shows him doing it with the sensor turned off, and then with it activated.
For a picture of the electrode technology, please visit: http://www.flickr.com/photos/tylerlab/10075384624/
A group of Polish engineers is working on a smart sleeping mask that they hope will allow people to get more out of their resting time, as well as allow for unusual sleeping schedules that would particularly benefit those who are often on call. The NeuroOn mask will have an embedded EEG for brain-wave monitoring, EMG for detecting muscle motion on the face, and sensors that can track whether your eyes are moving and whether you are in REM sleep. The team is currently raising money on Kickstarter, where you can pre-order your own NeuroOn once it's developed into a final product.
Dec 21, 2013
Re-blogged from New Scientist
WITH a click of a mouse, I set a path through the mountains for drone #4. It's one of five fliers under my control, all now heading to different destinations. Routes set, their automation takes over and my mind eases, bringing a moment of calm. But the machine watching my brain notices the lull, decides I can handle more, and drops a new drone in the south-east corner of the map.
The software is keeping my brain in a state of full focus known as flow, or being "in the zone". Too little work, and the program notices my attention start to flag and gives me more drones to handle. If I start to become a frazzled air traffic controller, the computer takes one of the drones off my plate, usually without me even noticing.
The system monitors the workload by pulsing light into my prefrontal cortex 12 times a second. The amount of light that oxygenated and deoxygenated haemoglobin in the blood there absorbs and reflects gives an indication of how mentally engaged I am. Harder brain work calls for more oxygenated blood, and changes how the light is absorbed. Software interprets the signal from this functional near infrared spectroscopy (fNIRS) and uses it to assign me the right level of work.
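The closed loop described here, adding a drone when measured workload is low and shedding one when it is high, amounts to a hysteresis rule. The sketch below is a hypothetical caricature of that logic; the thresholds and function names are not from the Tufts system:

```python
def adjust_tasks(workload, n_tasks, low=0.3, high=0.7, max_tasks=8):
    """Adaptive task allocation driven by an fNIRS-derived workload
    estimate in [0, 1] (thresholds are illustrative assumptions).

    Low workload: add a drone to keep the operator in flow.
    High workload: remove a drone before the operator gets frazzled.
    Otherwise: leave the task count alone.
    """
    if workload < low and n_tasks < max_tasks:
        return n_tasks + 1
    if workload > high and n_tasks > 1:
        return n_tasks - 1
    return n_tasks
```

The dead band between the two thresholds is what keeps the system from oscillating, i.e. from adding and removing drones on every noisy workload estimate.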
Dan Afergan, who is running the study at Tufts University in Medford, Massachusetts, points to an on-screen readout as I play. "It's predicting high workload with very high certainty, and, yup, number three just dropped off," he says over my shoulder. Sure enough, I'm now controlling just five drones again.
To achieve this mind-monitoring, I'm hooked up to a bulky rig of fibre-optic cables and have an array of LEDs stuck to my forehead. The cables stream off my head into a box that converts light signals to electrical ones. These fNIRS systems don't have to be this big, though. A team led by Sophie Piper at Charité University of Medicine in Berlin, Germany, tested a portable device on cyclists in Berlin earlier this year – the first time fNIRS has been done during an outdoor activity.
Afergan doesn't plan to be confined to the lab for long either. He's studying ways to integrate brain-activity measuring into the Google Glass wearable computer. A lab down the hall already has a prototype fNIRS system on a chip that could, with a few improvements, be built into a Glass headset. "Glass is already on your forehead. It's really not much of a stretch to imagine building fNIRS into the headband," he says.
Afergan is working on a Glass navigation system for use in cars that responds to a driver's level of focus. When they are concentrating hard, Glass will show only basic instructions, or perhaps just give audio directions. When the driver is focusing less, on a straight stretch of road perhaps, Glass will provide more details of the route. The team also plans to adapt Google Now – the company's digital assistant software – for Glass so that it only gives you notifications when your mind has room for them.
Peering into drivers' minds will become increasingly important, says Erin Solovey, a computer scientist at Drexel University in Philadelphia, Pennsylvania. Many cars have automatic systems for adaptive cruise control, keeping in the right lane and parking. These can help, but they also bring the risk that drivers may not stay focused on the task at hand, because they are relying on the automation.
Systems using fNIRS could monitor a driver's focus and adjust the level of automation to keep drivers safely engaged with what the car is doing, she says.
This article appeared in print under the headline "Stay in the zone"
Dec 08, 2013
Real-Time fMRI Pattern Decoding and Neurofeedback Using FRIEND: An FSL-Integrated BCI Toolbox.
PLoS One. 2013;8(12):e81658
Authors: Sato JR, Basilio R, Paiva FF, Garrido GJ, Bramati IE, Bado P, Tovar-Moll F, Zahn R, Moll J
Abstract. The demonstration that humans can learn to modulate their own brain activity based on feedback of neurophysiological signals opened up exciting opportunities for fundamental and applied neuroscience. Although EEG-based neurofeedback has been long employed both in experimental and clinical investigation, functional MRI (fMRI)-based neurofeedback emerged as a promising method, given its superior spatial resolution and ability to gauge deep cortical and subcortical brain regions. In combination with improved computational approaches, such as pattern recognition analysis (e.g., Support Vector Machines, SVM), fMRI neurofeedback and brain decoding represent key innovations in the field of neuromodulation and functional plasticity. Expansion in this field and its applications critically depend on the existence of freely available, integrated and user-friendly tools for the neuroimaging research community. Here, we introduce FRIEND, a graphic-oriented user-friendly interface package for fMRI neurofeedback and real-time multivoxel pattern decoding. The package integrates routines for image preprocessing in real-time, ROI-based feedback (single-ROI BOLD level and functional connectivity) and brain decoding-based feedback using SVM. FRIEND delivers an intuitive graphic interface with flexible processing pipelines involving optimized procedures embedding widely validated packages, such as FSL and libSVM. In addition, a user-defined visual neurofeedback module allows users to easily design and run fMRI neurofeedback experiments using ROI-based or multivariate classification approaches. FRIEND is open-source and free for non-commercial use. Processing tutorials and extensive documentation are available.
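The decoding-based feedback loop the abstract describes can be caricatured in a few lines: a trained linear classifier (such as the SVMs FRIEND wraps via libSVM) scores each preprocessed volume, and the decision value is mapped to a bounded feedback signal. The names, weights, and scaling below are illustrative assumptions, not FRIEND's API:

```python
def decode_volume(voxels, weights, bias):
    """One real-time decoding step: evaluate a linear decision function
    (as a trained linear SVM would) on a preprocessed voxel pattern.
    Returns the decoded state label and the raw decision value."""
    score = sum(v * w for v, w in zip(voxels, weights)) + bias
    return ("A" if score > 0 else "B"), score

def feedback_value(score, scale=1.0):
    """Clamp the decision value into [-1, 1] for visual feedback,
    e.g. to drive a thermometer-style display."""
    return max(-1.0, min(1.0, score / scale))
```

In an actual rtfMRI pipeline the voxel pattern would first pass through motion correction and detrending, and the weights would come from a classifier trained on earlier localizer runs.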
Dec 02, 2013
ETH-Zurich biotechnologists have constructed an implantable genetic regulatory circuit that monitors blood-fat levels. In response to excessive levels, it produces a messenger substance that signals satiety (fullness) to the body. Tests on obese mice revealed that this helps them lose weight.
Genetically modified cells implanted in the body monitor the blood-fat level. If it is too high, they produce a satiety hormone. The animal stops eating and loses weight. (Credit: Martin Fussenegger / ETH Zurich / Jackson Lab)
Nov 16, 2013
NeuroPace has received FDA pre-market approval for the NeuroPace RNS System, used to treat medically refractory partial epilepsy. The battery-powered device is implanted in the cranium and monitors electrical activity in the brain. If abnormal activity is detected, electrical impulses are sent via leads to the seizure focus in the brain, helping to prevent the onset of a seizure. The RNS System also comes with a programmer that lets physicians non-invasively set the detection and stimulation parameters of the implanted device, view the patient's electrocorticogram (ECoG) in real time, and upload previously recorded ECoGs stored on the RNS implant.
Results from clinical studies show significant benefits for patients, with a 37.9% reduction in seizure frequency for subjects with active implants. Follow-up with patients two years post-implant showed that over half experienced a reduction in seizures of 50% or more.
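To illustrate the detect-and-stimulate idea schematically: one simple measure of ECoG activity is the line-length feature, the summed absolute sample-to-sample change over a window. The sketch below is only an illustration of responsive detection in general; the RNS System's actual detectors and thresholds are configured per patient and are not reproduced here:

```python
def line_length(samples):
    """Line-length feature: sum of absolute differences between
    consecutive samples, a crude proxy for signal activity."""
    return sum(abs(b - a) for a, b in zip(samples, samples[1:]))

def should_stimulate(window, threshold):
    """Responsive-stimulation sketch: trigger stimulation when the
    activity in the current ECoG window exceeds the threshold."""
    return line_length(window) > threshold
```

In a real device this check would run continuously on short sliding windows of the implanted leads' signals.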
Oct 31, 2013
Neuroscientists are starting to decipher what a person is seeing, remembering and even dreaming just by looking at their brain activity. They call it brain decoding.
In this Nature Video, we see three different uses of brain decoding, including a virtual reality experiment that could use brain activity to figure out whether someone has been to the scene of a crime.
Via New Scientist
They look like snazzy sunglasses, but these computerised specs don't block the sun – they make the world a brighter place for people with partial vision.
These specs do more than bring blurry things into focus. This prototype pair of smart glasses translates visual information into images that blind people can see.
Many people who are registered as blind can perceive some light and motion. The glasses, developed by Stephen Hicks of the University of Oxford, are an attempt to make that residual vision as useful as possible.
They use two cameras, or a camera and an infrared projector, that can detect the distance to nearby objects. They also have a gyroscope, a compass and GPS to help orient the wearer.
The collected information can be translated into a variety of images on the transparent OLED displays, depending on what is most useful to the person sporting the shades. For example, objects can be made clearer against the background, or the distance to obstacles can be indicated by the varying brightness of an image.
Hicks has won the Royal Society's Brian Mercer Award for Innovation for his work on the smart glasses. He plans to use the £50,000 prize money to add object and text recognition to the glasses' abilities.
Sep 09, 2013
Effortless awareness: using real time neurofeedback to investigate correlates of posterior cingulate cortex activity in meditators' self-report
Front Hum Neurosci. 2013;7:440
Authors: Garrison KA, Santoyo JF, Davis JH, Thornhill TA, Kerr CE, Brewer JA
Neurophenomenological studies seek to utilize first-person self-report to elucidate cognitive processes related to physiological data. Grounded theory offers an approach to the qualitative analysis of self-report, whereby theoretical constructs are derived from empirical data. Here we used grounded theory methodology (GTM) to assess how the first-person experience of meditation relates to neural activity in a core region of the default mode network-the posterior cingulate cortex (PCC). We analyzed first-person data consisting of meditators' accounts of their subjective experience during runs of a real time fMRI neurofeedback study of meditation, and third-person data consisting of corresponding feedback graphs of PCC activity during the same runs. We found that for meditators, the subjective experiences of "undistracted awareness" such as "concentration" and "observing sensory experience," and "effortless doing" such as "observing sensory experience," "not efforting," and "contentment," correspond with PCC deactivation. Further, the subjective experiences of "distracted awareness" such as "distraction" and "interpreting," and "controlling" such as "efforting" and "discontentment," correspond with PCC activation. Moreover, we derived several novel hypotheses about how specific qualities of cognitive processes during meditation relate to PCC activity, such as the difference between meditation and "trying to meditate." These findings offer novel insights into the relationship between meditation and mind wandering or self-related thinking and neural activity in the default mode network, driven by first-person reports.
Aug 07, 2013
Pupil responses allow communication in locked-in syndrome patients.
Josef Stoll et al., Current Biology, Volume 23, Issue 15, R647-R648, 5 August 2013
For patients with severe motor disabilities, a robust means of communication is a crucial factor for their well-being. We report here that pupil size measured by a bedside camera can be used to communicate with patients with locked-in syndrome. With the same protocol we demonstrate command-following for a patient in a minimally conscious state, suggesting its potential as a diagnostic tool for patients whose state of consciousness is in question. Importantly, neither training nor individual adjustment of our system’s decoding parameters were required for successful decoding of patients’ responses.
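Abstractly, the protocol reduces to a comparison: if performing mental arithmetic during the interval matching one's intended answer dilates the pupil, then the interval with the larger mean pupil diameter is the answer. The function below is a hypothetical illustration of that comparison, not the authors' decoder:

```python
def decode_answer(yes_window, no_window):
    """Decode a yes/no response from pupil-diameter samples recorded
    while 'yes' and then 'no' were presented: effortful mental
    arithmetic during the chosen interval dilates the pupil."""
    mean = lambda xs: sum(xs) / len(xs)
    return "yes" if mean(yes_window) > mean(no_window) else "no"
```

A working system would also need a significance check so that ambiguous trials are flagged rather than force-decoded, which matters for its proposed diagnostic use.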
Paper full text PDF
Image credit: Flickr user Beth77
Jul 18, 2013
A participant wearing camera glasses and listening to the soundscape (credit: Alastair Haigh/Frontiers in Psychology)
A device that trains the brain to turn sounds into images could be used as an alternative to invasive treatment for blind and partially-sighted people, researchers at the University of Bath have found.
“The vOICe” is a visual-to-auditory sensory substitution device that encodes images taken by a camera worn by the user into “soundscapes” from which experienced users can extract information about their surroundings.
It helps blind people use sounds to build an image in their minds of the things around them.
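The vOICe's basic encoding scans the image left to right, mapping vertical position to pitch and brightness to loudness. The sketch below captures that structure; the frequency range and data layout are illustrative assumptions, not the device's exact parameters:

```python
def image_to_soundscape(image, f_low=500.0, f_high=5000.0):
    """Encode a grayscale image (list of rows, pixel values in [0, 1])
    as a left-to-right sequence of tone lists: one list per column,
    higher rows mapped to higher pitches, brighter pixels to louder
    tones. Returns a list of columns of (frequency_hz, amplitude)."""
    n_rows = len(image)
    n_cols = len(image[0])
    columns = []
    for c in range(n_cols):
        tones = []
        for r in range(n_rows):
            # Row 0 is the top of the image -> highest frequency.
            frac = 1.0 - r / max(1, n_rows - 1)
            freq = f_low + frac * (f_high - f_low)
            tones.append((freq, image[r][c]))
        columns.append(tones)
    return columns
```

Played back as a sweep, each column becomes a brief chord, and with practice users learn to hear shapes and positions in these sweeps.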
A research team, led by Dr Michael Proulx, from the University’s Department of Psychology, looked at how blindfolded sighted participants would do on an eye test using the device.
Read full story