
Dec 08, 2013

Real-Time fMRI Pattern Decoding and Neurofeedback Using FRIEND: An FSL-Integrated BCI Toolbox


PLoS One. 2013;8(12):e81658

Authors: Sato JR, Basilio R, Paiva FF, Garrido GJ, Bramati IE, Bado P, Tovar-Moll F, Zahn R, Moll J

Abstract. The demonstration that humans can learn to modulate their own brain activity based on feedback of neurophysiological signals opened up exciting opportunities for fundamental and applied neuroscience. Although EEG-based neurofeedback has been long employed both in experimental and clinical investigation, functional MRI (fMRI)-based neurofeedback emerged as a promising method, given its superior spatial resolution and ability to gauge deep cortical and subcortical brain regions. In combination with improved computational approaches, such as pattern recognition analysis (e.g., Support Vector Machines, SVM), fMRI neurofeedback and brain decoding represent key innovations in the field of neuromodulation and functional plasticity. Expansion in this field and its applications critically depend on the existence of freely available, integrated and user-friendly tools for the neuroimaging research community. Here, we introduce FRIEND, a graphic-oriented user-friendly interface package for fMRI neurofeedback and real-time multivoxel pattern decoding. The package integrates routines for image preprocessing in real-time, ROI-based feedback (single-ROI BOLD level and functional connectivity) and brain decoding-based feedback using SVM. FRIEND delivers an intuitive graphic interface with flexible processing pipelines involving optimized procedures embedding widely validated packages, such as FSL and libSVM. In addition, a user-defined visual neurofeedback module allows users to easily design and run fMRI neurofeedback experiments using ROI-based or multivariate classification approaches. FRIEND is open-source and free for non-commercial use. Processing tutorials and extensive documentation are available.
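The decoding step the abstract describes (SVM classification of preprocessed volumes) can be illustrated with a minimal sketch. This is not FRIEND's actual code: it assumes volumes have already been motion-corrected and masked into flat arrays, and it uses scikit-learn's SVC in place of libSVM, which scikit-learn wraps.

```python
# Minimal sketch of SVM-based fMRI pattern decoding, in the spirit of
# FRIEND's libSVM step. Not FRIEND's code: volumes are assumed to be
# already preprocessed (motion-corrected, masked) NumPy arrays.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler

def train_decoder(volumes, labels):
    """volumes: (n_trials, n_voxels) array; labels: condition per trial."""
    scaler = StandardScaler().fit(volumes)
    clf = SVC(kernel="linear").fit(scaler.transform(volumes), labels)
    return scaler, clf

def decode_volume(scaler, clf, volume):
    """Classify a single new volume (n_voxels,) as it arrives in real time."""
    return clf.predict(scaler.transform(volume.reshape(1, -1)))[0]

# Toy usage: 40 training volumes over 500 voxels, two conditions.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))
y = np.repeat(["rest", "task"], 20)
scaler, clf = train_decoder(X, y)
print(decode_volume(scaler, clf, rng.normal(size=500)))
```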

Dec 02, 2013

A genetically engineered weight-loss implant

Via KurzweilAI

ETH-Zurich biotechnologists have constructed an implantable genetic regulatory circuit that monitors blood-fat levels. In response to excessive levels, it produces a messenger substance that signals satiety (fullness) to the body. Tests on obese mice revealed that this helps them lose weight.


Genetically modified cells implanted in the body monitor the blood-fat level. If it is too high, they produce a satiety hormone. The animal stops eating and loses weight. (Credit: Martin Fussenegger / ETH Zurich / Jackson Lab)

 

Nov 16, 2013

NeuroPace Gets FDA Pre-Market Approval for RNS Stimulator

Via Medgadget

 

 

NeuroPace has received FDA pre-market approval for the NeuroPace RNS System, used to treat medically refractory partial epilepsy. The battery-powered device is implanted in the cranium and monitors electrical activity in the brain. If abnormal activity is detected, electrical impulses are sent via leads to the seizure focus in the brain, helping to prevent the onset of a seizure. The RNS System also comes with a programmer that lets physicians non-invasively set the detection and stimulation parameters of the implanted device, view the patient's electrocorticogram (ECoG) in real time, and upload previously recorded ECoGs stored on the RNS implant.

Results from clinical studies show significant benefits for patients, with a 37.9% reduction in seizure frequency for subjects with active implants. Follow-up with patients two years post-implant showed that over half experienced a reduction in seizures of 50% or more.
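The detect-and-stimulate cycle described above can be made concrete with a minimal sketch. The line-length detector, window size and threshold below are illustrative stand-ins, not NeuroPace's proprietary detection algorithms:

```python
# Illustrative closed-loop detect-and-stimulate cycle, loosely modelled
# on responsive neurostimulation. The detector and all parameters here
# are hypothetical, not NeuroPace's algorithm.
import numpy as np

def line_length(window):
    """Sum of absolute sample-to-sample differences: a simple, widely
    used feature for flagging high-amplitude, fast ECoG activity."""
    return np.sum(np.abs(np.diff(window)))

def closed_loop(ecog, fs=250, win_s=1.0, threshold=50.0):
    """Yield (time_s, stimulate?) decisions over successive windows."""
    n = int(fs * win_s)
    for start in range(0, len(ecog) - n, n):
        yield start / fs, line_length(ecog[start:start + n]) > threshold

# Toy signal: quiet background with a burst of fast activity at 10 s.
fs = 250
t = np.arange(0, 20, 1 / fs)
sig = 0.1 * np.random.default_rng(1).normal(size=t.size)
sig[10 * fs:12 * fs] += np.sin(2 * np.pi * 20 * t[10 * fs:12 * fs])
for when, stim in closed_loop(sig, fs):
    if stim:
        print(f"stimulation triggered at {when:.1f} s")
```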

Oct 31, 2013

Brain Decoding

Via IEET

Neuroscientists are starting to decipher what a person is seeing, remembering and even dreaming just by looking at their brain activity. They call it brain decoding.  

In this Nature Video, we see three different uses of brain decoding, including a virtual reality experiment that could use brain activity to figure out whether someone has been to the scene of a crime.

Smart glasses that help the blind see

Via New Scientist

They look like snazzy sunglasses, but these computerised specs don't block the sun – they make the world a brighter place for people with partial vision.

These specs do more than bring blurry things into focus. This prototype pair of smart glasses translates visual information into images that blind people can see.

Many people who are registered as blind can perceive some light and motion. The glasses, developed by Stephen Hicks of the University of Oxford, are an attempt to make that residual vision as useful as possible.

They use two cameras, or a camera and an infrared projector, that can detect the distance to nearby objects. They also have a gyroscope, a compass and GPS to help orient the wearer.

The collected information can be translated into a variety of images on the transparent OLED displays, depending on what is most useful to the person sporting the shades. For example, objects can be made clearer against the background, or the distance to obstacles can be indicated by the varying brightness of an image.
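The "closer = brighter" rendering mentioned above amounts to a simple mapping from a depth map to pixel intensity. A minimal sketch, assuming a 0.5-4 m working range (the prototype's actual range is not stated in the article):

```python
# Minimal sketch of the "closer = brighter" rendering described in the
# article. The 0.5-4 m working range is an assumption for the example.
import numpy as np

def depth_to_brightness(depth_m, near=0.5, far=4.0):
    """Map a depth map (metres) to 0-255 intensity: near objects are
    rendered bright, far or out-of-range objects dark."""
    d = np.clip(depth_m, near, far)
    return ((far - d) / (far - near) * 255).astype(np.uint8)

# Toy depth map: an "obstacle" at 1 m against a 3.5 m background.
depth = np.full((4, 6), 3.5)
depth[1:3, 2:4] = 1.0
print(depth_to_brightness(depth))
```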

Hicks has won the Royal Society's Brian Mercer Award for Innovation for his work on the smart glasses. He plans to use the £50,000 prize money to add object and text recognition to the glasses' abilities.

 

Sep 09, 2013

Effortless awareness: using real time neurofeedback to investigate correlates of posterior cingulate cortex activity in meditators' self-report


Front Hum Neurosci. 2013;7:440

Authors: Garrison KA, Santoyo JF, Davis JH, Thornhill TA, Kerr CE, Brewer JA

Neurophenomenological studies seek to utilize first-person self-report to elucidate cognitive processes related to physiological data. Grounded theory offers an approach to the qualitative analysis of self-report, whereby theoretical constructs are derived from empirical data. Here we used grounded theory methodology (GTM) to assess how the first-person experience of meditation relates to neural activity in a core region of the default mode network-the posterior cingulate cortex (PCC). We analyzed first-person data consisting of meditators' accounts of their subjective experience during runs of a real time fMRI neurofeedback study of meditation, and third-person data consisting of corresponding feedback graphs of PCC activity during the same runs. We found that for meditators, the subjective experiences of "undistracted awareness" such as "concentration" and "observing sensory experience," and "effortless doing" such as "observing sensory experience," "not efforting," and "contentment," correspond with PCC deactivation. Further, the subjective experiences of "distracted awareness" such as "distraction" and "interpreting," and "controlling" such as "efforting" and "discontentment," correspond with PCC activation. Moreover, we derived several novel hypotheses about how specific qualities of cognitive processes during meditation relate to PCC activity, such as the difference between meditation and "trying to meditate." These findings offer novel insights into the relationship between meditation and mind wandering or self-related thinking and neural activity in the default mode network, driven by first-person reports.

Aug 07, 2013

Pupil responses allow communication in locked-in syndrome patients


Josef Stoll et al., Current Biology, Volume 23, Issue 15, R647-R648, 5 August 2013

For patients with severe motor disabilities, a robust means of communication is a crucial factor for their well-being. We report here that pupil size measured by a bedside camera can be used to communicate with patients with locked-in syndrome. With the same protocol we demonstrate command-following for a patient in a minimally conscious state, suggesting its potential as a diagnostic tool for patients whose state of consciousness is in question. Importantly, neither training nor individual adjustment of our system’s decoding parameters was required for successful decoding of patients’ responses.
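In the full paper, patients answer yes/no questions by performing mental arithmetic, which dilates the pupil, during the time window matching their intended answer; decoding then reduces to comparing pupil size across the two windows. A minimal sketch of that idea, with invented sampling rate and window timings:

```python
# Minimal sketch of the paper's decoding idea: mental arithmetic dilates
# the pupil, so performing it during the "yes" or "no" response window
# marks the intended answer. Sampling rate and timings are invented.
import numpy as np

def decode_answer(pupil, fs, yes_window, no_window):
    """Compare mean pupil diameter in the two response windows.
    pupil: 1-D diameter trace; windows: (start_s, end_s) tuples."""
    def mean_in(w):
        return pupil[int(w[0] * fs):int(w[1] * fs)].mean()
    return "yes" if mean_in(yes_window) > mean_in(no_window) else "no"

# Toy trace at 60 Hz: dilation during the 5-10 s ("yes") window.
fs = 60
pupil = np.full(20 * fs, 3.0)           # baseline diameter, mm
pupil[5 * fs:10 * fs] += 0.4            # effort-related dilation
print(decode_answer(pupil, fs, (5, 10), (12, 17)))  # -> "yes"
```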

Paper full text PDF


Image credit: Flickr user Beth77

Jul 18, 2013

How to see with your ears

Via: KurzweilAI.net

A participant wearing camera glasses and listening to the soundscape (credit: Alastair Haigh/Frontiers in Psychology)

A device that trains the brain to turn sounds into images could be used as an alternative to invasive treatment for blind and partially-sighted people, researchers at the University of Bath have found.

“The vOICe” is a visual-to-auditory sensory substitution device that encodes images taken by a camera worn by the user into “soundscapes” from which experienced users can extract information about their surroundings.

It helps blind people use sounds to build an image in their minds of the things around them.
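Although the article does not spell out the encoding, the vOICe's published mapping scans the image left to right, with vertical position mapped to pitch (top = high) and pixel brightness to loudness. A rough sketch of that mapping, with assumed frequency range and scan time:

```python
# Sketch of a vOICe-style image-to-soundscape mapping: the image is
# scanned left to right, vertical position sets pitch (top = high) and
# brightness sets loudness. Frequencies and scan time are assumptions.
import numpy as np

def image_to_soundscape(img, fs=22050, scan_s=1.0,
                        f_lo=500.0, f_hi=5000.0):
    """img: 2-D array in [0, 1], row 0 = top. Returns mono audio."""
    rows, cols = img.shape
    col_len = int(fs * scan_s / cols)
    t = np.arange(col_len) / fs
    freqs = np.geomspace(f_hi, f_lo, rows)   # top row = highest pitch
    out = []
    for c in range(cols):
        # Each column becomes a chord: one sinusoid per bright pixel.
        column = sum(img[r, c] * np.sin(2 * np.pi * freqs[r] * t)
                     for r in range(rows))
        out.append(column / rows)
    return np.concatenate(out)

# Toy image: a bright diagonal -> a pitch sweep rising over the scan.
img = np.eye(16)[::-1]
audio = image_to_soundscape(img)
print(audio.shape)  # samples for a one-second soundscape
```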

A research team, led by Dr Michael Proulx, from the University’s Department of Psychology, looked at how blindfolded sighted participants would do on an eye test using the device.

Read full story

Mar 03, 2013

Permanently implanted neuromuscular electrodes allow natural control of a robotic prosthesis

Source: Chalmers University of Technology

 
Dr Rickard Brånemark tests the functionality of the world's first muscle and nerve control...
 
For the first time, a surgical team led by Dr Rickard Brånemark of Sahlgrenska University Hospital has carried out an operation in which neuromuscular electrodes enabling a prosthetic arm and hand to be controlled by thought were permanently implanted into the nerves and muscles of an amputee.

“The new technology is a major breakthrough that has many advantages over current technology, which provides very limited functionality to patients with missing limbs,” Brånemark says.

Presently, robotic prostheses rely on electrodes placed over the skin to pick up the muscles' electrical activity, which can drive only a few prosthetic actions. The problem with this approach is that normally only two functions are regained, out of the tens of different movements an able-bodied person is capable of. By using implanted electrodes, more signals can be retrieved, and therefore control of more movements is possible. Furthermore, it is also possible to provide the patient with natural perception, or “feeling”, through neural stimulation.

“We believe that implanted electrodes, together with a long-term stable human-machine interface provided by the osseointegrated implant, is a breakthrough that will pave the way for a new era in limb replacement,” says Rickard Brånemark.

Read full story

Feb 08, 2013

The Human Brain Project awarded $1.6 billion by the EU

At the end of January, the European Commission officially announced the selection of the Human Brain Project (HBP) as one of its two FET Flagship projects. Federating more than 80 European and international research institutions, the Human Brain Project is planned to last ten years (2013-2023), at an estimated cost of 1.19 billion euros.


The project is the first attempt to “reconstruct the brain piece by piece and build a virtual brain in a supercomputer”. Led by neuroscientist Henry Markram, the project was launched in 2005 as a joint research initiative between the Brain Mind Institute at the École Polytechnique Fédérale de Lausanne (EPFL) and the information technology giant IBM.


Using the impressive processing power of IBM’s Blue Gene/L supercomputer, the project reached its first milestone in December 2006, simulating a rat cortical column. As of July 2012, Henry Markram’s team had achieved the simulation of mesocircuits containing approximately 1 million neurons and 1 billion synapses (comparable with the number of nerve cells present in a honeybee brain). The next step, planned for 2014, is the modelling of a cellular rat brain, with 100 mesocircuits totalling a hundred million cells. Finally, the team plans to simulate a full human brain (86 billion neurons) by the year 2023.

Watch the video overview of the Human Brain Project

Nov 11, 2012

Congenitally blind learn to see and read with soundscapes

Via KurzweilAI

Congenitally blind people have learned to “see” and describe objects, and even identify letters and words, by using a visual-to-auditory sensory-substitution algorithm and sensory substitution devices (SSDs), scientists at the Hebrew University and in France have found.

SSDs are non-invasive sensory aids that provide visual information to the blind via their existing senses. For example, using a visual-to-auditory SSD in a clinical or everyday setting, users wear a miniature camera connected to a small computer (or smart phone) and stereo headphones.

The images are converted into “soundscapes,” using an algorithm, allowing the user to listen to and then interpret the visual information coming from the camera. The blind participants using this device reach a level of visual acuity technically surpassing the criterion of the World Health Organization (WHO) for blindness.

Read the full story

Aug 04, 2012

The Virtual Brain

The Virtual Brain project promises "to deliver the first open simulation of the human brain based on individual large-scale connectivity", by "employing novel concepts from neuroscience, effectively reducing the complexity of the brain simulation while still keeping it sufficiently realistic".

The Virtual Brain team includes well-recognized neuroscientists from all over the world. In the video below, Dr. Randy McIntosh explains what the project is about.

The first teaser release of The Virtual Brain software suite is available for download for Windows, Mac and Linux: http://thevirtualbrain.org/

Jul 05, 2012

A Real-Time fMRI-Based Spelling Device Immediately Enabling Robust Motor-Independent Communication


Curr Biol. 2012 Jun 27

Authors: Sorger B, Reithler J, Dahmen B, Goebel R

Human communication entirely depends on the functional integrity of the neuromuscular system. This is devastatingly illustrated in clinical conditions such as the so-called locked-in syndrome (LIS) [1], in which severely motor-disabled patients become incapable of communicating naturally, while being fully conscious and awake. For the last 20 years, research on motor-independent communication has focused on developing brain-computer interfaces (BCIs) implementing neuroelectric signals for communication (e.g., [2-7]), and BCIs based on electroencephalography (EEG) have already been applied successfully to concerned patients [8-11]. However, not all patients achieve proficiency in EEG-based BCI control [12]. Thus, more recently, hemodynamic brain signals have also been explored for BCI purposes [13-16]. Here, we introduce the first spelling device based on fMRI. By exploiting spatiotemporal characteristics of hemodynamic responses, evoked by performing differently timed mental imagery tasks, our novel letter encoding technique allows translating any freely chosen answer (letter-by-letter) into reliable and differentiable single-trial fMRI signals. Most importantly, automated letter decoding in real time enables back-and-forth communication within a single scanning session. Because the suggested spelling device requires only little effort and pretraining, it is immediately operational and possesses high potential for clinical applications, both in terms of diagnostics and establishing short-term communication with nonresponsive and severely motor-impaired patients.
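The letter-encoding idea can be made concrete with a toy decoder: if each letter corresponds to a distinct (onset, duration) combination for a mental task, decoding a trial means finding which combination best explains the measured time course. The letter grid and timings below are invented, and the sketch ignores the hemodynamic delay the real method exploits:

```python
# Illustrative decoder in the spirit of the paper: letters are encoded
# by *when* and *how long* a mental task is performed, so decoding means
# finding which (onset, duration) boxcar best matches the single-trial
# BOLD time course. Grid values are invented; the hemodynamic response
# function is omitted for simplicity.
import numpy as np

def boxcar(onset_s, dur_s, trial_s=50, tr=2.0):
    n = int(trial_s / tr)
    box = np.zeros(n)
    box[int(onset_s / tr):int((onset_s + dur_s) / tr)] = 1.0
    return box

def decode_letter(bold, grid, tr=2.0):
    """grid: {letter: (onset_s, dur_s)}. Pick the letter whose boxcar
    correlates best with the measured single-trial time course."""
    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]
    return max(grid, key=lambda L: corr(bold, boxcar(*grid[L], tr=tr)))

grid = {"A": (0, 10), "B": (10, 10), "C": (20, 10)}  # hypothetical codes
bold = boxcar(10, 10) + 0.2 * np.random.default_rng(2).normal(size=25)
print(decode_letter(bold, grid))  # -> "B"
```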

Mar 11, 2012

Augmenting cognition: old concept, new tools

The increasing miniaturization and computing power of information technology devices allow new ways of interaction between human brains and computers, progressively blurring the boundaries between man and machine. An example is provided by brain-computer interface systems, which allow users to control the behavior of a computer or of an external device, such as a robotic arm, with their brain activity (in this latter case, we speak of “neuroprosthetics”).

 

The idea of using information technologies to augment cognition, however, is not new, dating back to the 1950s and 1960s. One of the first to write about this concept was the British psychiatrist William Ross Ashby.

In his Introduction to Cybernetics (1956), he described intelligence as the “power of appropriate selection,” which could be amplified by means of technologies in the same way that physical power is amplified. A second major conceptual contribution towards the development of cognitive augmentation was provided a few years later by computer scientist and Internet pioneer Joseph Licklider, in a paper entitled Man-Computer Symbiosis (1960).

In this article, Licklider envisioned the development of computer technologies that would enable users “to think in interaction with a computer in the same way that you think with a colleague whose competence supplements your own.” According to his vision, the rise of computer networks would make it possible to connect millions of human minds within a “'thinking center' that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval.” This view represents a departure from the prevailing Artificial Intelligence approach of that time: instead of creating an artificial brain, Licklider focused on the possibility of developing new forms of interaction between humans and information technologies, with the aim of extending human intelligence.

A similar view was proposed in the same years by another computer visionary, Douglas Engelbart, in his famous 1962 report entitled Augmenting Human Intellect: A Conceptual Framework.

In this report, Engelbart defines the goal of intelligence augmentation as “increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble (…) We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human ‘feel for a situation’ usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.”

These “electronic aids” nowadays include all kinds of hardware and software computing devices used, for example, to store information in external memories, to process complex data, to perform routine tasks and to support decision making. However, today the concept of cognitive augmentation is not limited to the amplification of human intellectual abilities through external hardware. As recently noted by Nick Bostrom and Anders Sandberg (Sci Eng Ethics 15:311–341, 2009), “What is new is the growing interest in creating intimate links between the external systems and the human user through better interaction. The software becomes less an external tool and more of a mediating ‘exoself’. This can be achieved through mediation, embedding the human within an augmenting ‘shell’ such as wearable computers (…) or virtual reality, or through smart environments in which objects are given extended capabilities” (p. 320).

At the forefront of this trend is neurotechnology, an emerging research and development field comprising technologies specifically designed to improve brain function. Examples of neurotechnologies include brain training games such as BrainAge and programs like Fast ForWord, but also neurodevices used to monitor or regulate brain activity, such as deep brain stimulators (DBS), and smart prosthetics for the replacement of impaired sensory systems (e.g., cochlear or retinal implants).

Clearly, the vision of neurotechnology is not free of issues. The more powerful and sophisticated these technologies become, the more attention should be devoted to understanding the socio-economic, legal and ethical implications of their applications in various fields, from medicine to neuromarketing.


 

Jan 27, 2012

A combined robotic and cognitive training for locomotor rehabilitation

A combined robotic and cognitive training for locomotor rehabilitation: evidences of cerebral functional reorganization in two chronic traumatic brain injured patients.

Front Hum Neurosci. 2011;5:146

Authors: Sacco K, Cauda F, D'Agata F, Duca S, Zettin M, Virgilio R, Nascimbeni A, Belforte G, Eula G, Gastaldi L, Appendino S, Geminiani G

Abstract. It has been demonstrated that automated locomotor training can improve walking capabilities in spinal cord-injured subjects but its effectiveness on brain damaged patients has not been well established. A possible explanation of the discordant results on the efficacy of robotic training in patients with cerebral lesions could be that these patients, besides stimulation of physiological motor patterns through passive leg movements, also need to train the cognitive aspects of motor control. Indeed, another way to stimulate cerebral motor areas in paretic patients is to use the cognitive function of motor imagery. A promising possibility is thus to combine sensorimotor training with the use of motor imagery. The aim of this paper is to assess changes in brain activations after a combined sensorimotor and cognitive training for gait rehabilitation. The protocol consisted of the integrated use of a robotic gait orthosis prototype with locomotor imagery tasks. Assessment was conducted on two patients with chronic traumatic brain injury and major gait impairments, using functional magnetic resonance imaging. Physiatric functional scales were used to assess clinical outcomes. Results showed greater activation post-training in the sensorimotor and supplementary motor cortices, as well as enhanced functional connectivity within the motor network. Improvements in balance and, to a lesser extent, in gait outcomes were also found.

Jun 05, 2011

Human Computer Confluence

Human Computer Confluence (HC-CO) is an ambitious initiative recently launched by the European Commission under the Future and Emerging Technologies (FET) program, which fosters projects that investigate and demonstrate new possibilities “emerging at the confluence between the human and technological realms” (source: HC-CO website, EU Commission).

Such projects will examine new modalities for individual and group perception, actions and experience in augmented, virtual spaces. In particular, such virtual spaces would span the virtual reality continuum, also extending to purely synthetic but believable representation of massive, complex and dynamic data. HC-CO also fosters inter-disciplinary research (such as Presence, neuroscience, psychophysics, prosthetics, machine learning, computer science and engineering) towards delivering unified experiences and inventing radically new forms of perception/action.

HC-CO brings together ideas stemming from two series of Presence projects (the complete list is available here) with a vision of new forms of interaction and of new types of information spaces to interact with. It will develop the science and technologies necessary to ensure effective, even transparent, bidirectional communication between humans and computers, which will in turn deliver a huge set of applications: from today's Presence concepts, to new senses and new perceptive capabilities for dealing with more abstract information spaces, to the social impact of such communication-enabling technologies. Inevitably, these technologies question the notion of the interface between the human and the technological realm and thus, in a fundamental way, also put into question the nature of both.

The long-term implications can be profound and need to be considered from an ethical/societal point of view. HC-CO is, however, not a programme on human augmentation. It does not aim to create a super-human. The idea of confluence is to study what can be done by bringing new types of technologically enabled interaction modalities in between the human and a range of virtual (not necessarily naturalistic) realms. Its ambition is to bring our best understanding from human sciences into future and emerging technologies for a new and purposeful human computer symbiosis.

HC-CO is conceptually broken down into the following themes:

  • HC-CO Data. On-line perception and interaction with massive volumes of data: new methods to stimulate and use human sensory perception and cognition to interpret massive volumes of data in real time to enable assimilation, understanding and interaction with informational spaces. Research should find new ways to exploit human factors (sensory, perceptual and cognitive aspects), including the selection of the most effective sensory modalities, for data exploration. Although not explicitly mentioned, non-sensorial pathways, i.e., direct brain-to-computer and computer-to-brain communication, could be explored.
  • HC-CO Transit. Unified experience, emerging from the unnoticeable transition from physical to augmented or virtual reality: new methods and concepts towards unobtrusive mixed or virtual reality environment (multi-modal displays, tracking systems, virtual representations...), and scenarios to support entirely unobtrusive interaction. Unobtrusiveness also applies to virtual representations, their dynamics, and the feedback received. Here the challenge is both technological and scientific, spanning human cognition, human machine interaction and machine intelligence disciplines.
  • HC-CO Sense. New forms of perception and action: invent and demonstrate new forms of interaction with the real world, virtual models or abstract information by provoking a mapping from an artificial medium to appropriate sensory modalities or brain regions. This research should reinforce data perception and unified experience by augmenting the human interaction capabilities and awareness in virtual spaces.

In sum, HC-CO is an emerging R&D field that holds the potential to revolutionize the way we interact with computers. Standing at the crossroads between cognitive science, computer science and artificial intelligence, HC-CO can provide the cyberpsychology and cybertherapy community with fresh concepts and interesting new tools to apply in both research and clinical domains.

More to explore:

  • HC-CO initiative: The official EU website of the HC-CO initiative, which describes the broad objectives of this emerging research field.
  • HC2 Project: The horizontal character of HC-CO makes it a fascinating and fertile interdisciplinary field, but it can also compromise its growth, with researchers scattered across disciplines and groups worldwide. For this reason, a coordination activity has been created to promote connections across disciplines, identity building and integration, while defining future research, education and policy directions at the regional, national, European and international level. This project is HC2, a three-year Coordination Action funded by the FP7 FET Proactive scheme. The consortium will draw on a wide network of researchers and stakeholders to achieve four key objectives: a) to stimulate, structure and support the research community, promoting identity building; b) to consolidate research agendas with special attention to the interdisciplinary aspects of HC-CO; c) to enhance the public understanding of HC-CO and foster early contact of researchers with high-tech SMEs and other industry players; and d) to establish guidelines for the definition of new educational curricula to prepare the next generation of HC-CO researchers.
  • CEED Project: Funded by the HC-CO initiative, the Collective Experience of Empathic Data Systems (CEEDs) project aims to develop “novel, integrated technologies to support human experience, analysis and understanding of very large datasets”. CEEDS will develop innovative tools to exploit theories showing that discovery is the identification of patterns in complex data sets by the implicit information processing capabilities of the human brain. Implicit human responses will be identified by the CEEDs system’s analysis of its sensing systems, tuned to users’ bio-signals and non-verbal behaviours. By associating these implicit responses with different features of massive datasets, the CEEDs system will guide users’ discovery of patterns and meaning within the datasets.
  • VERE Project: VERE - Virtual Embodiment and Robotic Re-Embodiment – is another large project funded by the HC-CO initiative, which aims at “dissolving the boundary between the human body and surrogate representations in immersive virtual reality and physical reality”. Dissolving the boundary means that people have the illusion that their surrogate representation is their own body, and act and have thoughts that correspond to this. The work in VERE may be thought of as applied presence research and applied cognitive neuroscience.

May 21, 2011

Brain-controlled bionic hand for ‘elective amputation’ patient

Source: BBC News — May 18, 2011

An Austrian man has voluntarily had his hand amputated so he can be fitted with a bionic hand, which will be controlled by nerve signals in his own arm. The bionic hands, manufactured by the German prosthetics company Otto Bock, can pinch and grasp in response to signals from the brain. The wrist of the prosthesis can be rotated manually using the patient’s other functioning hand.

The patient will control the hand using the same brain signals that previously powered similar movements in the real hand and that will now be picked up by two sensors placed over the skin above nerves in the forearm.

Pioneering epidural treatment helps paraplegic man stand

A team of scientists at the University of Louisville, UCLA and the California Institute of Technology has developed a new treatment involving continual direct electrical stimulation of the spinal cord. The treatment was successfully tested on a 25-year-old paraplegic man, Rob Summers, who was left completely paralysed below the chest by a car accident. The stimulation enabled him to achieve full weight-bearing standing, with assistance provided only for balance, for 4.25 minutes. These breakthrough findings were reported May 20 in The Lancet (early online publication).

 

Oct 17, 2010

Growing neurons on silicon chips

Via Robots.net

Researchers at the University of Calgary have developed neurochips capable of interfacing with biological neurons and sensing their activity at very high resolution. The new chips are automated, so it is now easy to connect to multiple brain cells, eliminating the years of training once required. While the researchers say this technology could be used for new diagnostic methods and treatments for a variety of neurodegenerative diseases, the advancement could ultimately lead to the use of biological neurons in the central or sub-processing units of computers and automated machinery.

 

 

Jul 30, 2010

Sniff-activated sensor may return active lifestyles to paralyzed and disabled

Disabled persons, quadriplegics and others suffering from paralysis may be able to regain movement with a sniff-activated sensor, according to a study by Israeli researchers.

The technology works by translating changes in nasal air pressure into electrical signals that are passed to a computer. Patients can sniff in certain patterns to select letters or numbers and compose text, or to control the mouse cursor on a computer. For getting around, sniffing controls the direction of the wheelchair, Bloomberg reports.

Quadriplegic patients were able to use the device to navigate wheelchairs as well as healthy participants could. Two participants who were completely paralyzed but had intact mental function used the technology to communicate by choosing letters on a computer screen. The study appears in the Proceedings of the National Academy of Sciences.
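A toy version of the control scheme: classify excursions of the nasal-pressure trace above or below baseline as discrete sniff events, then map short event sequences to commands. The thresholds and the command table below are invented for the example, not the study's actual mapping:

```python
# Illustrative sniff-to-command mapping in the spirit of the study:
# nasal-pressure deviations above/below baseline become discrete
# "in"/"out" sniff events that drive a wheelchair or cursor.
# Thresholds and the command table are hypothetical.
import numpy as np

def detect_sniffs(pressure, threshold=0.5):
    """Turn each run of samples beyond +/- threshold into one event."""
    events, prev = [], 0
    for p in pressure:
        state = 1 if p > threshold else -1 if p < -threshold else 0
        if state != 0 and state != prev:
            events.append("in" if state > 0 else "out")
        prev = state
    return events

COMMANDS = {("in", "in"): "forward", ("out", "out"): "reverse",
            ("in", "out"): "left", ("out", "in"): "right"}

# Toy trace: a sniff in followed by a sniff out -> "left".
trace = np.concatenate([np.zeros(10), np.full(5, 1.0), np.zeros(10),
                        np.full(5, -1.0), np.zeros(10)])
events = detect_sniffs(trace)
print(events, "->", COMMANDS.get(tuple(events[:2])))
```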

Full Story