Oct 18, 2014

New Technique Helps Diagnose Consciousness in Locked-in Patients

Via Medgadget


Brain networks in two behaviourally similar vegetative patients (left and middle panels), one of whom (middle panel) was able to imagine playing tennis, alongside a healthy adult (right panel). Credit: Srivas Chennu

People locked into a vegetative state by disease or injury are a major mystery for medical science. Some may be fully unconscious, while others remain aware of what’s going on around them but cannot speak or move to show it. Now scientists at Cambridge, reporting in the journal PLOS Computational Biology, have described a new technique that can help identify locked-in people who can still hear and remain conscious.

Some details from the study abstract:

We devised a novel topographical metric, termed modular span, which showed that the alpha network modules in patients were also spatially circumscribed, lacking the structured long-distance interactions commonly observed in the healthy controls. Importantly however, these differences between graph-theoretic metrics were partially reversed in delta and theta band networks, which were also significantly more similar to each other in patients than controls. Going further, we found that metrics of alpha network efficiency also correlated with the degree of behavioural awareness. Intriguingly, some patients in behaviourally unresponsive vegetative states who demonstrated evidence of covert awareness with functional neuroimaging stood out from this trend: they had alpha networks that were remarkably well preserved and similar to those observed in the controls. Taken together, our findings inform current understanding of disorders of consciousness by highlighting the distinctive brain networks that characterise them. In the significant minority of vegetative patients who follow commands in neuroimaging tests, they point to putative network mechanisms that could support cognitive function and consciousness despite profound behavioural impairment.
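The “network efficiency” the abstract refers to is a standard graph-theoretic quantity: roughly, how easily activity can propagate between any two electrodes in the network. As a rough illustration of the idea, and not the authors’ actual pipeline, here is a minimal Python sketch that thresholds a stand-in EEG connectivity matrix into a graph and computes its global efficiency with networkx; the channel count, connectivity values, and threshold are all invented:

import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

n_channels = 32                       # e.g., a 32-electrode EEG montage
conn = rng.random((n_channels, n_channels))
conn = (conn + conn.T) / 2            # connectivity matrices are symmetric
np.fill_diagonal(conn, 0)

threshold = 0.8                       # keep only the strongest couplings
adj = (conn > threshold).astype(int)  # binarise into an adjacency matrix

G = nx.from_numpy_array(adj)
print(f"global efficiency: {nx.global_efficiency(G):.3f}")

In the study, lower alpha-band network efficiency went along with lower behavioural awareness, which is why a single scalar of this kind can be diagnostically informative.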

Study in PLOS Computational Biology: Spectral Signatures of Reorganised Brain Networks in Disorders of Consciousness


Oct 17, 2014

Leia Display System - promo video HD short version

www.leiadisplay.com
www.facebook.com/LeiaDisplaySystem

Oct 06, 2014

Is the metaverse still alive?

Over the last decade, online virtual worlds such as Second Life have become enormously popular. Since their appearance on the technology landscape, many analysts have regarded shared 3D virtual spaces as a disruptive innovation, one that would render the Web itself obsolete.

These high expectations attracted significant investments from large corporations such as IBM, which started building their own virtual spaces and offices in the metaverse. Then, when it became clear that these promises would not be kept, disillusionment set in and virtual worlds started losing their edge. This is not a new phenomenon in high tech; it happens over and over again.

The US consulting company Gartner has developed a very popular model to describe this effect, called the “Hype Cycle”. The Hype Cycle provides a graphic representation of the maturity and adoption of technologies and applications.

It consists of five phases, which show how emerging technologies will evolve.

In the first, “technology trigger” phase, a new technology is launched and attracts the interest of the media. This is followed by the “peak of inflated expectations”, characterized by a proliferation of positive articles and comments that generate unrealistic expectations among users and stakeholders.

In the next, “trough of disillusionment” phase, these exaggerated expectations are not fulfilled, resulting in a growing number of negative comments generally followed by progressive indifference.

In the “slope of enlightenment” phase, the technology’s potential for further applications becomes more broadly understood, and an increasing number of companies start using it.

In the final, “plateau of productivity” stage, the emerging technology establishes itself as an effective tool and mainstream adoption takes off.

So what stage in the hype cycle are virtual worlds now?

After the 2006-2007 peak, metaverses entered the downward phase of the hype cycle, progressively losing media interest, investments, and users. Many high-tech analysts still consider this decline an irreversible process.

However, the negative outlook that drove shared virtual worlds into the trough of disillusionment may soon be reversed, thanks to the new interest in virtual reality raised by the Oculus Rift (recently acquired by Facebook for $2 billion), Sony’s Project Morpheus, and similar immersive displays, which are still at the take-off stage of the hype cycle.

Oculus Rift's chief scientist Michael Abrash makes no secret of the fact that his main ambition has always been to build a metaverse like the one described in Neal Stephenson's 1992 cyberpunk novel Snow Crash. As he writes on the Oculus blog:

"Sometime in 1993 or 1994, I read Snow Crash and for the first time thought something like the Metaverse might be possible in my lifetime."

Furthermore, despite the negative comments and disappointed expectations, the metaverse keeps attracting new users: for its 10th anniversary on June 23rd, 2013, an infographic reported that Second Life had over 1 million monthly visitors worldwide, more than 400,000 new accounts per month, and 36 million registered users.

So will Michael Abrash’s metaverse dream come true? Even if one looks into the crystal ball of the hype cycle, the answer is not easily found.

First direct brain-to-brain communication between human subjects

Via KurzweilAI.net

An international team of neuroscientists and robotics engineers has demonstrated the first direct remote brain-to-brain communication between two humans located 5,000 miles apart and communicating via the Internet, as reported in a paper recently published in PLOS ONE (open access).

Emitter and receiver subjects with non-invasive devices supporting, respectively, a brain-computer interface (BCI), based on EEG changes, driven by motor imagery (left) and a computer-brain interface (CBI) based on the reception of phosphenes elicited by neuro-navigated TMS (right) (credit: Carles Grau et al./PLoS ONE)

In India, researchers encoded two words (“hola” and “ciao”) as binary strings and presented them as a series of cues on a computer monitor. They recorded the subject’s EEG signals as the subject was instructed to think about moving his feet (binary 0) or hands (binary 1). They then sent the recorded series of binary values in an email message to researchers in France, 5,000 miles away.

There, the binary strings were converted into a series of transcranial magnetic stimulation (TMS) pulses applied to a hotspot location in the right visual occipital cortex that either produced a phosphene (perceived flash of light) or not.
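Stripped to its logic, the setup is a one-bit channel used repeatedly. The sketch below is a schematic reconstruction rather than the paper's actual encoder (whose exact cipher may differ); it assumes a simple 5-bit alphabetic code purely to make the feet/hands and phosphene/no-phosphene mapping concrete:

# Hypothetical 5-bit alphabetic code, for illustration only.
def encode(word: str) -> list[int]:
    """Turn a word into a flat list of bits, 5 bits per letter."""
    bits = []
    for ch in word.lower():
        code = ord(ch) - ord('a')                  # 'a' -> 0 ... 'z' -> 25
        bits += [(code >> i) & 1 for i in range(4, -1, -1)]
    return bits

def decode(bits: list[int]) -> str:
    """Inverse of encode: 5-bit groups back to letters."""
    letters = []
    for i in range(0, len(bits), 5):
        code = sum(b << (4 - j) for j, b in enumerate(bits[i:i + 5]))
        letters.append(chr(code + ord('a')))
    return ''.join(letters)

for word in ("hola", "ciao"):
    bits = encode(word)
    # Emitter: bit 1 -> imagine moving hands, bit 0 -> imagine moving feet.
    cues = ['hands' if b else 'feet' for b in bits]
    # Receiver: bit 1 -> TMS pulse elicits a phosphene, bit 0 -> no phosphene.
    assert decode(bits) == word
    print(word, '->', ''.join(map(str, bits)))

At the rates the authors report, even a short word takes on the order of minutes to transmit, which is why they themselves call the achieved bit rates modest.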

“We wanted to find out if one could communicate directly between two people by reading out the brain activity from one person and injecting brain activity into the second person, and do so across great physical distances by leveraging existing communication pathways,” explains coauthor Alvaro Pascual-Leone, MD, PhD, Director of the Berenson-Allen Center for Noninvasive Brain Stimulation at Beth Israel Deaconess Medical Center (BIDMC) and Professor of Neurology at Harvard Medical School.

A team of researchers from Starlab Barcelona (Spain) and Axilum Robotics (Strasbourg, France) conducted the experiment. A second, similar experiment was conducted between individuals in Spain and France.

“We believe these experiments represent an important first step in exploring the feasibility of complementing or bypassing traditional language-based or other motor/PNS mediated means in interpersonal communication,” the researchers say in the paper.

“Although certainly limited in nature (e.g., the bit rates achieved in our experiments were modest even by current BCI (brain-computer interface) standards, mostly due to the dynamics of the precise CBI (computer-brain interface) implementation), these initial results suggest new research directions, including the non-invasive direct transmission of emotions and feelings or the possibility of sense synthesis in humans — that is, the direct interface of arbitrary sensors with the human brain using brain stimulation, as previously demonstrated in animals with invasive methods.

Brain-to-brain (B2B) communication system overview. On the left, the BCI subsystem is shown schematically, including electrodes over the motor cortex and the EEG amplifier/transmitter wireless box in the cap. Motor imagery of the feet codes the bit value 0, of the hands codes bit value 1. On the right, the CBI system is illustrated, highlighting the role of coil orientation for encoding the two bit values. Communication between the BCI and CBI components is mediated by the Internet. (Credit: Carles Grau et al./PLoS ONE)

“The proposed technology could be extended to support a bi-directional dialogue between two or more mind/brains (namely, by the integration of EEG and TMS systems in each subject). In addition, we speculate that future research could explore the use of closed mind-loops in which information associated to voluntary activity from a brain area or network is captured and, after adequate external processing, used to control other brain elements in the same subject. This approach could lead to conscious synthetically mediated modulation of phenomena best detected subjectively by the subject, including emotions, pain and psychotic, depressive or obsessive-compulsive thoughts.

“Finally, we anticipate that computers in the not-so-distant future will interact directly with the human brain in a fluent manner, supporting both computer- and brain-to-brain communication routinely. The widespread use of human brain-to-brain technologically mediated communication will create novel possibilities for human interrelation with broad social implications that will require new ethical and legislative responses.”

This work was partly supported by the EU FP7 FET Open HIVE project, the Starlab Kolmogorov project, and the Neurology Department of the Hospital de Bellvitge.


Sep 25, 2014

With Cyberith's Virtualizer, you can run around wearing an Oculus Rift

Sep 21, 2014

First hands-on: Crescent Bay demo

I just tested the Oculus Crescent Bay prototype at the Oculus Connect event in LA.

I still can't close my mouth.

The demo lasted about 10 minutes, during which several scenes were presented. The resolution and framerate are astounding, and you can turn completely around. This is the first time in my life I can really say I was there.

I believe this is really the beginning of a new era for VR, and I am sure I won't sleep tonight thinking about the infinite possibilities and applications of this technology. And I don't think I am exaggerating - if anything, I am underestimating.


Aug 03, 2014

Fly like a Birdly

Birdly is a full-body, fully immersive virtual reality flight simulator developed at the Zurich University of the Arts (ZHdK). With Birdly you embody an avian creature, the red kite, soaring over a 3D virtual San Francisco; the experience is visualized through an Oculus Rift and heightened by sonic, olfactory, and wind feedback.

Jul 09, 2014

Experiential Virtual Scenarios With Real-Time Monitoring (Interreality) for the Management of Psychological Stress: A Block Randomized Controlled Trial

Gaggioli, A., Pallavicini, F., Morganti, L. et al. (2014) Journal of Medical Internet Research. 16(7):e167. DOI: 10.2196/jmir.3235

The recent convergence between technology and medicine is offering innovative methods and tools for behavioral health care. Among these, an emerging approach is the use of virtual reality (VR) within exposure-based protocols for anxiety disorders, and in particular posttraumatic stress disorder. However, no systematically tested VR protocols are available for the management of psychological stress. Objective: Our goal was to evaluate the efficacy of a new technological paradigm, Interreality, for the management and prevention of psychological stress. The main feature of Interreality is a twofold link between the virtual and the real world achieved through experiential virtual scenarios (fully controlled by the therapist, used to learn coping skills and improve self-efficacy) with real-time monitoring and support (identifying critical situations and assessing clinical change) using advanced technologies (virtual worlds, wearable biosensors, and smartphones).

Full text paper available at: http://www.jmir.org/2014/7/e167/

Jun 30, 2014

Never do a Tango with an Eskimo

Apr 15, 2014

Avegant - Glyph Kickstarter - Wearable Retinal Display

Via Mashable

Move over Google Glass and Oculus Rift, there's a new kid on the block: Glyph, a mobile, personal theater.

Glyph looks like a normal headset and operates like one, too. That is, until you move the headband down over your eyes and it becomes a fully functional visual visor that displays movies, television shows, video games, or any other media connected via the attached HDMI cable.

Using Virtual Retinal Display (VRD), a technology that mimics the way we see light, the Glyph projects images directly onto your retina using one million micromirrors in each eyepiece. These micromirrors reflect the images onto the retina, producing reportedly crisp and vivid image quality.

Apr 06, 2014

Measuring the effects through time of the influence of visuomotor and visuotactile synchronous stimulation on a virtual body ownership illusion

Measuring the effects through time of the influence of visuomotor and visuotactile synchronous stimulation on a virtual body ownership illusion.

Perception. 2014;43(1):43-58

Authors: Kokkinara E, Slater M

Abstract. Previous studies have examined the experience of owning a virtual surrogate body or body part through specific combinations of cross-modal multisensory stimulation. Both visuomotor (VM) and visuotactile (VT) synchronous stimulation have been shown to be important for inducing a body ownership illusion, each tested separately or both in combination. In this study we compared the relative importance of these two cross-modal correlations, when both are provided in the same immersive virtual reality setup and the same experiment. We systematically manipulated VT and VM contingencies in order to assess their relative role and mutual interaction. Moreover, we present a new method for measuring the induced body ownership illusion through time, by recording reports of breaks in the illusion of ownership ('breaks') throughout the experimental phase. The balance of the evidence, from both questionnaires and analysis of the breaks, suggests that while VM synchronous stimulation contributes the greatest to the attainment of the illusion, a disruption of either (through asynchronous stimulation) contributes equally to the probability of a break in the illusion.
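The 'breaks' measure lends itself to a simple survival-style summary: record each participant's first reported break and count how many remain in the illusion as the trial unfolds. A minimal sketch of that bookkeeping, with invented numbers rather than the study's data:

import numpy as np

trial_length_s = 300.0
# Hypothetical first-break times (s) for participants who reported one...
first_break_s = np.array([42.0, 95.0, 110.0, 180.0, 240.0])
n_participants = 12        # ...out of this many in the condition

for t in np.linspace(0.0, trial_length_s, 7):
    broken = int((first_break_s <= t).sum())
    print(f"t = {t:5.0f} s   still in illusion: {n_participants - broken}/{n_participants}")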

Mar 02, 2014

Voluntary Out-of-Body Experience: An fMRI Study

Voluntary Out-of-Body Experience: An fMRI Study.

Front Hum Neurosci. 2014;8:70

Authors: Smith AM, Messier C

Abstract
The present single-case study examined functional brain imaging patterns in a participant that reported being able, at will, to produce somatosensory sensations that are experienced as her body moving outside the boundaries of her physical body all the while remaining aware of her unmoving physical body. We found that the brain functional changes associated with the reported extra-corporeal experience (ECE) were different than those observed in motor imagery. Activations were mainly left-sided and involved the left supplementary motor area and supramarginal and posterior superior temporal gyri, the last two overlapping with the temporal parietal junction that has been associated with out-of-body experiences. The cerebellum also showed activation that is consistent with the participant's report of the impression of movement during the ECE. There was also left middle and superior orbital frontal gyri activity, regions often associated with action monitoring. The results suggest that the ECE reported here represents an unusual type of kinesthetic imagery.

Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators.

Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators.

Sci Rep. 2013;3:2396

Authors: Alimardani M, Nishio S, Ishiguro H

Abstract
Operators of a pair of robotic hands report ownership for those hands when they hold image of a grasp motion and watch the robot perform it. We present a novel body ownership illusion that is induced by merely watching and controlling robot's motions through a brain machine interface. In past studies, body ownership illusions were induced by correlation of such sensory inputs as vision, touch and proprioception. However, in the presented illusion none of the mentioned sensations are integrated except vision. Our results show that during BMI-operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is adequate to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe that our findings will contribute to improvement of tele-presence systems in which operators incorporate BMI-operated robots into their body representations.

Feb 09, 2014

The importance of synchrony and temporal order of visual and tactile input for illusory limb ownership experiences - an FMRI study applying virtual reality

The importance of synchrony and temporal order of visual and tactile input for illusory limb ownership experiences - an FMRI study applying virtual reality.

PLoS One. 2014;9(1):e87013

Authors: Bekrater-Bodmann R, Foell J, Diers M, Kamping S, Rance M, Kirsch P, Trojan J, Fuchs X, Bach F, Cakmak HK, Maaß H, Flor H

Abstract. In the so-called rubber hand illusion, synchronous visuotactile stimulation of a visible rubber hand together with one's own hidden hand elicits ownership experiences for the artificial limb. Recently, advanced virtual reality setups were developed to induce a virtual hand illusion (VHI). Here, we present functional imaging data from a sample of 25 healthy participants using a new device to induce the VHI in the environment of a magnetic resonance imaging (MRI) system. In order to evaluate the neuronal robustness of the illusion, we varied the degree of synchrony between visual and tactile events in five steps: in two conditions, the tactile stimulation was applied prior to visual stimulation (asynchrony of -300 ms or -600 ms), whereas in another two conditions, the tactile stimulation was applied after visual stimulation (asynchrony of +300 ms or +600 ms). In the fifth condition, tactile and visual stimulation was applied synchronously. On a subjective level, the VHI was successfully induced by synchronous visuotactile stimulation. Asynchronies between visual and tactile input of ±300 ms did not significantly diminish the vividness of illusion, whereas asynchronies of ±600 ms did. The temporal order of visual and tactile stimulation had no effect on VHI vividness. Conjunction analyses of functional MRI data across all conditions revealed significant activation in bilateral ventral premotor cortex (PMv). Further characteristic activation patterns included bilateral activity in the motion-sensitive medial superior temporal area as well as in the bilateral Rolandic operculum, suggesting their involvement in the processing of bodily awareness through the integration of visual and tactile events. A comparison of the VHI-inducing conditions with asynchronous control conditions of ±600 ms yielded significant PMv activity only contralateral to the stimulation site. These results underline the temporal limits of the induction of limb ownership related to multisensory body-related input.
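The five timing conditions are easy to make concrete. In the sketch below, only the offsets (-600, -300, 0, +300, +600 ms, with negative meaning tactile-before-visual) come from the abstract; the stimulus train itself is invented:

# Tactile onset = visual onset + offset; negative offsets mean the touch
# arrives before the flash, matching the convention in the abstract.
visual_onsets_ms = [1000, 3000, 5000, 7000]   # hypothetical stimulus train
for offset_ms in (-600, -300, 0, 300, 600):
    tactile_onsets_ms = [t + offset_ms for t in visual_onsets_ms]
    label = "synchronous" if offset_ms == 0 else f"tactile {offset_ms:+d} ms"
    print(f"{label:16s} visual={visual_onsets_ms} tactile={tactile_onsets_ms}")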

Feb 02, 2014

Activation of the human mirror neuron system during the observation of the manipulation of virtual tools in the absence of a visible effector limb

Activation of the human mirror neuron system during the observation of the manipulation of virtual tools in the absence of a visible effector limb.

Neurosci Lett. 2013 Oct 25;555:220-4

Authors: Modroño C, Navarrete G, Rodríguez-Hernández AF, González-Mora JL

Abstract. This work explores the mirror neuron system activity produced by the observation of virtual tool manipulations in the absence of a visible effector limb. Functional MRI data was obtained from healthy right-handed participants who manipulated a virtual paddle in the context of a digital game and watched replays of their actions. The results show how action observation produced extended bilateral activations in the parietofrontal mirror neuron system. At the same time, three regions in the left hemisphere (in the primary motor and the primary somatosensory cortex, the supplementary motor area and the dorsolateral prefrontal cortex) showed a reduced BOLD, possibly related with the prevention of inappropriate motor execution. These results can be of interest for researchers and developers working in the field of action observation neurorehabilitation.

Jan 25, 2014

MemoryMirror: First Body-Controlled Smart Mirror

The Intel® Core™ i7-based MemoryMirror takes the clothes-shopping experience to a whole different level, allowing shoppers to try on multiple outfits, then virtually view and compare previous choices on the mirror itself using intuitive hand gestures. Users control all their data and can remain anonymous to the retailer if they so choose. The MemoryMirror uses Intel integrated graphics technology to create avatars of the shopper wearing various items of clothing; these can be shared with friends to solicit feedback or viewed instantly to make an immediate in-store purchase. Shoppers can also save their looks in a mobile app should they decide to purchase online at a later time.

Jan 21, 2014

The Oculus Rift 'Crystal Cove' prototype is 2014's Best of CES winner

Dec 19, 2013

Tricking the brain with transformative virtual reality

Nov 20, 2013

inFORM

inFORM is a Dynamic Shape Display developed by MIT Tangible Media Group that can render 3D content physically, so users can interact with digital information in a tangible way.

inFORM can also interact with the physical world around it, for example moving objects on the table’s surface.

Remote participants in a video conference can be displayed physically, allowing for a strong sense of presence and the ability to interact physically at a distance.

Nov 16, 2013

Monkeys Control Avatar’s Arms Through Brain-Machine Interface

Via Medgadget

Researchers at Duke University have reported in the journal Science Translational Medicine that they were able to train monkeys to control two virtual limbs through a brain-computer interface (BCI). The rhesus monkeys initially used joysticks to become comfortable moving the avatar’s arms; later, the brain-computer interfaces implanted in their brains were activated to allow the monkeys to drive the avatar using only their minds. Two years ago the same team was able to train monkeys to control one arm, but the complexity of controlling two arms required the development of a new algorithm for reading and filtering the signals. Moreover, the monkeys’ brains showed great adaptation to the training with the BCI, building new neural pathways to help improve how the monkeys moved the virtual arms. As the authors of the study note in the abstract, “These findings should help in the design of more sophisticated BMIs capable of enabling bimanual motor control in human patients.”
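The post does not spell out the Duke team's new algorithm, so the sketch below is only a generic stand-in showing the shape of such decoders: learn a mapping from binned neural firing rates to the kinematics of two arms (here four outputs: left x/y and right x/y), using plain least squares on synthetic data:

import numpy as np

rng = np.random.default_rng(1)

n_bins, n_neurons = 2000, 96          # e.g., 96 recorded cortical units
true_W = rng.normal(size=(n_neurons, 4))
rates = rng.poisson(5.0, size=(n_bins, n_neurons)).astype(float)
arm_xy = rates @ true_W + rng.normal(scale=2.0, size=(n_bins, 4))

# Fit the decoder on the first half of the session, test on the second.
W, *_ = np.linalg.lstsq(rates[:1000], arm_xy[:1000], rcond=None)
pred = rates[1000:] @ W
r = [np.corrcoef(pred[:, i], arm_xy[1000:, i])[0, 1] for i in range(4)]
print("decoding correlation (Lx, Ly, Rx, Ry):", np.round(r, 3))

Real bimanual decoding is much harder than this toy: the same neurons can carry information about both arms at once, which is part of why a new filtering algorithm was needed.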

Here’s a video of one of the avatars being controlled to tap on the white balls:
