Oct 12, 2014
From New Scientist
The latest prototype of virtual reality headset Oculus Rift allows you to step right into the movies you watch and the games you play
AN OLD man sits across the fire from me, telling a story. An inky dome of star-flecked sky arcs overhead as his words mingle with the crackling of the flames. I am entranced.
This isn't really happening, but it feels as if it is. This is a program called Storyteller – Fireside Tales that runs on the latest version of the Oculus Rift headset, unveiled last month. The audiobook software harnesses the headset's virtual reality capabilities to deepen immersion in the story.
Fireside Tales is just one example of a new kind of entertainment that delivers convincing true-to-life experiences. Soon films will get a similar treatment.
Movie company 8i, based in Wellington, New Zealand, plans to make films specifically for Oculus Rift. These will be more immersive than just mimicking a real screen in virtual reality because viewers will be able to step inside and explore the movie while they are watching it.
"We are able to synthesise photorealistic views in real-time from positions and directions that were not directly captured," says Eugene d'Eon, chief scientist at 8i. "[Viewers] can not only look around a recorded experience, but also walk or fly. You can re-watch something you love from many different perspectives."
The latest generation of games for Oculus Rift is more innovative, too. Black Hat Oculus is a two-player, cooperative game designed by Mark Sullivan and Adalberto Garza, both graduates of MIT's Game Lab. One headset is for the spy, sneaking through guarded buildings on missions where detection means death. The other player is the overseer, with a God-like view of the world, warning the spy of hidden traps, guards and passageways.
Deep immersion is now possible because the latest Oculus Rift prototype – known as Crescent Bay – finally delivers full positional tracking. This means the images that you see in the headset move in sync with your own movements.
This is the key to unlocking the potential of virtual reality, says Hannes Kaufmann at the Technical University of Vienna in Austria. The headset's high-definition display and wraparound field of view are nice additions, he says, but they aren't essential.
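To make that concrete, here is a minimal sketch of how full positional tracking feeds rendering: the tracked head pose becomes the camera's view matrix each frame. This is my own Python/numpy illustration, not Oculus SDK code, and all names are assumptions.

```python
import numpy as np

def view_matrix(head_pos, head_rot):
    """Turn a tracked head pose into the camera (view) matrix.

    head_pos: (3,) head position in world space, in metres.
    head_rot: (3, 3) rotation matrix for head orientation.
    The view matrix is the inverse of the head's world transform,
    so the rendered world moves exactly opposite to the user.
    """
    view = np.eye(4)
    view[:3, :3] = head_rot.T                # a rotation's inverse is its transpose
    view[:3, 3] = -head_rot.T @ head_pos     # shift the world opposite the head
    return view

# Each frame: read the newest tracker sample and re-render both eyes.
head_pos = np.array([0.10, 1.70, 0.0])       # user has leaned 10 cm to the right
head_rot = np.eye(3)                         # looking straight ahead
print(view_matrix(head_pos, head_rot))
```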
The next step, says Kaufmann, is to allow people to see their own virtual limbs, not just empty space, in the places where their brain expects them to be. That's why Beijing-based motion capture company Perception, which raised more than $500,000 on Kickstarter in September, is working on a full-body suit that captures body pose and gives haptic feedback – a sense of touch – to the wearer. Software like Fireside Tales will then be able to take your body position into account.
In the future, humans will be able to direct live virtual experiences themselves, says Kaufmann. "Imagine you're meeting an alien in the virtual reality, and you want to shake hands. You could have a real person go in there and shake hands with you, but for you only the alien is present."
Oculus, which was bought by Facebook in July for $2 billion, has not yet announced when the headset will be available to buy.
This article appeared in print under the headline "Deep and meaningful"
Oct 06, 2014
Bionic Vision Australia, a collaboration of researchers working on a bionic eye, has announced that its prototype implant has completed a two-year trial in patients with advanced retinitis pigmentosa. Three patients with profound vision loss received 24-channel suprachoroidal electrode implants that caused no noticeable serious side effects. Moreover, though this was not formally part of the study, the patients were able to see more light and to distinguish shapes that had been invisible to them prior to implantation. The newly gained vision helped them navigate around objects and spot items on a tabletop.
The next step is to try out the latest 44-channel device in a clinical trial slated for next year and then move on to a 98-channel system that is currently in development.
“This study is critically important to the continuation of our research efforts and the results exceeded all our expectations,” Professor Mark Hargreaves, Chair of the BVA board, said in a statement. “We have demonstrated clearly that our suprachoroidal implants are safe to insert surgically and cause no adverse events once in place. Significantly, we have also been able to observe that our device prototype was able to evoke meaningful visual perception in patients with profound visual loss.”
Here’s one of the study participants using the bionic eye:
An international team of neuroscientists and robotics engineers has demonstrated the first direct remote brain-to-brain communication between two humans located 5,000 miles apart and communicating via the Internet, as reported in a paper recently published in PLOS ONE (open access).
Emitter and receiver subjects with non-invasive devices supporting, respectively, a brain-computer interface (BCI), based on EEG changes, driven by motor imagery (left) and a computer-brain interface (CBI) based on the reception of phosphenes elicited by neuro-navigated TMS (right) (credit: Carles Grau et al./PLoS ONE)
In India, researchers encoded two words (“hola” and “ciao”) as binary strings and presented them as a series of cues on a computer monitor. They recorded the subject’s EEG signals as the subject was instructed to think about moving his feet (binary 0) or hands (binary 1). They then sent the recorded series of binary values in an email message to researchers in France, 5,000 miles away.
There, the binary strings were converted into a series of transcranial magnetic stimulation (TMS) pulses applied to a hotspot location in the right visual occipital cortex that either produced a phosphene (perceived flash of light) or not.
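Strip away the hardware and the link is a simple binary protocol: motor imagery encodes each bit on the sending side, and a phosphene (or its absence) renders it on the receiving side. A toy Python sketch of that pipeline follows; the 5-bits-per-letter code is an illustrative assumption (the paper used a similar Bacon-style cipher), and no real EEG or TMS is involved.

```python
def encode(word):
    """Encode a word as a bit string (5 bits per letter, a=0 .. z=25)."""
    return "".join(format(ord(c) - ord("a"), "05b") for c in word.lower())

def decode(bits):
    """Recover the word from the receiver's phosphene report."""
    chunks = [bits[i:i + 5] for i in range(0, len(bits), 5)]
    return "".join(chr(int(b, 2) + ord("a")) for b in chunks)

MOTOR_IMAGERY = {"0": "imagine moving feet", "1": "imagine moving hands"}
TMS_OUTPUT = {"0": "no phosphene", "1": "phosphene (flash of light)"}

bits = encode("hola")                          # bit string sent over the Internet
print(bits)
print([MOTOR_IMAGERY[b] for b in bits[:5]])    # sender's cues, first letter only
print([TMS_OUTPUT[b] for b in bits[:5]])       # receiver's percepts, first letter
assert decode(bits) == "hola"
```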
“We wanted to find out if one could communicate directly between two people by reading out the brain activity from one person and injecting brain activity into the second person, and do so across great physical distances by leveraging existing communication pathways,” explains coauthor Alvaro Pascual-Leone, MD, PhD, Director of the Berenson-Allen Center for Noninvasive Brain Stimulation at Beth Israel Deaconess Medical Center (BIDMC) and Professor of Neurology at Harvard Medical School.
A team of researchers from Starlab Barcelona, Spain and Axilum Robotics, Strasbourg, France conducted the experiment. A second similar experiment was conducted between individuals in Spain and France.
“We believe these experiments represent an important first step in exploring the feasibility of complementing or bypassing traditional language-based or other motor/PNS mediated means in interpersonal communication,” the researchers say in the paper.
“Although certainly limited in nature (e.g., the bit rates achieved in our experiments were modest even by current BCI (brain-computer interface) standards, mostly due to the dynamics of the precise CBI (computer-brain interface) implementation), these initial results suggest new research directions, including the non-invasive direct transmission of emotions and feelings or the possibility of sense synthesis in humans — that is, the direct interface of arbitrary sensors with the human brain using brain stimulation, as previously demonstrated in animals with invasive methods.
Brain-to-brain (B2B) communication system overview. On the left, the BCI subsystem is shown schematically, including electrodes over the motor cortex and the EEG amplifier/transmitter wireless box in the cap. Motor imagery of the feet codes the bit value 0, of the hands codes bit value 1. On the right, the CBI system is illustrated, highlighting the role of coil orientation for encoding the two bit values. Communication between the BCI and CBI components is mediated by the Internet. (Credit: Carles Grau et al./PLoS ONE)
“The proposed technology could be extended to support a bi-directional dialogue between two or more mind/brains (namely, by the integration of EEG and TMS systems in each subject). In addition, we speculate that future research could explore the use of closed mind-loops in which information associated to voluntary activity from a brain area or network is captured and, after adequate external processing, used to control other brain elements in the same subject. This approach could lead to conscious synthetically mediated modulation of phenomena best detected subjectively by the subject, including emotions, pain and psychotic, depressive or obsessive-compulsive thoughts.
“Finally, we anticipate that computers in the not-so-distant future will interact directly with the human brain in a fluent manner, supporting both computer- and brain-to-brain communication routinely. The widespread use of human brain-to-brain technologically mediated communication will create novel possibilities for human interrelation with broad social implications that will require new ethical and legislative responses.”
This work was partly supported by the EU FP7 FET Open HIVE project, the Starlab Kolmogorov project, and the Neurology Department of the Hospital de Bellvitge.
Georgia Institute of Technology researchers have created a speech-to-text Android app for Google Glass that displays real-time captions for hard-of-hearing users when someone is talking to them face to face.
“This system allows wearers like me to focus on the speaker’s lips and facial gestures,” said School of Interactive Computing Professor Jim Foley.
“If hard-of-hearing people understand the speech, the conversation can continue immediately without waiting for the caption. However, if I miss a word, I can glance at the transcription, get the word or two I need and get back into the conversation.”
Captioning on Glass displays captions for the hard-of-hearing (credit: Georgia Tech)
Foley and the students are working with the Association of Late Deafened Adults in Atlanta to improve the program. An iPhone app is planned.
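For a feel of the basic loop behind an app like this (capture audio, transcribe it, push text to the display), here is a minimal Python sketch using the off-the-shelf speech_recognition package. It is not the Georgia Tech code, which runs on an Android phone paired with Glass, and send_to_display() is a hypothetical stand-in.

```python
import speech_recognition as sr

def send_to_display(text):
    print(f"[GLASS] {text}")   # stand-in for pushing text to the heads-up display

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    while True:
        audio = recognizer.listen(source, phrase_time_limit=5)
        try:
            # Cloud speech-to-text; the real app uses Android's recognizer.
            send_to_display(recognizer.recognize_google(audio))
        except sr.UnknownValueError:
            pass   # nothing intelligible in this chunk; keep listening
```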
Sep 21, 2014
I still can't close my mouth.
The demo lasted about 10 minutes, during which several scenes were presented. The resolution and frame rate are astounding, and you can turn completely around. This is the first time in my life I can truly say I was there.
I believe this is really the beginning of a new era for VR, and I am sure I won't sleep tonight thinking about the infinite possibilities and applications of this technology. And I don't think I am exaggerating; if anything, I am underestimating.
Jul 29, 2014
Sidewalk collisions involving pedestrians engrossed in their electronic devices have become an irritating (and sometimes dangerous) fact of city life. To prevent them, what about just creating a no-cellphones lane on the sidewalk? Would people follow the signs? That's what a TV crew decided to find out on a Washington, D.C., street Thursday, as part of a behavioral science experiment for a new National Geographic TV series. [via Quartz]
As expected, some pedestrians ignored the chalk markings designating a no-cellphones lane and a lane warning pedestrians to walk at their own risk. Others didn't even see them because they were too busy staring at their phones. But others stopped, took pictures and posted them, from their phones, of course.
MindRDR connects Google Glass with a device that monitors brain activity, allowing users to take pictures and share them on Twitter or Facebook.
Once a user has decided to share an image, we analyse their brain data and provide an evaluation of their ability to control the interface with their mind. This information is attached to every shared image.
The current version of MindRDR uses commercially available brain monitor Neurosky MindWave Mobile to extract core metrics from the mind.
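The control logic amounts to a threshold on that metric. Here is a rough Python sketch, assuming a reader that yields the MindWave's 0-100 attention value about once per second; read_attention(), take_photo(), and share_photo() are hypothetical stand-ins, and the threshold is a guess rather than MindRDR's actual value.

```python
import time

THRESHOLD = 80          # assumed trigger level, not MindRDR's documented value

def control_loop(read_attention, take_photo, share_photo):
    photo = None
    while True:
        attention = read_attention()    # NeuroSky "attention" eSense, 0-100
        if attention >= THRESHOLD:
            if photo is None:
                photo = take_photo()    # first spike of concentration: capture
            else:
                share_photo(photo)      # second spike: share to Twitter/Facebook
                photo = None
        time.sleep(1.0)                 # the MindWave updates at roughly 1 Hz
```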
Ekso is an exoskeleton bionic suit, or "wearable robot," designed to enable individuals with lower-extremity paralysis to stand up and walk over ground with a weight-bearing, four-point reciprocal gait. Walking is achieved by the user's forward-lateral weight shift to initiate a step. Battery-powered motors drive the legs and replace neuromuscular function.
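As a rough illustration of that trigger logic (Ekso's actual controller is proprietary; the reader function and threshold below are hypothetical):

```python
SHIFT_THRESHOLD = 0.65   # assumed fraction of body weight on the stance leg

def gait_loop(read_weight_distribution, step_with):
    swing = "left"                                  # leg that will step next
    while True:
        left, right = read_weight_distribution()    # weight fractions, sum ~1
        stance = right if swing == "left" else left
        if stance >= SHIFT_THRESHOLD:
            step_with(swing)                        # motors drive the swing leg
            swing = "right" if swing == "left" else "left"
```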
Ekso Bionics http://eksobionics.com/
Jul 09, 2014
Experiential Virtual Scenarios With Real-Time Monitoring (Interreality) for the Management of Psychological Stress: A Block Randomized Controlled Trial
Background: The recent convergence between technology and medicine is offering innovative methods and tools for behavioral health care. Among these, an emerging approach is the use of virtual reality (VR) within exposure-based protocols for anxiety disorders, and in particular posttraumatic stress disorder. However, no systematically tested VR protocols are available for the management of psychological stress.

Objective: Our goal was to evaluate the efficacy of a new technological paradigm, Interreality, for the management and prevention of psychological stress. The main feature of Interreality is a twofold link between the virtual and the real world, achieved through experiential virtual scenarios (fully controlled by the therapist, used to learn coping skills and improve self-efficacy) with real-time monitoring and support (identifying critical situations and assessing clinical change) using advanced technologies (virtual worlds, wearable biosensors, and smartphones).
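The full protocol is described in the paper; purely as a speculative sketch of that twofold link, with hypothetical sensor and scheduling functions:

```python
STRESS_THRESHOLD = 0.7   # assumed normalized stress score

def estimate_stress(sample):
    """Toy normalized score: elevated heart rate and skin conductance -> stress."""
    hr, scl = sample["heart_rate"], sample["skin_conductance"]
    return 0.5 * min(1.0, max(0.0, (hr - 60) / 80)) + 0.5 * min(1.0, scl / 20)

def interreality_loop(read_biosensors, notify_smartphone, schedule_vr_session):
    history = []                            # retained for assessing clinical change
    while True:
        stress = estimate_stress(read_biosensors())
        history.append(stress)
        if stress > STRESS_THRESHOLD:       # critical situation in the real world...
            notify_smartphone("High stress detected: try a relaxation exercise")
            schedule_vr_session("coping-skills scenario")   # ...feeds the virtual one
```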
Full text paper available at: http://www.jmir.org/2014/7/e167/
Apr 29, 2014
Actually, in my experience, citizens and public stakeholders are not well informed or educated about mHealth. For example, to many people the idea of using phones to deliver mental health programs still sounds weird.
Yet the number of mental health apps is rapidly growing: a recent survey identified 200 unique mobile tools specifically associated with behavioral health.
These applications now cover a wide array of clinical areas, including developmental, cognitive, and substance-related disorders, as well as psychotic and mood disorders.
I think that the increasing "applification" of mental health is explained by three potential benefits of this approach:
- First, mobile apps can be integrated into different stages of treatment: from promoting awareness of disease, to increasing treatment compliance, to preventing relapse.
- Furthermore, mobile tools can be used to monitor behavioural and psychological symptoms in everyday life: self-reported data can be complemented with readings from inbuilt or wearable sensors to fine-tune treatment according to the individual patient’s needs.
- Last - but not least - mobile applications can help patients to stay on top of current research, facilitating access to evidence-based care. For example, in the EC-funded INTERSTRESS project, we investigated these potentials in the assessment and management of psychological stress, by developing different mobile applications (including the award-winning Positive Technology app) for helping people to monitor stress levels “on the go” and learn new relaxation skills.
In short, I believe that mental mHealth has the potential to provide the right care, at the right time, at the right place. However, from my personal experience I have identified three key challenges that must be faced in order to realize the potential of this approach.
I call them the three "nEEEds" of mental mHealth: evidence, engagement, enactment.
- Evidence refers to the need for clinical proof of efficacy or effectiveness, provided through randomised trials.
- Engagement relates to the need to ensure usability and accessibility of mobile interfaces: this goes beyond reducing use errors that may put the patient at risk of psychological discomfort, to include the creation of a compelling and engaging user experience.
- Finally, enactment concerns the need for appropriate regulations, enacted by competent authorities, to catch up with mHealth technology development.
As a beneficiary of EC-funded grants myself, I recognize that the EC's R&D investments in mHealth across FP6 and FP7 have helped position Europe at the forefront of this revolution. And the return on this investment could be substantial: it has been predicted that full exploitation of mHealth solutions could save nearly 100 billion EUR of total annual EU healthcare spend in 2017.
I believe that a progressively larger portion of these savings may be generated by the adoption of mobile solutions in the mental health sector: in the WHO European Region, mental ill health accounts for almost 20% of the burden of disease.
For this prediction to be fulfilled, however, many barriers must be overcome: the three "nEEEds" of mental mHealth are probably only the start of the list. Hopefully, the Green Paper consultation will help to identify further opportunities and concerns facing mental mHealth, in order to ensure a successful implementation of this approach.
Apr 15, 2014
Android Wear will show you info from a wide variety of Android apps, such as messages, social apps, chats, notifications, health and fitness, music playlists, and videos.
It will also enable Google Now functions — say "OK, Google" to check flight times, send a text, get the weather, view email, get directions, estimate travel time, make a reservation, and more.
Google says it’s working with several other consumer-electronics manufacturers, including Asus, HTC, and Samsung; chip makers Broadcom, Imagination, Intel, Mediatek and Qualcomm; and fashion brands like the Fossil Group to offer watches powered by Android Wear later this year.
If you’re a developer, there’s a new section on developer.android.com/wear focused on wearables. Starting today, you can download a Developer Preview so you can tailor your existing app notifications for watches powered by Android Wear.
Glyph looks like a normal headset and operates like one, too. That is, until you move the headband down over your eyes and it becomes a fully functional visual visor that displays movies, television shows, video games or any other media connected via the attached HDMI cable.
Using Virtual Retinal Display (VRD), a technology that mimics the way we see light, the Glyph projects images directly onto your retina using one million micromirrors in each eyepiece. These micromirrors reflect the images onto the retina, producing a reportedly crisp and vivid picture.
Apr 06, 2014
Researchers at John A. Rogers' lab at the University of Illinois, Urbana-Champaign have incorporated off-the-shelf chips into flexible electronic patches to allow for high-quality ECG and EEG monitoring.
Here is the video:
Mar 02, 2014
Reblogged from Medgadget
People unfortunate enough to lose an arm or a leg often feel pain in their missing limb, an unexplained condition known as phantom limb pain. Researchers at Chalmers University of Technology in Sweden decided to test whether they can fool the brain into believing the limb is still there and maybe stop the pain.
They attached electrodes to the skin of the remaining arm of an amputee to read the myoelectric signals from the muscles underneath. The arm was also tracked in 3D using a marker, so that the data could drive a computer-generated moving avatar as well as computer games. The amputee moves the avatar's arm as he would his own if it still existed, while the brain becomes reacquainted with its presence. After repeated use, including playing video games controlled through the same myoelectric interface, the person in the study experienced significant pain reduction after decades of phantom limb pain.
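For context, myoelectric control of this kind typically follows a standard pattern-recognition pipeline: window the EMG, extract simple time-domain features, classify the intended movement, and forward it to the avatar. A generic Python sketch of that pipeline (illustrative only, not the Chalmers group's code):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def features(window):
    """Classic time-domain features for one multi-channel EMG window."""
    mav = np.mean(np.abs(window), axis=0)                        # mean absolute value
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)   # zero crossings
    return np.concatenate([mav, zc])

def train(windows, labels):
    """Fit a classifier on windows recorded while the user attempts known movements."""
    clf = LinearDiscriminantAnalysis()
    clf.fit([features(w) for w in windows], labels)
    return clf

def predict_movement(clf, window):
    """Classify a new ~200 ms window; the label drives the avatar's arm."""
    return clf.predict([features(window)])[0]
```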
Here’s a video showing off the experimental setup:
Feb 11, 2014
Keio University scientists have developed a “neurocam” — a wearable camera system that detects emotions, based on an analysis of the user’s brainwaves.
The hardware is a combination of Neurosky's MindWave Mobile and a customized brainwave sensor.
The user's interest is quantified on a scale of 0 to 100. The camera automatically records five-second clips of scenes when the interest value exceeds 60, tagging them with a timestamp and location; clips can be replayed later and shared socially on Facebook.
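As described, the trigger logic boils down to a rolling frame buffer and a threshold. A toy Python sketch, where read_interest(), capture_frame(), get_location(), and save_clip() are hypothetical stand-ins for the headset's sensors and storage:

```python
import time
from collections import deque

FPS, CLIP_SECONDS, TRIGGER = 10, 5, 60
buffer = deque(maxlen=FPS * CLIP_SECONDS)    # rolling last five seconds of frames

def run(read_interest, capture_frame, get_location, save_clip):
    while True:
        buffer.append(capture_frame())
        if read_interest() > TRIGGER and len(buffer) == buffer.maxlen:
            # Tag the clip with timestamp and location, as described.
            save_clip(list(buffer), time.time(), get_location())
            buffer.clear()                   # avoid re-saving the same clip
        time.sleep(1.0 / FPS)
```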
The researchers plan to make the device smaller, more comfortable, and fashionable to wear.
Feb 02, 2014
An improved assistive technology system for the blind that uses sonification (representing visual information with sound) has been developed by Universidad Carlos III de Madrid (UC3M) researchers, with the goal of replacing costly, bulky current systems.
Called Assistive Technology for Autonomous Displacement (ATAD), the system includes a stereo vision processor that measures the disparity between images captured by two cameras placed slightly apart (for image depth data) and calculates the distance to each point in the scene.
It then transmits the information to the user by means of a sound code that conveys the position of, and distance to, the various obstacles, using a small stereo audio amplifier and bone-conduction headphones.
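The two stages (depth from stereo disparity, then a distance-to-sound mapping) can be sketched compactly. The parameters and obstacle detector below are assumptions, not UC3M's published code; depth follows the standard stereo relation Z = f * B / d:

```python
import numpy as np

FOCAL_PX = 700.0       # focal length in pixels (assumed)
BASELINE_M = 0.12      # camera separation in metres (assumed)

def depth_map(disparity_px):
    """Distance to each pixel, from a stereo disparity map in pixels."""
    d = np.where(disparity_px > 0, disparity_px, np.nan)   # mask invalid matches
    return FOCAL_PX * BASELINE_M / d                       # Z = f * B / d

def obstacles(depth, strips=8):
    """Nearest point in each of several vertical image strips (toy detector)."""
    h, w = depth.shape
    for i in range(strips):
        strip = depth[:, i * w // strips:(i + 1) * w // strips]
        if np.isfinite(strip).any():
            yield (i + 0.5) * w / strips, np.nanmin(strip)

def sonify(depth, max_range=5.0):
    """Map each obstacle to (left_gain, right_gain, pitch_hz) stereo cues."""
    h, w = depth.shape
    cues = []
    for x, z in obstacles(depth):
        pan = x / w                             # 0 = far left, 1 = far right
        loud = max(0.0, 1.0 - z / max_range)    # nearer obstacle -> louder
        pitch = 200 + 800 * loud                # nearer obstacle -> higher pitch
        cues.append(((1 - pan) * loud, pan * loud, pitch))
    return cues
```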
Jan 25, 2014
The Intel® Core™ i7-based MemoryMirror takes the clothes shopping experience to a whole different level, allowing shoppers to try on multiple outfits, then virtually view and compare previous choices on the mirror itself using intuitive hand gestures. Users control all their data and can remain anonymous to the retailer if they so choose. The MemoryMirror uses Intel integrated graphics technology to create avatars of the shopper wearing various clothing items; these can be shared with friends to solicit feedback or viewed instantly to make an immediate in-store purchase. Shoppers can also save their looks in a mobile app should they decide to purchase later online.
Jan 20, 2014
Thalmic Labs at TEDxToronto
Jan 12, 2014
Melody Shiue, an industrial design graduate from the University of New South Wales, is proposing a wearable fetal ultrasound system called PreVue to enhance maternal-fetal bonding as a reassurance window. It is an e-textile-based apparatus that uses 4D ultrasound. The latest stretchable display technology is employed on the abdominal region, allowing other members of the family, especially the father, to connect with the foetus. PreVue not only gives you the opportunity to interact with and comprehend the physical growth of the baby, but also an early understanding of its personality as you see it yawning, rolling, smiling, etc., bringing you closer until the day it finally rests in your arms.
More information at Tuvie