Apr 29, 2014

Evidence, Engagement, Enactment: The three 'nEEEds' of mental mHealth

Originally posted on Digital Agenda for Europe
 
As a researcher in the field of mHealth & mental health, I welcome the "Green Paper on Mobile Health" recently published by the European Commission. I believe that this document can provide a useful platform for discussing key issues related to the deployment of mHealth, thereby helping to bridge the gap between policy, research and practice.

In my experience, citizens and public stakeholders are not well informed about mHealth. For example, to many people the idea of using phones to deliver mental health programs still sounds strange.

Yet the number of mental health apps is rapidly growing: a recent survey identified 200 unique mobile tools specifically associated with behavioral health.

These applications now cover a wide array of clinical areas including developmental disorders, cognitive disorders, substance-related disorders, as well as psychotic and mood disorders.

I think that the increasing "applification" of mental health is explained by three potential benefits of this approach:

  • First, mobile apps can be integrated in different stages of treatment: from promoting awareness of disease, to increasing treatment compliance, to preventing relapse.
  • Furthermore, mobile tools can be used to monitor behavioural and psychological symptoms in everyday life: self-reported data can be complemented with readings from inbuilt or wearable sensors to fine-tune treatment according to the individual patient’s needs.
  • Last - but not least - mobile applications can help patients to stay on top of current research, facilitating access to evidence-based care. For example, in the EC-funded INTERSTRESS project, we investigated these potentials in the assessment and management of psychological stress, by developing different mobile applications (including the award-winning Positive Technology app) for helping people to monitor stress levels “on the go” and learn new relaxation skills.

In short, I believe that mental mHealth has the potential to provide the right care, at the right time, at the right place. However, from my personal experience I have identified three key challenges that must be faced in order to realize the potential of this approach.

I call them the three "nEEEds" of mental mHealth: evidence, engagement, enactment.

  • Evidence refers to the need for clinical proof of efficacy or effectiveness, provided through randomised trials.
  • Engagement relates to the need to ensure the usability and accessibility of mobile interfaces: this goes beyond reducing use errors that may cause psychological discomfort for the patient, to include the creation of a compelling and engaging user experience.
  • Finally, enactment concerns the need for appropriate regulations, enacted by competent authorities, to catch up with the pace of mHealth technology development.

As a beneficiary of EC-funded grants myself, I recognize that the EC's R&D investments in mHealth across FP6 and FP7 have helped position Europe at the forefront of this revolution. And the return on this investment could be substantial: it has been predicted that full exploitation of mHealth solutions could yield nearly 100 billion EUR in savings on total annual EU healthcare spending in 2017.

I believe that a progressively larger portion of these savings may be generated by the adoption of mobile solutions in the mental health sector: indeed, in the WHO European Region, mental ill health accounts for almost 20% of the burden of disease.

For this prediction to be fulfilled, however, many barriers must be overcome: the three "nEEEds" of mental mHealth are probably only the start of the list. Hopefully, the Green Paper consultation will help to identify further opportunities and concerns facing mental mHealth, in order to ensure a successful implementation of this approach.

Commodore 64 commercial (1985)

Apr 15, 2014

A post-stroke rehabilitation system integrating robotics, VR and high-resolution EEG imaging

A post-stroke rehabilitation system integrating robotics, VR and high-resolution EEG imaging.

IEEE Trans Neural Syst Rehabil Eng. 2013 Sep;21(5):849-59

Authors: Steinisch M, Tana MG, Comani S

Abstract
We propose a system for the neuro-motor rehabilitation of upper limbs in stroke survivors. The system is composed of a passive robotic device (Trackhold) for kinematic tracking and gravity compensation, five dedicated virtual reality (VR) applications for training of distinct movement patterns, and high-resolution EEG for synchronous monitoring of cortical activity. In contrast to active devices, the Trackhold omits actuators for increased patient safety and acceptance levels, and for reduced complexity and costs. VR applications present all relevant information for task execution as easy-to-understand graphics that do not need any written or verbal instructions. High-resolution electroencephalography (HR-EEG) is synchronized with kinematic data acquisition, allowing for the epoching of EEG signals on the basis of movement-related temporal events. Two healthy volunteers participated in a feasibility study and performed a protocol suggested for the rehabilitation of post-stroke patients. Kinematic data were analyzed by means of in-house code. Open source packages (EEGLAB, SPM, and GMAC) and in-house code were used to process the neurological data. Results from kinematic and EEG data analysis are in line with knowledge from currently available literature and theoretical predictions, and demonstrate the feasibility and potential usefulness of the proposed rehabilitation system to monitor neuro-motor recovery.
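The epoching step described above — cutting the continuous EEG into windows locked to movement-related temporal events from the kinematic tracker — can be illustrated with a minimal NumPy sketch. This is not the authors' code (they used EEGLAB, SPM, GMAC, and in-house routines); the function name and window parameters below are assumptions for illustration only.

```python
import numpy as np

def epoch_by_events(signal, sfreq, event_times, tmin=-0.5, tmax=1.0):
    """Cut a continuous single-channel signal into event-locked epochs.

    signal      : 1-D array of samples
    sfreq       : sampling frequency in Hz
    event_times : event onsets in seconds (e.g. movement onsets
                  taken from the kinematic recording)
    tmin, tmax  : epoch window around each event, in seconds
    """
    n_pre = int(round(-tmin * sfreq))   # samples before the event
    n_post = int(round(tmax * sfreq))   # samples after the event
    epochs = []
    for t in event_times:
        center = int(round(t * sfreq))
        start, stop = center - n_pre, center + n_post
        if start >= 0 and stop <= len(signal):  # skip epochs cut off at the edges
            epochs.append(signal[start:stop])
    return np.array(epochs)

# Example: 10 s of synthetic EEG at 100 Hz, movement onsets at 2 s and 5 s
sfreq = 100.0
eeg = np.random.randn(int(10 * sfreq))
epochs = epoch_by_events(eeg, sfreq, [2.0, 5.0])
print(epochs.shape)  # (2, 150): two epochs of 0.5 s pre + 1.0 s post stimulus
```

In a real pipeline each epoch would then be baseline-corrected and averaged (or fed to time-frequency analysis), which is what toolboxes such as EEGLAB automate.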

Brain-computer interfaces: a powerful tool for scientific inquiry

Brain-computer interfaces: a powerful tool for scientific inquiry.

Curr Opin Neurobiol. 2014 Apr;25C:70-75

Authors: Wander JD, Rao RP

Abstract. Brain-computer interfaces (BCIs) are devices that record from the nervous system, provide input directly to the nervous system, or do both. Sensory BCIs such as cochlear implants have already had notable clinical success and motor BCIs have shown great promise for helping patients with severe motor deficits. Clinical and engineering outcomes aside, BCIs can also be tremendously powerful tools for scientific inquiry into the workings of the nervous system. They allow researchers to inject and record information at various stages of the system, permitting investigation of the brain in vivo and facilitating the reverse engineering of brain function. Most notably, BCIs are emerging as a novel experimental tool for investigating the tremendous adaptive capacity of the nervous system.

Android Wear

Via KurzweilAI.net

Google has announced Android Wear, a project that extends Android to wearables, starting with two watches, both due out this summer: Motorola’s Moto 360 and LG’s G Watch.

Android Wear will show you info from a wide variety of Android apps, such as messages, social apps, chats, notifications, health and fitness, music playlists, and videos.

It will also enable Google Now functions — say “OK, Google” to check flight times, send a text, get the weather, view email, get directions, check travel time, make a reservation, and more.

Google says it’s working with several other consumer-electronics manufacturers, including Asus, HTC, and Samsung; chip makers Broadcom, Imagination, Intel, Mediatek and Qualcomm; and fashion brands like the Fossil Group to offer watches powered by Android Wear later this year.

If you’re a developer, there’s a new section on developer.android.com/wear focused on wearables. Starting today, you can download a Developer Preview so you can tailor your existing app notifications for watches powered by Android Wear.

A Hybrid Brain Computer Interface System Based on the Neurophysiological Protocol and Brain-actuated Switch for Wheelchair Control

A Hybrid Brain Computer Interface System Based on the Neurophysiological Protocol and Brain-actuated Switch for Wheelchair Control.

J Neurosci Methods. 2014 Apr 5;

Authors: Cao L, Li J, Ji H, Jiang C

BACKGROUND: Brain Computer Interfaces (BCIs) are developed to translate brain waves into machine instructions for external devices control. Recently, hybrid BCI systems are proposed for the multi-degree control of a real wheelchair to improve the systematical efficiency of traditional BCIs. However, it is difficult for existing hybrid BCIs to implement the multi-dimensional control in one command cycle.
NEW METHOD: This paper proposes a novel hybrid BCI system that combines motor imagery (MI)-based bio-signals and steady-state visual evoked potentials (SSVEPs) to control the speed and direction of a real wheelchair synchronously. Furthermore, a hybrid modalities-based switch is firstly designed to turn on/off the control system of the wheelchair.
RESULTS: Two experiments were performed to assess the proposed BCI system. One was implemented for training and the other one conducted a wheelchair control task in the real environment. All subjects completed these tasks successfully and no collisions occurred in the real wheelchair control experiment.
COMPARISON WITH EXISTING METHOD(S): The protocol of our BCI gave much more control commands than those of previous MI and SSVEP-based BCIs. Comparing with other BCI wheelchair systems, the superiority reflected by the index of path length optimality ratio validated the high efficiency of our control strategy.
CONCLUSIONS: The results validated the efficiency of our hybrid BCI system to control the direction and speed of a real wheelchair as well as the reliability of hybrid signals-based switch control.
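The fusion logic of such a hybrid BCI can be sketched in a few lines. The mapping below — MI classifier output selecting direction, detected SSVEP frequency selecting speed, and a switch state gating the whole system — is an illustrative assumption, not the authors' actual protocol; the function name, labels, and frequency-to-speed table are all hypothetical.

```python
def fuse_commands(switch_on, mi_label, ssvep_freq):
    """Combine decoder outputs into a single (direction, speed) command.

    switch_on  : state of the brain-actuated on/off switch
    mi_label   : 'left', 'right', or 'rest' from the motor-imagery classifier
    ssvep_freq : detected flicker frequency in Hz, or None if no target attended
    """
    if not switch_on:
        return ('stop', 0.0)  # control system is off: wheelchair halts
    direction = {'left': 'turn_left',
                 'right': 'turn_right',
                 'rest': 'forward'}[mi_label]
    # Hypothetical mapping of SSVEP target frequencies to speed levels;
    # an unrecognized/absent frequency falls back to the slowest speed.
    speed = {10.0: 0.3, 12.0: 0.6, 15.0: 1.0}.get(ssvep_freq, 0.3)
    return (direction, speed)

print(fuse_commands(True, 'left', 15.0))   # ('turn_left', 1.0)
print(fuse_commands(False, 'right', 10.0)) # ('stop', 0.0)
```

Issuing direction and speed in the same command cycle, as here, is what distinguishes this synchronous hybrid design from earlier BCIs that needed separate cycles per dimension.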

Avegant - Glyph Kickstarter - Wearable Retinal Display

Via Mashable

Move over Google Glass and Oculus Rift, there's a new kid on the block: Glyph, a mobile, personal theater.

Glyph looks like a normal headset and operates like one, too. That is, until you move the headband down over your eyes and it becomes a fully-functional visual visor that displays movies, television shows, video games or any other media connected via the attached HDMI cable.

Using Virtual Retinal Display (VRD), a technology that mimics the way we see light, the Glyph projects images directly onto your retina using one million micromirrors in each eye piece. These micromirrors reflect the images back to the retina, producing a reportedly crisp and vivid quality.

A sweet, sad stop-motion film made with 3-D printing

Via Wired

London-based creative agency DBLG shows the way with “Bears on Stairs,” a short clip that combines a 3-D printed hero with traditional stop-motion animation to charming effect. The ursine epic has a 2-second run time and took four weeks to complete, making it about as efficient as your average Michael Bay production, by my rough calculations. The lumbering action took 50 printed models in all.

BEARS ON STAIRS from DBLG on Vimeo.


First video game

Fifty years ago, before either arcades or home video games, visitors waited in line at Brookhaven National Laboratory to play Tennis for Two, an electronic tennis game that is unquestionably a forerunner of the modern video game. Two people played the electronic tennis game with separate controllers that connected to an analog computer and used an oscilloscope for a screen. The game's creator, William Higinbotham, was a physicist who lobbied for nuclear nonproliferation as the first chair of the Federation of American Scientists.

Apr 06, 2014

Measuring the effects through time of the influence of visuomotor and visuotactile synchronous stimulation on a virtual body ownership illusion

Measuring the effects through time of the influence of visuomotor and visuotactile synchronous stimulation on a virtual body ownership illusion.

Perception. 2014;43(1):43-58

Authors: Kokkinara E, Slater M

Abstract. Previous studies have examined the experience of owning a virtual surrogate body or body part through specific combinations of cross-modal multisensory stimulation. Both visuomotor (VM) and visuotactile (VT) synchronous stimulation have been shown to be important for inducing a body ownership illusion, each tested separately or both in combination. In this study we compared the relative importance of these two cross-modal correlations, when both are provided in the same immersive virtual reality setup and the same experiment. We systematically manipulated VT and VM contingencies in order to assess their relative role and mutual interaction. Moreover, we present a new method for measuring the induced body ownership illusion through time, by recording reports of breaks in the illusion of ownership ('breaks') throughout the experimental phase. The balance of the evidence, from both questionnaires and analysis of the breaks, suggests that while VM synchronous stimulation contributes the greatest to the attainment of the illusion, a disruption of either (through asynchronous stimulation) contributes equally to the probability of a break in the illusion.

Glass brain flythrough: beyond neurofeedback

Via Neurogadget

Researchers have developed a new way to explore the human brain in virtual reality. The system, called Glass Brain, developed by Philip Rosedale, creator of the famous game Second Life, and Adam Gazzaley, a neuroscientist at the University of California San Francisco, combines brain scanning, brain recording and virtual reality to allow a user to journey through a person’s brain in real-time.

Read the full story on Neurogadget

Stick-on electronic patches for health monitoring

Flexible Skin Worn Patch Monitors EEG, ECG, Sends Recorded Data via Wireless (VIDEO)

Researchers at John A. Rogers’ lab at the University of Illinois, Urbana-Champaign have incorporated off-the-shelf chips into flexible electronic patches to allow for high quality ECG and EEG monitoring.


The effects of augmented visual feedback during balance training in Parkinson's disease - trial protocol

The effects of augmented visual feedback during balance training in Parkinson's disease: study design of a randomized clinical trial.

BMC Neurol. 2013;13:137

Authors: van den Heuvel MR, van Wegen EE, de Goede CJ, Burgers-Bots IA, Beek PJ, Daffertshofer A, Kwakkel G

Abstract. BACKGROUND: Patients with Parkinson's disease often suffer from reduced mobility due to impaired postural control. Balance exercises form an integral part of rehabilitative therapy but the effectiveness of existing interventions is limited. Recent technological advances allow for providing enhanced visual feedback in the context of computer games, which provide an attractive alternative to conventional therapy. The objective of this randomized clinical trial is to investigate whether a training program capitalizing on virtual-reality-based visual feedback is more effective than an equally-dosed conventional training in improving standing balance performance in patients with Parkinson's disease.
METHODS/DESIGN: Patients with idiopathic Parkinson's disease will participate in a five-week balance training program comprising ten treatment sessions of 60 minutes each. Participants will be randomly allocated to (1) an experimental group that will receive balance training using augmented visual feedback, or (2) a control group that will receive balance training in accordance with current physical therapy guidelines for Parkinson's disease patients. Training sessions consist of task-specific exercises that are organized as a series of workstations. Assessments will take place before training, at six weeks, and at twelve weeks follow-up. The functional reach test will serve as the primary outcome measure supplemented by comprehensive assessments of functional balance, posturography, and electroencephalography.
DISCUSSION: We hypothesize that balance training based on visual feedback will show greater improvements on standing balance performance than conventional balance training. In addition, we expect that learning new control strategies will be visible in the co-registered posturographic recordings but also through changes in functional connectivity.