May 29, 2013
We hope you will join us in Los Angeles, California, on Saturday, June 29th, to attend the Symposium on Positive Technology.
The special session features talks by key PT researchers and is a great opportunity to meet, share ideas, and help build the future of this exciting research field!
Download the conference program here (PDF)
May 26, 2013
Cross-Brain Neurofeedback: Scientific Concept and Experimental Platform.
PLoS One. 2013;8(5):e64590
Authors: Duan L, Liu WJ, Dai RN, Li R, Lu CM, Huang YX, Zhu CZ
Abstract. The present study described a new type of multi-person neurofeedback with the neural synchronization between two participants as the direct regulating target, termed as "cross-brain neurofeedback." As a first step to implement this concept, an experimental platform was built on the basis of functional near-infrared spectroscopy, and was validated with a two-person neurofeedback experiment. This novel concept as well as the experimental platform established a framework for investigation of the relationship between multiple participants' cross-brain neural synchronization and their social behaviors, which could provide new insight into the neural substrate of human social interactions.
A Hybrid Brain-Computer Interface-Based Mail Client.
Comput Math Methods Med. 2013;2013:750934
Authors: Yu T, Li Y, Long J, Li F
Abstract. Brain-computer interface-based communication plays an important role in brain-computer interface (BCI) applications; electronic mail is one of the most common communication tools. In this study, we propose a hybrid BCI-based mail client that implements electronic mail communication by means of real-time classification of multimodal features extracted from scalp electroencephalography (EEG). With this BCI mail client, users can receive, read, write, and attach files to their mail. Using a BCI mouse that utilizes hybrid brain signals, that is, motor imagery and P300 potential, the user can select and activate the function keys and links on the mail client graphical user interface (GUI). An adaptive P300 speller is employed for text input. The system has been tested with 6 subjects, and the experimental results validate the efficacy of the proposed method.
Using Music as a Signal for Biofeedback.
Int J Psychophysiol. 2013 Apr 23;
Authors: Bergstrom I, Seinfeld S, Arroyo-Palacios J, Slater M, Sanchez-Vives MV
Abstract. Studies on the potential benefits of conveying biofeedback stimulus using a musical signal have appeared in recent years with the intent of harnessing the strong effects that music listening may have on subjects. While results are encouraging, the fundamental question has yet to be addressed, of how combined music and biofeedback compares to the already established use of either of these elements separately. This experiment, involving young adults (N=24), compared the effectiveness at modulating participants' states of physiological arousal of each of the following conditions: A) listening to pre-recorded music, B) sonification biofeedback of the heart rate, and C) an algorithmically modulated musical feedback signal conveying the subject's heart rate. Our hypothesis was that each of the conditions (A), (B) and (C) would differ from the other two in the extent to which it enables participants to increase and decrease their state of physiological arousal, with (C) being more effective than (B), and both more than (A). Several physiological measures and qualitative responses were recorded and analyzed. Results show that using musical biofeedback allowed participants to modulate their state of physiological arousal at least equally well as sonification biofeedback, and much better than just listening to music, as reflected in their heart rate measurements, controlling for respiration-rate. Our findings indicate that the known effects of music in modulating arousal can therefore be beneficially harnessed when designing a biofeedback protocol.
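The study's condition (C), an algorithmically modulated musical feedback signal driven by heart rate, can be illustrated with a minimal sketch. The function name, baseline and sensitivity values below are illustrative assumptions, not the authors' actual algorithm: the idea is simply that deviation from a resting heart rate scales a musical parameter such as tempo or intensity.

```python
def modulation_factor(heart_rate_bpm, baseline_bpm=70.0, sensitivity=0.02):
    """Map deviation from a baseline heart rate to a tempo/intensity
    scaling factor for a music engine (1.0 = unmodified playback).

    Hypothetical parameters: baseline_bpm is the listener's resting
    heart rate; sensitivity controls how strongly the music responds.
    """
    deviation = heart_rate_bpm - baseline_bpm
    factor = 1.0 + sensitivity * deviation
    # Clamp to a musically sensible range so feedback stays listenable.
    return max(0.5, min(2.0, factor))
```

In a real biofeedback loop this factor would be recomputed on each heart-rate sample and fed to the audio engine, so that lowering one's arousal audibly slows and softens the music.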
Application of alpha/theta neurofeedback and heart rate variability training to young contemporary dancers: State anxiety and creativity.
Int J Psychophysiol. 2013 May 15;
Authors: Gruzelier JH, Thompson T, Redding E, Brandt R, Steffert T
Abstract. As one in a series on the impact of EEG-neurofeedback in the performing arts, we set out to replicate a previous dance study in which alpha/theta (A/T) neurofeedback and heart rate variability (HRV) biofeedback enhanced performance in competitive ballroom dancers compared with controls. First year contemporary dance conservatoire students were randomised to the same two psychophysiological interventions or a choreology instruction comparison group or a no-training control group. While there was demonstrable neurofeedback learning, there was no impact of the three interventions on dance performance as assessed by four experts. However, HRV training reduced anxiety and the reduction correlated with improved technique and artistry in performance; the anxiety scale items focussed on autonomic functions, especially cardiovascular activity. In line with the putative impact of hypnogogic training on creativity A/T training increased cognitive creativity with the test of unusual uses, but not insight problems. Methodological and theoretical implications are considered.
May 24, 2013
Are you interested in Positive Technology? Then come and join us on LinkedIn!
Our new group is the place to share expertise and brilliant ideas on positive applications of technology!
May 06, 2013
With the rapid adoption of mobile technologies and the proliferation of smartphones, new opportunities are emerging for the delivery of mental health services. And indeed, psychologists are starting to realize this potential: a recent survey by Luxton and colleagues (2011) identified over 200 smartphone apps focused on behavioral health, covering a wide range of disorders, including developmental disorders, cognitive disorders, substance-related disorders as well as psychotic and mood disorders. These applications are used in behavioral health for several purposes, the most common of which are health education, assessment, homework and monitoring progress of treatment.
For example, T2 MoodTracker is an application that allows users to self-monitor, track and reference their emotional experience over a period of days, weeks and months using a visual analogue rating scale. Using this application, patients can self-monitor emotional experiences associated with common deployment-related behavioral health issues like post-traumatic stress, brain injury, life stress, depression and anxiety. Self-monitoring results can be used as a self-help tool or shared with a therapist or health care professional, providing a record of the patient’s emotional experience over a selected time frame.
Measuring objective correlates of subjectively reported emotional states is an important concern in research and clinical applications. Physiological and physical activity information provide mental health professionals with integrative measures, which can be used to improve understanding of patients’ self-reported feelings and emotions.
The combined use of wearable biosensors and smartphones offers unprecedented opportunities to collect, process and transmit real-time body signals to a remote therapist. This approach also allows patients to collect real-time information about their health conditions and to identify specific trends. Insights gained from this feedback can empower users to engage with and manage their own health status, minimizing interaction with other health care actors. One such tool is MyExperience, an open-source mobile platform that combines sensing and self-report to collect both quantitative and qualitative data on user experience and activity.
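The core analytical step behind combining sensing and self-report is pairing each self-reported rating with the physiological samples recorded around it. A minimal sketch of that pairing (the function and window size are illustrative assumptions, not part of MyExperience's actual API):

```python
def mean_signal_around(samples, report_time, window_s=60.0):
    """Average sensor readings recorded near a self-report.

    samples     : list of (timestamp_seconds, value) tuples, e.g. heart rate
    report_time : timestamp of the self-reported rating
    window_s    : half-width of the pairing window (hypothetical default)

    Returns the mean sensor value within +/- window_s of the report,
    or None if no samples fall in the window.
    """
    values = [v for t, v in samples if abs(t - report_time) <= window_s]
    return sum(values) / len(values) if values else None
```

A researcher could then correlate these windowed physiological averages with the momentary self-reports to see how well subjective experience tracks the body's signals.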
Other applications are designed to empower users with information for making better decisions, preventing life-style related conditions and preserving/enhancing cognitive performance. For example, BeWell monitors different user activities (sleep, physical activity, social interaction) and provides feedback to promote healthier lifestyle decisions.
Besides applications in mental health and wellbeing, smartphones are increasingly used in psychological research. The potential of this approach has been recently discussed by Geoffrey Miller in a review entitled “The Smartphone Psychology Manifesto”. According to Miller, smartphones can be effectively used to collect large quantities of ecologically valid data, in an easier and quicker way than other available research methodologies. Since the smartphone is becoming one of the most pervasive devices in our lives, it provides access to domains of behavioral data not previously available without either constant observation or reliance on self-reports alone.
For example, the INTERSTRESS project, which I am coordinating, developed PsychLog, a psycho-physiological mobile data collection platform for mental health research. This free, open-source experience sampling platform for Windows Mobile allows researchers to collect self-reported psychological data as well as ECG data via a Bluetooth ECG sensor unit worn by the user. Although PsychLog offers fewer features than more advanced experience sampling platforms, it can be easily configured even by researchers with no programming skills.
In summary, the use of smartphones can have a significant impact on both psychological research and practice. However, there is still limited evidence of the effectiveness of this approach. As for other mHealth applications, few controlled trials have tested the potential of mobile technology interventions in improving mental health care delivery processes. Therefore, further research is needed in order to determine the real cost-effectiveness of mobile cybertherapy applications.
Apr 05, 2013
Researchers at Vanderbilt University are studying the potential benefits of using human-looking robots as tools to help kids with autism spectrum disorder (ASD) improve their communication skills. The programmable NAO robot used in the study was developed by Aldebaran Robotics out of Paris, France, and offers the ability to be part of a larger, smarter system.
Though a child might feel like the pink-eyed humanoid is an autonomous being, the NAO robot the team is using is actually hooked up to computers and external cameras that track the child’s movements. Using the newly developed ARIA (Adaptive Robot-Mediated Intervention Architecture) protocol, the researchers found that children paid more attention to NAO and followed along in exercises almost as well as with a human adult therapist.
Mar 11, 2013
Is virtual reality always an effective stressor for exposure treatments? Some insights from a controlled trial.
BMC psychiatry, 13(1) p. 52, 2013
Federica Pallavicini, Pietro Cipresso, Simona Raspelli, Alessandra Grassi, Silvia Serino, Cinzia Vigna, Stefano Triberti, Marco Villamira, Andrea Gaggioli, Giuseppe Riva
Abstract. Several research studies investigating the effectiveness of the different treatments have demonstrated that exposure-based therapies are more suitable and effective than others for the treatment of anxiety disorders. Traditionally, exposure may be achieved in two manners: in vivo, with direct contact to the stimulus, or by imagery, in the person’s imagination. However, despite its effectiveness, both types of exposure present some limitations that supported the use of Virtual Reality (VR). But is VR always an effective stressor? Are the technological breakdowns that may appear during such an experience a possible risk for its effectiveness? (...)
Full paper available here (open access)
Mar 03, 2013
“The new technology is a major breakthrough that has many advantages over current technology, which provides very limited functionality to patients with missing limbs,” Brånemark says.
Presently, robotic prostheses rely on electrodes placed over the skin to pick up the muscles’ electrical activity and drive a few actions by the prosthesis. The problem with this approach is that normally only two functions are regained out of the tens of different movements an able-bodied person is capable of. By using implanted electrodes, more signals can be retrieved, and therefore control of more movements is possible. Furthermore, it is also possible to provide the patient with natural perception, or “feeling”, through neural stimulation.
“We believe that implanted electrodes, together with a long-term stable human-machine interface provided by the osseointegrated implant, is a breakthrough that will pave the way for a new era in limb replacement,” says Rickard Brånemark.
Read full story
The Japanese communication robot destined to join the crew aboard the International Space Station (ISS) this summer recently underwent some zero gravity testing. The Kibo Robot Project, organized by Dentsu Inc. in response to a proposal made by the Japan Aerospace Exploration Agency, unveiled the final design of its diminutive humanoid robot and its Earthbound counterpart.
Watch the video:
1st International Workshop on Intelligent Digital Games for Empowerment and Inclusion
14 May 2013, Chania, Crete, Greece
chaired by Björn Schuller, Lucas Paletta, Nicolas Sabouret
Paper submission deadline: 11 March 2013
Digital Games for Empowerment and Inclusion possess the potential to change our society in a most positive way by preparing selected groups, in a playful and fun way, for the social and special situations of their everyday lives. Exemplary domains range from children with Autism Spectrum Condition to young adults preparing for their first job interviews or migrants familiarizing themselves with a new environment. The current generation of such games increasingly demands computational intelligence algorithms to help analyze players’ behavior and monitor their motivation and interest in order to adapt game progress. Developing such games thus usually requires expertise from the general gaming domain, and in particular from a game’s target domain, besides the technological savoir-faire to provide intelligent analysis and reaction solutions. IDGEI 2013 aims at bridging across these communities and disciplines by inviting respective researchers and experts to discuss their latest perspectives and findings in the field of Intelligent Digital Games for Empowerment and Inclusion.
Suggested workshop topics include, but are by no means limited to:
- Machine Intelligence in Serious Games
- Mobile and Real-World Serious Gaming
- Emotion & Affect in Serious Games
- Player Behavior and Attention Modeling
- Player-Adaptation and Motivation
- Security & Privacy Preservation
- Novel Serious Games
- User Studies & Tests of Serious Games
Researchers at Duke University Medical Center in the US report in the February 28, 2013 issue of Scientific Reports the successful wiring together of sensory areas in the brains of two rats. The result of the experiment is that one rat will respond to the experiences to which the other is exposed.
The results of these projects suggest the future potential for linking multiple brains to form what the research team is calling an "organic computer," which could allow sharing of motor and sensory information among groups of animals.
"Our previous studies with brain-machine interfaces had convinced us that the rat brain was much more plastic than we had previously thought," said Miguel Nicolelis, M.D., PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine. "In those experiments, the rat brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we asked was, ‘if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?’"
To test this hypothesis, the researchers first trained pairs of rats to solve a simple problem: to press the correct lever when an indicator light above the lever switched on, which rewarded the rats with a sip of water. They next connected the two animals' brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.
One of the two rodents was designated as the "encoder" animal. This animal received a visual cue that showed it which lever to press in exchange for a water reward. Once this “encoder” rat pressed the right lever, a sample of its brain activity that coded its behavioral decision was translated into a pattern of electrical stimulation that was delivered directly into the brain of the second rat, known as the "decoder" animal.
The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward. Therefore, to press the correct lever and receive the reward it craved, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain interface.
The researchers then conducted trials to determine how well the decoder animal could decipher the brain input from the encoder rat to choose the correct lever. The decoder rat ultimately achieved a maximum success rate of about 70 percent, only slightly below the possible maximum success rate of 78 percent that the researchers had theorized was achievable based on success rates of sending signals directly to the decoder rat’s brain.
Importantly, the communication provided by this brain-to-brain interface was two-way. For instance, the encoder rat did not receive a full reward if the decoder rat made a wrong choice. The result of this peculiar contingency, said Nicolelis, led to the establishment of a "behavioral collaboration" between the pair of rats.
"We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behavior to make it easier for its partner to get it right," Nicolelis said. "The encoder improved the signal-to-noise ratio of its brain activity that represented the decision, so the signal became cleaner and easier to detect. And it made a quicker, cleaner decision to choose the correct lever to press. Invariably, when the encoder made those adaptations, the decoder got the right decision more often, so they both got a better reward."
In a second set of experiments, the researchers trained pairs of rats to distinguish between a narrow or wide opening using their whiskers. If the opening was narrow, they were taught to nose-poke a water port on the left side of the chamber to receive a reward; for a wide opening, they had to poke a port on the right side.
The researchers then divided the rats into encoders and decoders. The decoders were trained to associate stimulation pulses with the left reward poke as the correct choice, and an absence of pulses with the right reward poke as correct. During trials in which the encoder detected the opening width and transmitted the choice to the decoder, the decoder had a success rate of about 65 percent, significantly above chance.
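The encoding scheme in this second experiment is essentially one bit: the presence or absence of stimulation pulses carries the encoder's perceptual decision. A minimal sketch of that mapping (the function names and string labels are illustrative, not the researchers' actual signal processing):

```python
def encode_choice(opening_width):
    """Encoder side: the perceived opening width determines whether
    stimulation pulses are delivered to the decoder's brain."""
    # Narrow opening -> stimulation pulses; wide opening -> no pulses.
    return "pulses" if opening_width == "narrow" else "none"


def decode_choice(signal):
    """Decoder side: the decoder rat was trained to associate pulses
    with the left reward poke and their absence with the right poke."""
    return "left" if signal == "pulses" else "right"


# End-to-end: the decoder's poke should match the reward contingency
# for the opening the encoder actually felt with its whiskers.
choice = decode_choice(encode_choice("narrow"))  # -> "left"
```

Of course, the real interface transmits noisy neural activity rather than a clean symbol, which is why the decoder's success rate was about 65 percent rather than 100.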
To test the transmission limits of the brain-to-brain communication, the researchers placed an encoder rat in Brazil, at the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN), and transmitted its brain signals over the Internet to a decoder rat in Durham, N.C. They found that the two rats could still work together on the tactile discrimination task.
"So, even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate," said Miguel Pais-Vieira, PhD, a postdoctoral fellow and first author of the study. "This tells us that it could be possible to create a workable network of animal brains distributed in many different locations."
Nicolelis added, "These experiments demonstrated the ability to establish a sophisticated, direct communication linkage between rat brains, and that the decoder brain is working as a pattern-recognition device. So basically, we are creating an organic computer that solves a puzzle."
"But in this case, we are not inputting instructions, but rather only a signal that represents a decision made by the encoder, which is transmitted to the decoder’s brain which has to figure out how to solve the puzzle. So, we are creating a single central nervous system made up of two rat brains,” said Nicolelis. He pointed out that, in theory, such a system is not limited to a pair of brains, but instead could include a network of brains, or “brain-net.” Researchers at Duke and at the ELS-IINN are now working on experiments to link multiple animals cooperatively to solve more complex behavioral tasks.
"We cannot predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves," continued Nicolelis. Such a connection might even mean that one animal would incorporate another's sense of "self," he said.
"In fact, our studies of the sensory cortex of the decoder rats in these experiments showed that the decoder's brain began to represent in its tactile cortex not only its own whiskers, but the encoder rat's whiskers, too. We detected cortical neurons that responded to both sets of whiskers, which means that the rat created a second representation of a second body on top of its own." Basic studies of such adaptations could lead to a new field that Nicolelis calls the "neurophysiology of social interaction."
Such complex experiments will be enabled by the laboratory's ability to record brain signals from almost 2,000 brain cells at once. The researchers hope to record the electrical activity produced simultaneously by 10,000 to 30,000 cortical neurons over the next five years.
Such massive brain recordings will enable more precise control of motor neuroprostheses—such as those being developed by the Walk Again Project—to restore motor control to paralyzed people, Nicolelis said.
More to explore:
Sci. Rep. 3, 1319 (2013). PUBMED
Date: 17th, 18th and 19th July 2013
Location: Paris, France
The 2nd HCC summer school aims to share scientific knowledge and experience among participants, enhance and stimulate interdisciplinary dialogue as well as provide further opportunities for co-operation within the study domains of Human Computer Confluence.
The topics of the summer school will be framed around the following issues:
• re-experience yourself,
• experience being others,
• experience being together in more powerful ways,
• experience other environments,
• experience new senses,
• experience abstract data spaces.
The 2nd HCC summer school will make the most of the research interests and special facilities of the IRCAM institute, the latter a place dedicated to coupling art with the sciences of sound and media. Special attention will be given to the following thematic categories:
• Musical interfaces
• Interactive sound design
• Sensorimotor learning and gesture-sound interactive systems
• Crowdsourcing and human computation approaches in artistic applications
The three-day summer school will include invited lectures by experts in the field, a round-table and practical workshops. During the workshops, participants will engage in hands-on HCC group projects that they will present at the end of the summer school.
• Isabelle Viaud-Delmon, Acoustic and cognitive spaces team, CNRS - IRCAM, France.
• Andrea Gaggioli, Department of Psychology, UCSC, Milan, Italy.
• Stephen Dunne, Neuroscience Department, STARLAB, Barcelona, Spain.
• Alois Ferscha, Pervasive computing lab, Johannes Kepler Universitat Linz, Austria.
• Fivos Maniatakos, Acoustic and Cognitive Spaces Group, IRCAM, France.
• Isabelle Viaud-Delmon, IRCAM
• Hugues Vinet, IRCAM
• Marine Taffou, IRCAM
• Sylvie Benoit, IRCAM
• Fivos Maniatakos, IRCAM
Feb 08, 2013
Scientific disciplines are usually classified in two broad categories: natural sciences and social sciences. Natural sciences investigate the physical, chemical and biological aspects of Earth, the Universe and the life forms that inhabit it. Social sciences (also called human sciences) focus on the origin and development of human beings, societies, institutions, social relationships etc.
Natural sciences are often regarded as “hard” research disciplines, because they are based on precise numeric predictions about experimental data. Social sciences, on the other hand, are seen as “soft” because they tend to rely on more descriptive approaches to understand their object of study.
So, for example, while it has been possible to predict the existence and properties of the Higgs boson from the Standard Model of particle physics, it is not possible to predict the existence and properties of a psychological effect or phenomenon, at least with the same level of precision.
However, the most important difference between natural and social sciences is not in their final objective (since in both fields, hypotheses must be tested by empirical approaches), but in the methods and tools that they use to pursue that objective. Galileo Galilei argued that we cannot understand the universe “(…) if we do not first learn the language and grasp the symbols, in which it is written. This book is written in the mathematical language, and the symbols are triangles, circles and other geometrical figures, without whose help it is impossible to comprehend a single word of it; without which one wanders in vain through a dark labyrinth.”
But unlike astronomy, physics and chemistry, which are able to read the “book of nature” using increasingly sophisticated instruments (such as microscopes, telescopes, etc.), social sciences have no such tools to investigate social and mental processes within their natural contexts. To understand these phenomena, researchers can either focus on macroscopic aggregates of behaviors (i.e. sociology) or analyse microscopic aspects within controlled settings (i.e. psychology).
Despite these limitations, there is no doubt that social sciences have produced interesting findings: today we know much more about human beings than we did a century ago. But at the same time, advances in the natural sciences have been far more impressive and groundbreaking. From the discovery of atomic energy to the sequencing of the human genome, natural sciences have changed our lives and could do so even more in the coming decades.
However, thanks to the explosive growth of information and communication technologies, this state of affairs may soon change, leading to a paradigm shift in the way social phenomena are investigated. Thanks to the pervasive diffusion of the Internet and mobile computing devices, most of our daily activities leave a digital footprint, which provides data on what we have done and with whom.
Every single hour of our life produces an observable trace, which can be translated into numbers and aggregated to identify specific patterns or trends that would be otherwise impossible to quantify.
Thanks to the emergence of cloud computing, we are now able to collect these digital footprints in large online databases, which can be accessed by researchers for scientific purposes. These databases represent for social scientists the “book written in the mathematical language”, which they can finally read. An enormous amount of data is already available - embedded in online social networks and organizations’ digital archives, or saved in the internal memory of our smartphones, tablets and PCs – although it is not always accessible (because it lies within the domain of private companies and government agencies).
Social scientists are starting to realize that the advent of “big data” is offering unprecedented opportunities for advancing their disciplines. For example, Lazer and colleagues recently published in Science (2009, 323:5915, pp. 721-723) a sort of “manifesto” of Computational Social Science, in which they explain the potential of this approach in collecting and analyzing data at a scale that may reveal patterns of individual and group behaviors.
However, in order to exploit this potential, social scientists have to open their minds to new tools, methods and approaches. Indeed, the ability to analyse and make sense of huge quantities of data that change over time requires mathematical and informatics skills that are usually not included in the training of the average social scientist. But acquiring new mathematical competences may not be enough. The majority of research psychologists, for example, are not familiar with new technologies such as mobile computing tools, sensors or virtual environments. However, these tools may become for psychology what microscopes are for biology or telescopes are for astronomy.
If social scientists open their minds to this new horizon, their impact on society could be at least as revolutionary as the one natural scientists have produced over the last two centuries. The emergence of computational social science will not only allow scientists to predict many social phenomena, but also to unify levels of analysis that have until now been addressed separately, e.g. the neuro-psychological and the psycho-social levels.
At the same time, the transformative potential of this emerging science also requires careful reflection on its ethical implications for the protection of participants’ privacy.
Technologies for Affect and Wellbeing - Special Issue of the IEEE Transactions on Affective Computing.
- Rafael A. Calvo (The University of Sydney)
- Giuseppe Riva (ICE-NET Lab - Università Cattolica del Sacro Cuore, Milan, Italy)
- Christine Lisetti (Florida International University)
Background and Motivation
There is an increased interest in using computer interaction to detect and support users’ physical and psychological wellbeing. Computers can afford multiple forms of transformational experiences. Some of these experiences can be purposely designed to, for example, detect and regulate students’ affective states to improve aspects of their learning experiences. They can also be used in computer-based psychological interventions that treat psychological illness or that preventively promote wellbeing, healthy lifestyles, and mental health.
The application domain, so far referred to as ‘positive computing’, ‘positive technologies’, and ‘positive design’, draws on ideas from positive psychology, particularly the extensive research on developing human strengths and wellbeing. It is closely linked to the HCI work on personal informatics, and the development of tools that help people learn more about themselves through reflection.
This special issue will focus on ideas, methods and case studies showing how affective computing can contribute to this goal. Articles should discuss how information that computers collect about our behaviour, cognition, and particularly affect, can be used to further understand, nurture or develop wellbeing and human strengths: e.g. self-understanding, empathy, and intrinsic motivation toward wellbeing and healthy lifestyles.
Topics include, but are not limited to:
- Systems to detect or support positive emotions and human strengths, for example reflection, empathy, happiness, gratitude, self-understanding/interpersonal skills, emotional intelligence/emotion regulation, social intelligence/intrapersonal skills, and motivation.
- Using affect and motivation for physical and psychological health.
- Cyberpsychology for positive psychology and wellbeing
- HCI design strategies for support of wellbeing and human strengths
- Virtual Reality for support of wellbeing or human strengths
- Positive personal health informatics for health promotion
- Patient-centered technologies for healthy behaviour change
- Empathic intelligent virtual agents for lifestyle monitoring and behaviour change
- Mobile applications of affective computing for health and wellbeing
- Informatics technologies for patient empowerment
- Call for Papers out: Feb 2013
- Submission Deadline: July 1st, 2013
- Notification of Acceptance: October 1st, 2013
- Final Manuscripts Due: December 1st, 2013
- Date of Publication: March or July 2014
The Transactions on Affective Computing Special Issue on “Affect and wellbeing” will consist of papers on techniques, methods, case studies and their evaluation. Some papers may survey various aspects of the topic, particularly in ways that bring the psychological, health and wellbeing, and technical literature together. The balance between these will be adjusted to maximize the impact of the special issue. All articles are expected to follow the standard review procedures for the IEEE TAC.
At the end of January, the European Commission officially announced the selection of the Human Brain Project (HBP) as one of its two FET Flagship projects. Federating more than 80 European and international research institutions, the Human Brain Project is planned to last ten years (2013-2023). The cost is estimated at 1.19 billion euros.
The project is the first attempt to “reconstruct the brain piece by piece and building a virtual brain in a supercomputer”. Led by neuroscientist Henry Markram, the project was launched in 2005 as a joint research initiative between the Brain Mind Institute at the École Polytechnique Fédérale de Lausanne (EPFL) and the information technology giant IBM.
Using the impressive processing power of IBM’s Blue Gene/L supercomputer, the project reached its first milestone in December 2006, simulating a rat cortical column. As of July 2012, Henry Markram’s team has achieved the simulation of mesocircuits containing approximately 1 million neurons and 1 billion synapses (comparable to the number of nerve cells in a honeybee brain). The next step, planned for 2014, will be the modelling of a cellular rat brain, with 100 mesocircuits totalling a hundred million cells. Finally, the team plans to simulate a full human brain (86 billion neurons) by the year 2023.
Watch the video overview of the Human Brain Project
Nov 11, 2012
Thanks to the accelerated diffusion of smartphones, the number of mobile healthcare apps has been growing exponentially in the past few years. Applications now exist to help patients manage diabetes, share information with peers, and monitor mood, to name just a few examples.
Such “applification” of health is part of a larger trend called “mobile health” (or mHealth), which broadly refers to the provision of health-related services via wireless communications. Mobile health is a fast-growing market: according to a 2011 PEW Research report, 17 percent of mobile users were already using their phones to look up health and medical information, and Juniper estimated that 44 million health apps were downloaded in that same year.
The field of mHealth has received a great deal of attention from the scientific community over the past few years, as evidenced by the number of conferences, workshops and publications dedicated to this subject; international healthcare institutions and organizations are also taking mHealth seriously.
For example, the UK Department of Health recently launched the crowdsourcing project Maps and Apps, to support the use of existing mobile phone apps and health information maps, as well as to encourage people to put forward ideas for new ones. The initiative resulted in the collection of 500 health apps voted most popular by the public and health professionals, as well as a list of their ideas for new apps. At the time of writing, the top-rated app is Moodscope, an application that allows users to measure, track and record comments on their mood. Other popular apps include HealthUnlocked, an online support network that connects people, volunteers and professionals to help them learn, share and give practical support to one another, and FoodWiz.co, an application created by a mother of children with food allergies that allows users to scan the bar codes on food to instantly find out which allergens are present. An app to help patients manage diabetes could not be missing from the list: Diabetes UK Tracker allows the patient to enter measurements such as blood glucose, caloric intake and weight, which can be displayed as graphs and shared with doctors; the software also features an area where patients can annotate medical information, personal feelings and thoughts.
The astounding popularity of the Maps and Apps initiative suggests the beginning of a new era in medical informatics, yet this emerging vision is not without caveats. As recently emphasized by Niall Boyce in the June issue of The Lancet Technology, the main concern associated with the use of apps as a self-management tool is the limited evidence of their effectiveness in improving health. Unlike other health interventions, mHealth apps have not been subject to rigorous testing. A potential reason for the lack of randomized evaluations is the fact that most of these apps reach consumers/patients directly, without passing through the traditional medical gatekeepers. However, as Boyce suggests, the availability of trial data would benefit not only patients but also app developers, who could bring more effective and reliable products to the market. A further concern relates to the privacy and security of medical data. Although most smartphone-based medical applications apply state-of-the-art secure protocols, the wireless use of these devices opens up new vulnerabilities for patients and medical facilities. A recent bulletin issued by the U.S. Department of Homeland Security lists the top mobile medical device security risks, including:
- Insider: The most common ways employees steal data involves network transfer, be that email, remote access, or file transfer;
- Malware: These include keystroke loggers and Trojans, tailored to harvest easily accessible data once inside the network;
- Spearphishing: This highly customized technique involves an email-based attack carrying a malicious attachment, disguised as coming from a legitimate source and seeking specific information;
- Lost equipment: A significant problem because it happens so frequently, even a smartphone in the wrong hands can be a gateway into a health entity’s network and records. And the more that patient information is stored electronically, the greater the number of people potentially affected when equipment is lost or stolen.
In conclusion, the “applification of healthcare” is at once a great opportunity for patients and a great responsibility for medical professionals and developers. In order to exploit this opportunity while mitigating risks, it is essential to put in place quality evaluation procedures that allow the effectiveness of these applications to be monitored and optimized according to evidence-based standards. For example, iMedicalApps provides independent reviews of mobile medical technology and applications by a team of physicians and medical students. Founded by Dr. Iltifat Husain, an emergency medicine resident at the Wake Forest University School of Medicine, iMedicalApps has been referred to by the Cochrane Collaboration as an evidence-based, trusted Web 2.0 website.
More to explore:
Read the PVC report: Current and future state of mhealth (PDF FULL TEXT)
Watch the MobiHealthNews video report: What is mHealth?
Congenitally blind people have learned to “see” and describe objects, and even identify letters and words, by using a visual-to-auditory sensory-substitution algorithm and sensory substitution devices (SSDs), scientists at Hebrew University and in France have found.
SSDs are non-invasive sensory aids that provide visual information to the blind via their existing senses. For example, using a visual-to-auditory SSD in a clinical or everyday setting, users wear a miniature camera connected to a small computer (or smart phone) and stereo headphones.
The images are converted into “soundscapes,” using an algorithm, allowing the user to listen to and then interpret the visual information coming from the camera. The blind participants using this device reach a level of visual acuity technically surpassing the criterion of the World Health Organization (WHO) for blindness.
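To give a flavour of how such a visual-to-auditory mapping can work, here is a minimal sketch in Python. It is not the actual algorithm used by these SSDs, only a simplified illustration of the common scheme (used, for instance, by The vOICe): image columns are scanned left to right over time, vertical position maps to pitch (top rows sound higher), and pixel brightness maps to loudness. All function names and parameter values below are illustrative assumptions.

```python
import math

def image_to_soundscape(image, duration_per_col=0.05, sample_rate=8000,
                        f_min=200.0, f_max=2000.0):
    """Convert a grayscale image (list of rows, pixel values in 0..1)
    into a mono waveform. Columns are played left to right over time;
    each row is assigned a sinusoid (top = high pitch), weighted by
    that pixel's brightness."""
    n_rows, n_cols = len(image), len(image[0])
    samples_per_col = int(duration_per_col * sample_rate)
    # One frequency per row, spaced logarithmically from bottom to top
    freqs = [f_min * (f_max / f_min) ** ((n_rows - 1 - r) / (n_rows - 1))
             for r in range(n_rows)]
    waveform = []
    for c in range(n_cols):
        for s in range(samples_per_col):
            t = s / sample_rate
            # Sum one sinusoid per row, weighted by pixel brightness
            v = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(n_rows))
            waveform.append(v / n_rows)  # keep amplitude roughly in [-1, 1]
    return waveform

# A tiny 4x3 "image": a bright diagonal sweeping from top-left downward
img = [[1.0, 0.0, 0.0],
       [0.0, 1.0, 0.0],
       [0.0, 0.0, 1.0],
       [0.0, 0.0, 0.0]]
wave = image_to_soundscape(img)
print(len(wave))  # 3 columns x 400 samples per column = 1200 samples
```

Played back through headphones, this diagonal would be heard as a tone descending in pitch over time; with training, users learn to decode far richer scenes from such soundscapes.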
Read the full story
Nov 02, 2012
Gaggioli A., Riva G., Milani L., Mazzoni E.