
Dec 24, 2013

The Creative Link: Investigating the Relationship Between Social Network Indices, Creative Performance and Flow in Blended Teams

Andrea Gaggioli, Elvis Mazzoni, Luca Milani, Giuseppe Riva

Computers in Human Behavior, 2014, forthcoming. DOI: 10.1016/j.chb.2013.12.003

This paper presents the findings of an exploratory study investigating the relationship between indices of social network structure, flow, and creative performance in students collaborating in a blended setting. Thirty undergraduate students enrolled in a Media Psychology course were assigned to five groups, each tasked with designing a new technology-based psychological application. Team members collaborated over a twelve-week period through two main modalities: weekly face-to-face meeting sessions in the classroom and virtual collaboration using a groupware tool. Social network indicators of group interaction and presence indices were extracted from communication logs, whereas flow and product creativity were assessed through survey measures. Findings showed that specific social network indices (in particular those measuring decentralization and neighbor interaction) were positively related to flow experience. More broadly, the results indicate that selected social network indicators can offer useful insight into the creative collaboration process. Theoretical and methodological implications of these results are drawn.
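For readers who want to experiment with these kinds of measures, here is a minimal sketch in Python using networkx, with toy interaction graphs and hypothetical flow ratings; the study extracts its indices from groupware communication logs, and its exact measures may differ:

```python
import networkx as nx
from scipy.stats import spearmanr

def degree_centralization(G):
    """Freeman's group-level degree centralization: 0 = fully
    decentralized interaction, 1 = star-like (leader-centered)."""
    n = G.number_of_nodes()
    cent = nx.degree_centrality(G)              # normalized by (n - 1)
    c_max = max(cent.values())
    return sum(c_max - c for c in cent.values()) / (n - 2)

# Toy communication graphs, one per team (edges = who messaged whom)
teams = [nx.star_graph(5), nx.path_graph(6), nx.complete_graph(6)]
decentralization = [1 - degree_centralization(G) for G in teams]

flow = [3.1, 4.0, 4.4]                          # hypothetical mean flow per team
rho, p = spearmanr(decentralization, flow)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```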

Nov 23, 2013

Overwhelming technology disrupting life and causing stress, new study shows

A new study shows that over one third of people feel overwhelmed by technology today, and that those who do are more likely to be dissatisfied with their life as a whole.

The study, conducted by the University of Cambridge and sponsored by BT, surveyed 1,269 people and included in-depth interviews with families in the UK. It also found that people who felt in control of their use of communications technology were more likely to be satisfied with life.

Read the full story


Oct 31, 2013

Brain Decoding

Via IEET

Neuroscientists are starting to decipher what a person is seeing, remembering and even dreaming just by looking at their brain activity. They call it brain decoding.  

In this Nature Video, we see three different uses of brain decoding, including a virtual reality experiment that could use brain activity to figure out whether someone has been to the scene of a crime.

Mobile EEG and its potential to promote the theory and application of imagery-based motor rehabilitation

Mobile EEG and its potential to promote the theory and application of imagery-based motor rehabilitation.

Int J Psychophysiol. 2013 Oct 18;

Authors: Kranczioch C, Zich C, Schierholz I, Sterr A

Abstract. Studying the brain in its natural state remains a major challenge for neuroscience. Solving this challenge would not only enable the refinement of cognitive theory, but also provide a better understanding of cognitive function in the type of complex and unpredictable situations that constitute daily life, and which are often disturbed in clinical populations. With mobile EEG, researchers now have access to a tool that can help address these issues. In this paper we present an overview of technical advancements in mobile EEG systems and associated analysis tools, and explore the benefits of this new technology. Using the example of motor imagery (MI) we will examine the translational potential of MI-based neurofeedback training for neurological rehabilitation and applied research.
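As a rough illustration of the kind of signal analysis underlying MI-based neurofeedback, here is a toy sketch with synthetic data; real systems use calibrated EEG channels over sensorimotor cortex and subject-specific frequency bands:

```python
import numpy as np
from scipy.signal import welch

fs = 250                                  # sampling rate (Hz)
eeg = np.random.randn(fs * 2)             # 2 s of stand-in, single-channel EEG

# Motor imagery attenuates mu-band (8-12 Hz) power over sensorimotor
# cortex; a neurofeedback signal is typically derived from that power.
f, psd = welch(eeg, fs=fs, nperseg=fs)
mu_power = psd[(f >= 8) & (f <= 12)].mean()
print(f"mu-band power: {mu_power:.3f}")
```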

Sep 10, 2013

BITalino: Do More!

BITalino is a low-cost toolkit that allows anyone, from students to professional developers, to create projects and applications with physiological sensors. Out of the box, BITalino integrates easy-to-use software and hardware blocks with sensors for electrocardiography (ECG), electromyography (EMG), electrodermal activity (EDA), an accelerometer, and ambient light. Imagination is the limit: each individual block can be snapped off and combined to prototype anything you want, and you can connect other sensors, including your own custom designs.
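For the curious, acquisition looks roughly like this with the community Python API (the bitalino package); the MAC address and channel mapping below are placeholders for your own board:

```python
from bitalino import BITalino

mac = "20:16:07:18:12:34"             # placeholder: your board's MAC address

device = BITalino(mac)
device.start(1000, [0, 1, 2])         # 1000 Hz on analog channels A1-A3
try:
    samples = device.read(1000)       # ~1 s of data as a numpy array
    print(samples.shape)              # rows = frames; cols = seq, digital, analog
finally:
    device.stop()
    device.close()
```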

Aug 07, 2013

On Phenomenal Consciousness

A recent introductory talk by Frank C. Jackson on the problem that consciousness and qualia present to physicalism.

Welcome to wonderland: the influence of the size and shape of a virtual hand on the perceived size and shape of virtual objects

Welcome to wonderland: the influence of the size and shape of a virtual hand on the perceived size and shape of virtual objects.

PLoS One. 2013;8(7):e68594

Authors: Linkenauger SA, Leyrer M, Bülthoff HH, Mohler BJ

The notion of body-based scaling suggests that our body and its action capabilities are used to scale the spatial layout of the environment. Here we present four studies supporting this perspective by showing that the hand acts as a metric which individuals use to scale the apparent sizes of objects in the environment. However, to test this, one must be able to manipulate the size and/or dimensions of the perceiver's hand, which is difficult in the real world due to the impliability of hand dimensions. To overcome this limitation, we used virtual reality to manipulate the dimensions of participants' fully-tracked virtual hands and investigate their influence on the perceived size and shape of virtual objects. In a series of experiments, using several measures, we show that individuals' estimations of the sizes of virtual objects differ depending on the size of their virtual hand, in the direction consistent with the body-based scaling hypothesis. Additionally, we found that these effects were specific to participants' virtual hands rather than another avatar's hands or a salient familiar-sized object. While these studies provide support for a body-based approach to the scaling of spatial layout, they also demonstrate the influence of virtual bodies on the perception of virtual environments.

What Color is My Arm? Changes in Skin Color of an Embodied Virtual Arm Modulates Pain Threshold

What Color is My Arm? Changes in Skin Color of an Embodied Virtual Arm Modulates Pain Threshold.

Front Hum Neurosci. 2013;7:438

Authors: Martini M, Perez-Marcos D, Sanchez-Vives MV

It has been demonstrated that visual inputs can modulate pain. However, the influence of skin color on pain perception is unknown. Red skin is associated with inflamed, hot and more sensitive skin, while blue is associated with cyanotic, cold skin. We aimed to test whether the color of the skin would alter the heat pain threshold. To this end, we used an immersive virtual environment where we induced embodiment of a virtual arm that was co-located with the real one and seen from a first-person perspective. Virtual reality allowed us to dynamically modify the color of the skin of the virtual arm. In order to test the pain threshold, increasing ramps of heat stimulation applied to the participants' arm were delivered concomitantly with the gradual intensification of different colors on the embodied avatar's arm. We found that a reddened arm significantly decreased the pain threshold compared with normal and bluish skin. This effect was specific to red seen on the arm: seeing red in a spot outside the arm did not decrease the pain threshold. These results demonstrate an influence of skin color on pain perception. This top-down modulation of pain through visual input suggests a potential use of embodied virtual bodies for pain therapy.

Full text open access

What Color Is Your Night Light? It May Affect Your Mood

When it comes to some of the health hazards of light at night, a new study suggests that the color of the light can make a big difference.

Read full story on Science Daily

Detecting delay in visual feedback of an action as a monitor of self recognition

Detecting delay in visual feedback of an action as a monitor of self recognition.

Exp Brain Res. 2012 Oct;222(4):389-97

Authors: Hoover AE, Harris LR

Abstract. How do we distinguish "self" from "other"? The correlation between willing an action and seeing it occur is an important cue. We exploited the fact that this correlation needs to occur within a restricted temporal window in order to obtain a quantitative assessment of when a body part is identified as "self". We measured the threshold and sensitivity (d') for detecting a delay between movements of the finger (of both the dominant and non-dominant hands) and visual feedback as seen from four visual perspectives (the natural view, and mirror-reversed and/or inverted views). Each trial consisted of one presentation with minimum delay and another with a delay of between 33 and 150 ms. Participants indicated which presentation contained the delayed view. We varied the amount of efference copy available for this task by comparing performances for discrete movements and continuous movements. Discrete movements are associated with a stronger efference copy. Sensitivity to detect asynchrony between visual and proprioceptive information was significantly higher when movements were viewed from a "plausible" self perspective compared with when the view was reversed or inverted. Further, we found differences in performance between dominant and non-dominant hand finger movements across the continuous and single movements. Performance varied with the viewpoint from which the visual feedback was presented and on the efferent component such that optimal performance was obtained when the presentation was in the normal natural orientation and clear efferent information was available. Variations in sensitivity to visual/non-visual temporal incongruence with the viewpoint in which a movement is seen may help determine the arrangement of the underlying visual representation of the body.
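The sensitivity measure d' reported here can be derived from proportion correct; a minimal sketch assuming the standard two-interval forced-choice conversion (the paper's exact analysis may differ):

```python
import numpy as np
from scipy.stats import norm

def d_prime_2afc(n_correct, n_trials):
    """Sensitivity from a two-interval forced-choice task:
    d' = sqrt(2) * z(proportion correct)."""
    pc = (n_correct + 0.5) / (n_trials + 1)   # mild correction keeps z() finite
    return np.sqrt(2) * norm.ppf(pc)

print(d_prime_2afc(42, 50))   # 84% correct -> d' of about 1.4
```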

Jul 23, 2013

A mobile data collection platform for mental health research

Personal and Ubiquitous Computing (2013), Volume 17, Issue 2, pp 241-251.
 
A. Gaggioli, G. Pioggia, G. Tartarisco, G. Baldus, D. Corda, P. Cipresso, G. Riva
 
Ubiquitous computing technologies offer exciting new possibilities for monitoring and analyzing users' experience in real time. In this paper, we describe the design and development of Psychlog, a mobile phone platform designed to collect users' psychological, physiological, and activity information for mental health research. The tool allows researchers to administer self-report questionnaires at specific or random times within a day. The system also collects heart rate and activity information from a wireless electrocardiograph equipped with a three-axial accelerometer. By combining self-reports with heart rate and activity data, the application makes it possible to investigate the relationship between psychological, physiological, and behavioral variables, as well as to monitor their fluctuations over time. The software runs on the Windows Mobile operating system and is available as open source (http://sourceforge.net/projects/psychlog/).
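The core idea of combining self-reports with sensor streams amounts to aligning two time series; a minimal sketch with hypothetical column names, using pandas:

```python
import pandas as pd

# Stand-in streams: a heart-rate series and timestamped self-reports
hr = pd.DataFrame({
    "time": pd.to_datetime(["2013-07-23 10:00", "2013-07-23 10:05",
                            "2013-07-23 10:10"]),
    "heart_rate": [72, 80, 76],
})
reports = pd.DataFrame({
    "time": pd.to_datetime(["2013-07-23 10:04", "2013-07-23 10:11"]),
    "mood": [3, 4],                   # hypothetical 1-5 self-rating
})

# Pair each questionnaire with the latest preceding physiological sample
merged = pd.merge_asof(reports, hr, on="time")
print(merged)
```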
 
Read the full text.


Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes

Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes.

Proc Natl Acad Sci USA. 2013 Jul 15;

Authors: Banakou D, Groten R, Slater M

Abstract. An illusory sensation of ownership over a surrogate limb or whole body can be induced through specific forms of multisensory stimulation, such as synchronous visuotactile tapping on the hidden real and visible rubber hand in the rubber hand illusion. Such methods have been used to induce ownership over a manikin and a virtual body that substitute the real body, as seen from first-person perspective, through a head-mounted display. However, the perceptual and behavioral consequences of such transformed body ownership have hardly been explored. In Exp. 1, immersive virtual reality was used to embody 30 adults as a 4-y-old child (condition C), and as an adult body scaled to the same height as the child (condition A), experienced from the first-person perspective, and with virtual and real body movements synchronized. The result was a strong body-ownership illusion equally for C and A. Moreover, there was an overestimation of the sizes of objects compared with a nonembodied baseline, which was significantly greater for C compared with A. An implicit association test showed that C resulted in significantly faster reaction times for the classification of self with child-like compared with adult-like attributes. Exp. 2 with an additional 16 participants extinguished the ownership illusion by using visuomotor asynchrony, with all else equal. The size-estimation and implicit association test differences between C and A were also extinguished. We conclude that there are perceptual and probably behavioral correlates of body-ownership illusions that occur as a function of the type of body in which embodiment occurs.

The Size of Electronic Consumer Devices Affects Our Behavior

Re-blogged from Textually.org


What are our devices doing to us? We already know they're snuffing out our creativity, but new research suggests they're also stifling our drive. How so? Because fussing with them for an average of 58 minutes a day leads to bad posture, FastCompany reports.

 

The body posture inherent in operating everyday gadgets affects not only your back, but your demeanor, reports a new experimental study entitled “iPosture: The Size of Electronic Consumer Devices Affects Our Behavior.” It turns out that working on a relatively large machine (like a desktop computer) causes users to act more assertively than working on a small one (like an iPad).

That poor posture, Harvard Business School researchers Maarten Bos and Amy Cuddy find, undermines our assertiveness.

Read more.

Jul 18, 2013

Advanced ‘artificial skin’ senses touch, humidity, and temperature

Artificial skin (credit: Technion)

Technion-Israel Institute of Technology scientists have discovered how to make a new kind of flexible sensor that one day could be integrated into “electronic skin” (e-skin) — a covering for prosthetic limbs.



Read full story

US Army avatar role-play Experiment #3 now open for public registration

Military Open Simulator Enterprise Strategy (MOSES) is secure virtual world software designed to evaluate the ability of OpenSimulator to provide independent access to a persistent virtual world. MOSES is a research project of the United States Army Simulation and Training Technology Center (STTC). STTC's Virtual World Strategic Applications team uses OpenSimulator to add capability and flexibility to virtual training scenarios.

 



Identifying human emotions based on brain activity

For the first time, scientists at Carnegie Mellon University have identified which emotion a person is experiencing based on brain activity.

The study, published in the June 19 issue of PLOS ONE, combines functional magnetic resonance imaging (fMRI) and machine learning to measure brain signals and accurately read emotions in individuals. Led by researchers in CMU’s Dietrich College of Humanities and Social Sciences, the findings illustrate how the brain categorizes feelings, giving researchers the first reliable process for analyzing emotions. Until now, research on emotions has long been stymied by the lack of reliable methods to evaluate them, mostly because people are often reluctant to report their feelings honestly. Further complicating matters, many emotional responses may not be consciously experienced.


Identifying emotions based on neural activity builds on previous discoveries by CMU’s Marcel Just and Tom M. Mitchell, which used similar techniques to create a computational model that identifies individuals’ thoughts of concrete objects, often dubbed “mind reading.”

“This research introduces a new method with potential to identify emotions without relying on people’s ability to self-report,” said Karim Kassam, assistant professor of social and decision sciences and lead author of the study. “It could be used to assess an individual’s emotional response to almost any kind of stimulus, for example, a flag, a brand name or a political candidate.”

One challenge for the research team was to find a way to repeatedly and reliably evoke different emotional states in the participants. Traditional approaches, such as showing subjects emotion-inducing film clips, would likely have been unsuccessful because the impact of film clips diminishes with repeated viewing. The researchers solved the problem by recruiting actors from CMU’s School of Drama.

“Our big breakthrough was my colleague Karim Kassam’s idea of testing actors, who are experienced at cycling through emotional states. We were fortunate, in that respect, that CMU has a superb drama school,” said George Loewenstein, the Herbert A. Simon University Professor of Economics and Psychology.

For the study, 10 actors were scanned at CMU’s Scientific Imaging & Brain Research Center while viewing the words of nine emotions: anger, disgust, envy, fear, happiness, lust, pride, sadness and shame. While inside the fMRI scanner, the actors were instructed to enter each of these emotional states multiple times, in random order.

Another challenge was to ensure that the technique was measuring emotions per se, and not the act of trying to induce an emotion in oneself. To meet this challenge, a second phase of the study presented participants with neutral and disgusting photos that they had not seen before. The computer model, constructed from statistical analysis of the fMRI activation patterns gathered for 18 emotional words, had learned the emotion patterns from the self-induced emotions. It was able to correctly identify the emotional content of the photos being viewed using the brain activity of the viewers.


To identify emotions within the brain, the researchers first used the participants’ neural activation patterns in early scans to identify the emotions experienced by the same participants in later scans. The computer model achieved a rank accuracy of 0.84. Rank accuracy refers to the percentile rank of the correct emotion in an ordered list of the computer model guesses; random guessing would result in a rank accuracy of 0.50.
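The rank-accuracy metric itself is easy to compute; here is a minimal sketch with hypothetical scores (the model and features used in the study are of course far more involved):

```python
import numpy as np

def rank_accuracy(scores, correct_idx):
    """Percentile rank of the true label in the model's ordered guesses:
    1.0 if ranked first, 0.0 if ranked last, 0.5 expected by chance."""
    order = np.argsort(scores)[::-1]            # best guess first
    rank = int(np.where(order == correct_idx)[0][0]) + 1
    return 1.0 - (rank - 1) / (len(scores) - 1)

scores = [0.1, 0.5, 0.2, 0.2]    # hypothetical scores for 4 candidate emotions
print(rank_accuracy(scores, correct_idx=1))     # true label ranked first -> 1.0
```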

Next, the team used the machine learning analysis of the self-induced emotions to guess which emotion the subjects were experiencing when they were exposed to the disgusting photographs. The computer model achieved a rank accuracy of 0.91. With nine emotions to choose from, the model listed disgust as the most likely emotion 60 percent of the time and as one of its top two guesses 80 percent of the time.

Finally, they applied machine learning analysis of neural activation patterns from all but one of the participants to predict the emotions experienced by the hold-out participant. This answers an important question: If we took a new individual, put them in the scanner and exposed them to an emotional stimulus, how accurately could we identify their emotional reaction? Here, the model achieved a rank accuracy of 0.71, once again well above the chance guessing level of 0.50.
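This cross-participant scheme is a standard leave-one-group-out evaluation; here is a minimal sketch with generic scikit-learn pieces and synthetic data, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 20))         # 90 trials x 20 stand-in voxel features
y = rng.integers(0, 9, size=90)       # 9 emotion labels
groups = np.repeat(np.arange(10), 9)  # 10 participants, 9 trials each

scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    scores.append(clf.score(X[test], y[test]))  # accuracy on the held-out person
print(f"mean held-out accuracy: {np.mean(scores):.2f}")
```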

“Despite manifest differences between people’s psychology, different people tend to neurally encode emotions in remarkably similar ways,” noted Amanda Markey, a graduate student in the Department of Social and Decision Sciences.

A surprising finding from the research was that almost equivalent accuracy levels could be achieved even when the computer model made use of activation patterns in only one of a number of different subsections of the human brain.

“This suggests that emotion signatures aren’t limited to specific brain regions, such as the amygdala, but produce characteristic patterns throughout a number of brain regions,” said Vladimir Cherkassky, senior research programmer in the Psychology Department.

The research team also found that while on average the model ranked the correct emotion highest among its guesses, it was best at identifying happiness and least accurate in identifying envy. It rarely confused positive and negative emotions, suggesting that these have distinct neural signatures. And, it was least likely to misidentify lust as any other emotion, suggesting that lust produces a pattern of neural activity that is distinct from all other emotional experiences.

Just, the D.O. Hebb University Professor of Psychology, director of the university’s Center for Cognitive Brain Imaging and leading neuroscientist, explained, “We found that three main organizing factors underpinned the emotion neural signatures, namely the positive or negative valence of the emotion, its intensity — mild or strong, and its sociality — involvement or non-involvement of another person. This is how emotions are organized in the brain.”

In the future, the researchers plan to apply this new identification method to a number of challenging problems in emotion research, including identifying emotions that individuals are actively attempting to suppress and multiple emotions experienced simultaneously, such as the combination of joy and envy one might experience upon hearing about a friend’s good fortune.

Mar 11, 2013

Is virtual reality always an effective stressor for exposure treatments? Some insights from a controlled trial

Is virtual reality always an effective stressor for exposure treatments? Some insights from a controlled trial.

BMC Psychiatry, 13(1), p. 52, 2013

Federica Pallavicini, Pietro Cipresso, Simona Raspelli, Alessandra Grassi, Silvia Serino, Cinzia Vigna, Stefano Triberti, Marco Villamira, Andrea Gaggioli, Giuseppe Riva

Abstract. Several studies investigating the effectiveness of different treatments have demonstrated that exposure-based therapies are more suitable and effective than others for the treatment of anxiety disorders. Traditionally, exposure may be achieved in two manners: in vivo, with direct contact with the stimulus, or by imagery, in the person’s imagination. However, despite their effectiveness, both types of exposure present some limitations, which have supported the use of Virtual Reality (VR). But is VR always an effective stressor? Are the technological breakdowns that may appear during such an experience a possible risk for its effectiveness? (...)

Full paper available here (open access)

Feb 08, 2013

Social sciences get into the Cloud

Scientific disciplines are usually classified in two broad categories: natural sciences and social sciences. Natural sciences investigate the physical, chemical and biological aspects of Earth, the Universe and the life forms that inhabit them. Social sciences (also called human sciences) focus on the origin and development of human beings, societies, institutions, social relationships, etc.

Natural sciences are often regarded as “hard” research disciplines, because they are based on precise numeric predictions about experimental data. Social sciences, on the other hand, are seen as “soft” because they tend to rely on more descriptive approaches to understand their object of study.

So, for example, while it has been possible to predict the existence and properties of the Higgs boson from the Standard Model of particle physics, it is not possible to predict the existence and properties of a psychological effect or phenomenon, at least with the same level of precision.

However, the most important difference between natural and social sciences is not in their final objective (since in both fields, hypotheses must be tested by empirical approaches), but in the methods and tools that they use to pursue that objective. Galileo Galilei argued that we cannot understand the universe “(…) if we do not first learn the language and grasp the symbols, in which it is written. This book is written in the mathematical language, and the symbols are triangles, circles and other geometrical figures, without whose help it is impossible to comprehend a single word of it; without which one wanders in vain through a dark labyrinth.”


But unlike astronomy, physics and chemistry, which are able to read the “book of nature” through increasingly sophisticated lenses (such as microscopes and telescopes), the social sciences have no such tools for investigating social and mental processes within their natural contexts. To understand these phenomena, researchers can either focus on macroscopic aggregates of behaviors (as in sociology) or analyse microscopic aspects within controlled settings (as in psychology).

Despite these limitations, there is no doubt that the social sciences have produced interesting findings: today we know much more about the human being than we did a century ago. But at the same time, advances in the natural sciences have been far more impressive and groundbreaking. From the discovery of atomic energy to the sequencing of the human genome, the natural sciences have changed our life and could do so even more in the next decades.

However, thanks to the explosive growth of information and communication technologies, this state of affairs may soon change and lead to a paradigm shift in the way social phenomena are investigated. Thanks to the pervasive diffusion of the Internet and mobile computing devices, most of our daily activities leave a digital footprint, which provides data on what we have done and with whom.

Every single hour of our life produces an observable trace, which can be translated into numbers and aggregated to identify specific patterns or trends that would otherwise be impossible to quantify.

Thanks to the emergence of cloud computing, we are now able to collect these digital footprints in large online databases, which can be accessed by researchers for scientific purposes. For social scientists, these databases represent the “book written in the mathematical language” that they can finally read. An enormous amount of data is already available, embedded in online social networks and organizations’ digital archives, or saved in the internal memory of our smartphones, tablets and PCs, although it is not always accessible (because it lies within the domain of private companies and government agencies).


Social scientists are starting to realize that the advent of “big data” offers unprecedented opportunities for advancing their disciplines. For example, Lazer and colleagues recently published in Science (2009, 323:5915, pp. 721-723) a sort of “manifesto” of Computational Social Science, in which they explain the potential of this approach for collecting and analyzing data at a scale that may reveal patterns of individual and group behavior.

However, in order to exploit this potential, social scientists have to open their minds and learn new tools, methods and approaches. The ability to analyse and make sense of huge quantities of data that change over time requires mathematical and informatics skills that are usually not included in the training of the average social scientist. But acquiring new mathematical competences may not be enough. The majority of research psychologists, for example, are not familiar with new technologies such as mobile computing tools, sensors or virtual environments. Yet these tools may become for psychology what microscopes are for biology or telescopes for astronomy.

If social scientists open their minds to this new horizon, their impact on society could be at least as revolutionary as the one natural scientists have produced over the last two centuries. The emergence of computational social science will not only allow scientists to predict many social phenomena, but also to unify levels of analysis that have until now been addressed separately, e.g. the neuro-psychological and the psycho-social levels.

At the same time, the transformative potential of this emerging science also requires careful reflection on its ethical implications for the protection of participants’ privacy.



The Human Brain Project awarded $1.6 billion by the EU

At the end of January, the European Commission officially announced the selection of the Human Brain Project (HBP) as one of its two FET Flagship projects. Federating more than 80 European and international research institutions, the Human Brain Project is planned to last ten years (2013-2023), at an estimated cost of 1.19 billion euros.


The project is the first attempt to “reconstruct the brain piece by piece” by building a virtual brain in a supercomputer. Led by neuroscientist Henry Markram, the effort began in 2005 as a joint research initiative between the Brain Mind Institute at the École Polytechnique Fédérale de Lausanne (EPFL) and the information technology giant IBM.


Using the impressive processing power of IBM’s Blue Gene/L supercomputer, the project reached its first milestone in December 2006, simulating a rat cortical column. As of July 2012, Henry Markram’s team had achieved the simulation of mesocircuits containing approximately 1 million neurons and 1 billion synapses (comparable to the number of nerve cells in a honey bee brain). The next step, planned for 2014, will be the modelling of a cellular rat brain, with 100 mesocircuits totalling a hundred million cells. Finally, the team plans to simulate a full human brain (86 billion neurons) by the year 2023.

Watch the video overview of the Human Brain Project

Nov 11, 2012

The applification of health



Thanks to the accelerated diffusion of smartphones, the number of mobile healthcare apps has grown exponentially in the past few years. Applications now exist to help patients manage diabetes, share information with peers, and monitor mood, to name just a few examples.


Such “applification” of health is part of a larger trend called “mobile health” (or mHealth), which broadly refers to the provision of health-related services via wireless communications. Mobile health is a fast-growing market: according to a Pew Research report, as early as 2011, 17 percent of mobile users were using their phones to look up health and medical information, and Juniper estimated that 44 million health apps were downloaded in that same year.


The field of mHealth has received a great deal of attention from the scientific community over the past few years, as evidenced by the number of conferences, workshops and publications dedicated to the subject; international healthcare institutions and organizations are also taking mHealth seriously.


For example, the UK Department of Health recently launched the crowdsourcing project Maps and Apps to support the use of existing mobile phone apps and health information maps, as well as to encourage people to put forward ideas for new ones. The initiative resulted in a collection of 500 health apps voted most popular by the public and health professionals, as well as a list of their ideas for new apps. At the moment of writing this post, the top-rated app is Moodscope, an application that allows users to measure, track and record comments on their mood. Other popular apps include HealthUnlocked, an online support network that connects people, volunteers and professionals to help them learn, share and give practical support to one another, and FoodWiz.co, an application created by a mother of children with food allergies, which allows users to scan the bar codes on food to instantly find out which allergens are present. An app to help patients manage diabetes could not be missing from the list: Diabetes UK Tracker allows the patient to enter measurements such as blood glucose, caloric intake and weight, which can be displayed as graphs and shared with doctors; the software also features an area where patients can annotate medical information, personal feelings and thoughts.




The astounding popularity of the Maps and Apps initiative suggests the beginning of a new era in medical informatics, yet this emerging vision is not without caveats. As recently emphasized by Niall Boyce in the June issue of The Lancet Technology, the main concern associated with the use of apps as a self-management tool is the limited evidence of their effectiveness in improving health. Unlike other health interventions, mHealth apps have not been subject to rigorous testing. A potential reason for the lack of randomized evaluations is the fact that most of these apps reach consumers/patients directly, without passing through the traditional medical gatekeepers. However, as Boyce suggests, the availability of trial data would not only benefit patients, but also app developers, who could bring more effective and reliable products to market. A further concern relates to the privacy and security of medical data. Although most smartphone-based medical applications apply state-of-the-art security protocols, the wireless use of these devices opens up new vulnerabilities for patients and medical facilities. A recent bulletin issued by the U.S. Department of Homeland Security lists five of the top mobile medical device security risks:



  1. Insider: The most common ways employees steal data involve network transfer, be that email, remote access, or file transfer;

  2. Malware: These include keystroke loggers and Trojans, tailored to harvest easily accessible data once inside the network;

  3. Spearphishing: This highly-customized technique involves an email-based attack carrying a malicious attachment disguised as coming from a legitimate source, and seeking specific information;

  4. Web: DHS lists silent redirection, obfuscated JavaScript and search engine optimization poisoning among ways to penetrate a network then, ultimately, access an organization’s data;

  5. Lost equipment: A significant problem because it happens so frequently, even a smartphone in the wrong hands can be a gateway into a health entity’s network and records. And the more that patient information is stored electronically, the greater the number of people potentially affected when equipment is lost or stolen.


In conclusion, the “applification of healthcare” is at once a great opportunity for patients and a great responsibility for medical professionals and developers. In order to exploit this opportunity while mitigating the risks, it is essential to put in place quality evaluation procedures that make it possible to monitor and optimize the effectiveness of these applications according to evidence-based standards. For example, iMedicalApps provides independent reviews of mobile medical technology and applications by a team of physicians and medical students. Founded by Dr. Iltifat Husain, an emergency medicine resident at the Wake Forest University School of Medicine, iMedicalApps has been cited by the Cochrane Collaboration as a trusted, evidence-based Web 2.0 website.


More to explore:


Read the PVC report: Current and future state of mhealth (PDF FULL TEXT)


Watch the MobiHealthNews video report: What is mHealth?