Jul 18, 2013
Military Open Simulator Enterprise Strategy (MOSES) is secure virtual world software designed to evaluate the ability of OpenSimulator to provide independent access to a persistent, virtual world. MOSES is a research project of the United States Army Simulation and Training Technology Center (STTC). STTC’s Virtual World Strategic Applications team uses OpenSimulator to add capability and flexibility to virtual training scenarios.
For the first time, scientists at Carnegie Mellon University have identified which emotion a person is experiencing based on brain activity.
The study, published in the June 19 issue of PLOS ONE, combines functional magnetic resonance imaging (fMRI) and machine learning to measure brain signals and accurately read emotions in individuals. Led by researchers in CMU’s Dietrich College of Humanities and Social Sciences, the findings illustrate how the brain categorizes feelings, giving researchers the first reliable process to analyze emotions. Until now, research on emotions has long been stymied by the lack of reliable methods to evaluate them, mostly because people are often reluctant to honestly report their feelings. Further complicating matters is that many emotional responses may not be consciously experienced.
Identifying emotions based on neural activity builds on previous discoveries by CMU’s Marcel Just and Tom M. Mitchell, which used similar techniques to create a computational model that identifies individuals’ thoughts of concrete objects, often dubbed “mind reading.”
“This research introduces a new method with potential to identify emotions without relying on people’s ability to self-report,” said Karim Kassam, assistant professor of social and decision sciences and lead author of the study. “It could be used to assess an individual’s emotional response to almost any kind of stimulus, for example, a flag, a brand name or a political candidate.”
One challenge for the research team was to find a way to repeatedly and reliably evoke different emotional states from the participants. Traditional approaches, such as showing subjects emotion-inducing film clips, would likely have been unsuccessful because the impact of film clips diminishes with repeated viewing. The researchers solved the problem by recruiting actors from CMU’s School of Drama.
“Our big breakthrough was my colleague Karim Kassam’s idea of testing actors, who are experienced at cycling through emotional states. We were fortunate, in that respect, that CMU has a superb drama school,” said George Loewenstein, the Herbert A. Simon University Professor of Economics and Psychology.
For the study, 10 actors were scanned at CMU’s Scientific Imaging & Brain Research Center while viewing the words of nine emotions: anger, disgust, envy, fear, happiness, lust, pride, sadness and shame. While inside the fMRI scanner, the actors were instructed to enter each of these emotional states multiple times, in random order.
Another challenge was to ensure that the technique was measuring emotions per se, and not the act of trying to induce an emotion in oneself. To meet this challenge, a second phase of the study presented participants with neutral and disgusting photos that they had not seen before. The computer model, constructed by statistically analyzing the fMRI activation patterns gathered for 18 emotional words, had learned the emotion patterns from self-induced emotions. It was nonetheless able to correctly identify the emotional content of the photos from the brain activity of the viewers.
To identify emotions within the brain, the researchers first used the participants’ neural activation patterns in early scans to identify the emotions experienced by the same participants in later scans. The computer model achieved a rank accuracy of 0.84. Rank accuracy refers to the percentile rank of the correct emotion in an ordered list of the computer model guesses; random guessing would result in a rank accuracy of 0.50.
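Rank accuracy is straightforward to compute from a model's per-class scores. The sketch below is illustrative only (not the study's actual code) and assumes the model outputs one score per emotion for each trial, with higher meaning more likely:

```python
import numpy as np

def rank_accuracy(scores, true_labels):
    """Mean percentile rank of the correct class in the ordered guess list.

    scores: (n_trials, n_classes) array of model scores, higher = more likely.
    true_labels: length-n_trials sequence of correct class indices.
    Returns 1.0 for a perfect model; random guessing averages about 0.5.
    """
    scores = np.asarray(scores, dtype=float)
    n_classes = scores.shape[1]
    percentiles = []
    for s, y in zip(scores, true_labels):
        order = np.argsort(-s)                  # best guess first
        rank = int(np.where(order == y)[0][0])  # 0 = top guess
        percentiles.append(1.0 - rank / (n_classes - 1))
    return float(np.mean(percentiles))
```

With nine emotions, a model that always ranks the correct emotion first scores 1.0, one that always ranks it last scores 0.0, and a uniformly random ordering averages the 0.50 chance level cited above.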
Next, the team took the machine learning analysis of the self-induced emotions to guess which emotion the subjects were experiencing when they were exposed to the disgusting photographs. The computer model achieved a rank accuracy of 0.91. With nine emotions to choose from, the model listed disgust as the most likely emotion 60 percent of the time and as one of its top two guesses 80 percent of the time.
Finally, they applied machine learning analysis of neural activation patterns from all but one of the participants to predict the emotions experienced by the hold-out participant. This answers an important question: If we took a new individual, put them in the scanner and exposed them to an emotional stimulus, how accurately could we identify their emotional reaction? Here, the model achieved a rank accuracy of 0.71, once again well above the chance guessing level of 0.50.
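This cross-participant test is a leave-one-subject-out scheme: train on every participant except one, test on the hold-out, and repeat for each participant in turn. The following minimal sketch illustrates the idea using a simple nearest-centroid classifier as a hypothetical stand-in for the study's actual machine learning model:

```python
import numpy as np

def loso_rank_accuracy(X, y, subject_ids, n_classes):
    """Leave-one-subject-out rank accuracy with a nearest-centroid classifier.

    X: (n_trials, n_features) activation patterns
    y: correct class index per trial
    subject_ids: participant index per trial
    """
    X, y, subject_ids = map(np.asarray, (X, y, subject_ids))
    percentiles = []
    for held_out in np.unique(subject_ids):
        train = subject_ids != held_out
        # "model": mean activation pattern per emotion across training subjects
        centroids = np.stack([X[train & (y == c)].mean(axis=0)
                              for c in range(n_classes)])
        # rank emotions for each held-out trial by distance to each centroid
        test_X = X[~train]
        d = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
        for row, true in zip(np.argsort(d, axis=1), y[~train]):
            rank = int(np.where(row == true)[0][0])  # 0 = closest centroid
            percentiles.append(1.0 - rank / (n_classes - 1))
    return float(np.mean(percentiles))
```

A result well above the 0.50 chance level on held-out participants indicates, as in the study, that the emotion signatures generalize across brains rather than being idiosyncratic to each individual.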
“Despite manifest differences between people’s psychology, different people tend to neurally encode emotions in remarkably similar ways,” noted Amanda Markey, a graduate student in the Department of Social and Decision Sciences.
A surprising finding from the research was that almost equivalent accuracy levels could be achieved even when the computer model made use of activation patterns in only one of a number of different subsections of the human brain.
“This suggests that emotion signatures aren’t limited to specific brain regions, such as the amygdala, but produce characteristic patterns throughout a number of brain regions,” said Vladimir Cherkassky, senior research programmer in the Psychology Department.
The research team also found that while on average the model ranked the correct emotion highest among its guesses, it was best at identifying happiness and least accurate in identifying envy. It rarely confused positive and negative emotions, suggesting that these have distinct neural signatures. And, it was least likely to misidentify lust as any other emotion, suggesting that lust produces a pattern of neural activity that is distinct from all other emotional experiences.
Just, the D.O. Hebb University Professor of Psychology, director of the university’s Center for Cognitive Brain Imaging and leading neuroscientist, explained, “We found that three main organizing factors underpinned the emotion neural signatures, namely the positive or negative valence of the emotion, its intensity — mild or strong, and its sociality — involvement or non-involvement of another person. This is how emotions are organized in the brain.”
In the future, the researchers plan to apply this new identification method to a number of challenging problems in emotion research, including identifying emotions that individuals are actively attempting to suppress and multiple emotions experienced simultaneously, such as the combination of joy and envy one might experience upon hearing about a friend’s good fortune.
Jun 05, 2013
I am really excited about this new interest in Virtual Reality, since I have been a fan of this technology for quite some time.
The latest news is that the virtual reality gaming device Omni launched a funding campaign on Kickstarter on Tuesday and, in a few hours, more than doubled its goal of $150,000!
It's also good news that VR hardware has been dropping dramatically in price in the last couple of years - I am thinking of the incredible value-for-money of the Oculus Rift (dev kit: $300, consumer version: unknown - rumored <$300).
I had the privilege of trying out the Oculus (thanks to my friend Giuseppe, who got one in advance by supporting the project on Kickstarter) and I must say, it's just AMAZING.
The last time I had this feeling was when I tested a CAVE system for the first time, during my internship at the CCVR. And now, 10 years after my first encounter with immersive VR, I am just as excited as I was then and look forward to trying all the novelties that are coming very soon.
Dravet's syndrome is a rare genetic dysfunction of the brain with onset during the first year in an otherwise healthy infant. Despite the devastating nature of this condition, its rarity and relatively new distinction as its own syndrome mean that there is little formal research available.
I was introduced to this disease by a friend of mine and wanted to help in some way. If you also want to give a contribution, you can do it by getting in touch with the Dravet Syndrome Foundation or, if you are from Italy, with the Italian one.
Your donation will help fight this disease. Hope you can help - even a visit to these associations will do.
Jun 03, 2013
May 29, 2013
We hope you will join us in Los Angeles, California, on Saturday June 29th to attend the Symposium on Positive Technology.
The special session features talks by key PT researchers and is a great opportunity to meet, share ideas and build the future of this exciting research field!
Download the conference program here (PDF)
May 26, 2013
Cross-Brain Neurofeedback: Scientific Concept and Experimental Platform.
PLoS One. 2013;8(5):e64590
Authors: Duan L, Liu WJ, Dai RN, Li R, Lu CM, Huang YX, Zhu CZ
Abstract. The present study described a new type of multi-person neurofeedback with the neural synchronization between two participants as the direct regulating target, termed as "cross-brain neurofeedback." As a first step to implement this concept, an experimental platform was built on the basis of functional near-infrared spectroscopy, and was validated with a two-person neurofeedback experiment. This novel concept as well as the experimental platform established a framework for investigation of the relationship between multiple participants' cross-brain neural synchronization and their social behaviors, which could provide new insight into the neural substrate of human social interactions.
A Hybrid Brain-Computer Interface-Based Mail Client.
Comput Math Methods Med. 2013;2013:750934
Authors: Yu T, Li Y, Long J, Li F
Abstract. Brain-computer interface-based communication plays an important role in brain-computer interface (BCI) applications; electronic mail is one of the most common communication tools. In this study, we propose a hybrid BCI-based mail client that implements electronic mail communication by means of real-time classification of multimodal features extracted from scalp electroencephalography (EEG). With this BCI mail client, users can receive, read, write, and attach files to their mail. Using a BCI mouse that utilizes hybrid brain signals, that is, motor imagery and P300 potential, the user can select and activate the function keys and links on the mail client graphical user interface (GUI). An adaptive P300 speller is employed for text input. The system has been tested with 6 subjects, and the experimental results validate the efficacy of the proposed method.
Using Music as a Signal for Biofeedback.
Int J Psychophysiol. 2013 Apr 23;
Authors: Bergstrom I, Seinfeld S, Arroyo-Palacios J, Slater M, Sanchez-Vives MV
Abstract. Studies on the potential benefits of conveying biofeedback stimulus using a musical signal have appeared in recent years with the intent of harnessing the strong effects that music listening may have on subjects. While results are encouraging, the fundamental question has yet to be addressed, of how combined music and biofeedback compares to the already established use of either of these elements separately. This experiment, involving young adults (N=24), compared the effectiveness at modulating participants' states of physiological arousal of each of the following conditions: A) listening to pre-recorded music, B) sonification biofeedback of the heart rate, and C) an algorithmically modulated musical feedback signal conveying the subject's heart rate. Our hypothesis was that each of the conditions (A), (B) and (C) would differ from the other two in the extent to which it enables participants to increase and decrease their state of physiological arousal, with (C) being more effective than (B), and both more than (A). Several physiological measures and qualitative responses were recorded and analyzed. Results show that using musical biofeedback allowed participants to modulate their state of physiological arousal at least equally well as sonification biofeedback, and much better than just listening to music, as reflected in their heart rate measurements, controlling for respiration-rate. Our findings indicate that the known effects of music in modulating arousal can therefore be beneficially harnessed when designing a biofeedback protocol.
Application of alpha/theta neurofeedback and heart rate variability training to young contemporary dancers: State anxiety and creativity.
Int J Psychophysiol. 2013 May 15;
Authors: Gruzelier JH, Thompson T, Redding E, Brandt R, Steffert T
Abstract. As one in a series on the impact of EEG-neurofeedback in the performing arts, we set out to replicate a previous dance study in which alpha/theta (A/T) neurofeedback and heart rate variability (HRV) biofeedback enhanced performance in competitive ballroom dancers compared with controls. First year contemporary dance conservatoire students were randomised to the same two psychophysiological interventions or a choreology instruction comparison group or a no-training control group. While there was demonstrable neurofeedback learning, there was no impact of the three interventions on dance performance as assessed by four experts. However, HRV training reduced anxiety and the reduction correlated with improved technique and artistry in performance; the anxiety scale items focussed on autonomic functions, especially cardiovascular activity. In line with the putative impact of hypnogogic training on creativity A/T training increased cognitive creativity with the test of unusual uses, but not insight problems. Methodological and theoretical implications are considered.
May 24, 2013
Are you interested in Positive Technology? Then come and join us on LinkedIn!
Our new group is the place to share expertise and brilliant ideas on positive applications of technology!
May 06, 2013
With the rapid adoption of mobile technologies and the proliferation of smartphones, new opportunities are emerging for the delivery of mental health services. And indeed, psychologists are starting to realize this potential: a recent survey by Luxton and colleagues (2011) identified over 200 smartphone apps focused on behavioral health, covering a wide range of disorders, including developmental disorders, cognitive disorders, substance-related disorders as well as psychotic and mood disorders. These applications are used in behavioral health for several purposes, the most common of which are health education, assessment, homework and monitoring progress of treatment.
For example, T2 MoodTracker is an application that allows users to self-monitor, track and reference their emotional experience over a period of days, weeks and months using a visual analogue rating scale. Using this application, patients can self-monitor emotional experiences associated with common deployment-related behavioral health issues like post-traumatic stress, brain injury, life stress, depression and anxiety. Self-monitoring results can be used as a self-help tool or shared with a therapist or health care professional, providing a record of the patient’s emotional experience over a selected time frame.
Measuring objective correlatives of subjectively-reported emotional states is an important concern in research and clinical applications. Physiological and physical activity information provide mental health professionals with integrative measures, which can be used to improve understanding of patients’ self-reported feelings and emotions.
The combined use of wearable biosensors and smartphones offers unprecedented opportunities to collect, process and transmit real-time body signals to a remote therapist. This approach also allows patients to collect real-time information about their own health conditions and to identify specific trends. Insights gained from this feedback can empower users to engage with and manage their own health status, minimizing interaction with other health care actors. One such tool is MyExperience, an open-source mobile platform that combines sensing and self-report to collect both quantitative and qualitative data on user experience and activity.
Other applications are designed to empower users with information for making better decisions, preventing life-style related conditions and preserving/enhancing cognitive performance. For example, BeWell monitors different user activities (sleep, physical activity, social interaction) and provides feedback to promote healthier lifestyle decisions.
Besides applications in mental health and wellbeing, smartphones are increasingly used in psychological research. The potential of this approach has been recently discussed by Geoffrey Miller in a review entitled “The Smartphone Psychology Manifesto”. According to Miller, smartphones can be effectively used to collect large quantities of ecologically valid data, in an easier and quicker way than other available research methodologies. Since the smartphone is becoming one of the most pervasive devices in our lives, it provides access to domains of behavioral data not previously available without either constant observation or reliance on self-reports only.
For example, the INTERSTRESS project, which I am coordinating, developed PsychLog, a psycho-physiological mobile data collection platform for mental health research. This free, open source experience sampling platform for Windows Mobile allows collecting self-reported psychological data as well as ECG data via a Bluetooth ECG sensor unit worn by the user. Although PsychLog offers fewer features than more advanced experience sampling platforms, it can be easily configured even by researchers with no programming skills.
In summary, the use of smartphones can have a significant impact on both psychological research and practice. However, there is still limited evidence of the effectiveness of this approach. As for other mHealth applications, few controlled trials have tested the potential of mobile technology interventions in improving mental health care delivery processes. Therefore, further research is needed in order to determine the real cost-effectiveness of mobile cybertherapy applications.
Apr 05, 2013
Researchers at Vanderbilt University are studying the potential benefits of using human-looking robots as tools to help kids with autism spectrum disorder (ASD) improve their communication skills. The programmable NAO robot used in the study was developed by Aldebaran Robotics out of Paris, France, and offers the ability to be part of a larger, smarter system.
Though a child might feel like the pink-eyed humanoid is an autonomous being, the NAO robot that the team is using is actually hooked up to computers and external cameras that track the child’s movements. Using the newly developed ARIA (Adaptive Robot-Mediated Intervention Architecture) protocol, the researchers found that children paid more attention to NAO and followed along in exercises almost as well as with a human adult therapist.
Mar 11, 2013
Is virtual reality always an effective stressor for exposure treatments? Some insights from a controlled trial.
BMC psychiatry, 13(1) p. 52, 2013
Federica Pallavicini, Pietro Cipresso, Simona Raspelli, Alessandra Grassi, Silvia Serino, Cinzia Vigna, Stefano Triberti, Marco Villamira, Andrea Gaggioli, Giuseppe Riva
Abstract. Several research studies investigating the effectiveness of the different treatments have demonstrated that exposure-based therapies are more suitable and effective than others for the treatment of anxiety disorders. Traditionally, exposure may be achieved in two manners: in vivo, with direct contact to the stimulus, or by imagery, in the person’s imagination. However, despite its effectiveness, both types of exposure present some limitations that supported the use of Virtual Reality (VR). But is VR always an effective stressor? Are the technological breakdowns that may appear during such an experience a possible risk for its effectiveness? (...)
Full paper available here (open access)
Mar 03, 2013
“The new technology is a major breakthrough that has many advantages over current technology, which provides very limited functionality to patients with missing limbs,” Brånemark says.
Presently, robotic prostheses rely on electrodes placed over the skin to pick up the muscles’ electrical activity, which drives a few actions by the prosthesis. The problem with this approach is that normally only two functions are regained out of the tens of different movements an able-bodied person is capable of. By using implanted electrodes, more signals can be retrieved, and therefore control of more movements is possible. Furthermore, it is also possible to provide the patient with natural perception, or “feeling”, through neural stimulation.
“We believe that implanted electrodes, together with a long-term stable human-machine interface provided by the osseointegrated implant, is a breakthrough that will pave the way for a new era in limb replacement,” says Rickard Brånemark.
Read full story
The Japanese communication robot destined to join the crew aboard the International Space Station (ISS) this summer recently underwent some zero gravity testing. The Kibo Robot Project, organized by Dentsu Inc. in response to a proposal made by the Japan Aerospace Exploration Agency, unveiled the final design of its diminutive humanoid robot and its Earthbound counterpart.
Watch the video:
1st International Workshop on Intelligent Digital Games for Empowerment and Inclusion
14 May 2013, Chania, Crete, Greece
chaired by Björn Schuller, Lucas Paletta, Nicolas Sabouret
Paper submission deadline: 11 March 2013
Digital Games for Empowerment and Inclusion have the potential to change our society in a most positive way by preparing selected groups, in a playful and fun way, for the social and special situations of their everyday lives. Exemplary domains range from children with Autism Spectrum Condition to young adults preparing for their first job interviews or migrants familiarizing themselves with their new environment. The current generation of such games increasingly demands computational intelligence algorithms to help analyze players’ behavior and monitor their motivation and interest in order to adapt game progress. The development of such games thus usually requires expertise from the general gaming domain, but in particular also from a game’s target domain, besides the technological savoir-faire to provide intelligent analysis and reaction solutions. IDGEI 2013 aims at bridging these communities and disciplines by inviting researchers and experts to discuss their latest perspectives and findings in the field of Intelligent Digital Games for Empowerment and Inclusion.
Suggested workshop topics include, but are by no means limited to:
- Machine Intelligence in Serious Games
- Mobile and Real-World Serious Gaming
- Emotion & Affect in Serious Games
- Player Behavior and Attention Modeling
- Player-Adaptation and Motivation
- Security & Privacy Preservation
- Novel Serious Games
- User Studies & Tests of Serious Games
Researchers at Duke University Medical Center in the US reported on February 28, 2013 in Scientific Reports that they had successfully wired together sensory areas in the brains of two rats. The result of the experiment is that one rat will respond to the experiences to which the other is exposed.
The results of these projects suggest the future potential for linking multiple brains to form what the research team is calling an "organic computer," which could allow sharing of motor and sensory information among groups of animals.
"Our previous studies with brain-machine interfaces had convinced us that the rat brain was much more plastic than we had previously thought," said Miguel Nicolelis, M.D., PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine. "In those experiments, the rat brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we asked was, ‘if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?’"
To test this hypothesis, the researchers first trained pairs of rats to solve a simple problem: to press the correct lever when an indicator light above the lever switched on, which rewarded the rats with a sip of water. They next connected the two animals' brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.
One of the two rodents was designated as the "encoder" animal. This animal received a visual cue that showed it which lever to press in exchange for a water reward. Once this “encoder” rat pressed the right lever, a sample of its brain activity that coded its behavioral decision was translated into a pattern of electrical stimulation that was delivered directly into the brain of the second rat, known as the "decoder" animal.
The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward. Therefore, to press the correct lever and receive the reward it craved, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain interface.
The researchers then conducted trials to determine how well the decoder animal could decipher the brain input from the encoder rat to choose the correct lever. The decoder rat ultimately achieved a maximum success rate of about 70 percent, only slightly below the possible maximum success rate of 78 percent that the researchers had theorized was achievable based on success rates of sending signals directly to the decoder rat’s brain.
Importantly, the communication provided by this brain-to-brain interface was two-way. For instance, the encoder rat did not receive a full reward if the decoder rat made a wrong choice. The result of this peculiar contingency, said Nicolelis, led to the establishment of a "behavioral collaboration" between the pair of rats.
"We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behavior to make it easier for its partner to get it right," Nicolelis said. "The encoder improved the signal-to-noise ratio of its brain activity that represented the decision, so the signal became cleaner and easier to detect. And it made a quicker, cleaner decision to choose the correct lever to press. Invariably, when the encoder made those adaptations, the decoder got the right decision more often, so they both got a better reward."
In a second set of experiments, the researchers trained pairs of rats to distinguish between a narrow or wide opening using their whiskers. If the opening was narrow, they were taught to nose-poke a water port on the left side of the chamber to receive a reward; for a wide opening, they had to poke a port on the right side.
The researchers then divided the rats into encoders and decoders. The decoders were trained to associate stimulation pulses with the left reward poke as the correct choice, and an absence of pulses with the right reward poke as correct. During trials in which the encoder detected the opening width and transmitted the choice to the decoder, the decoder had a success rate of about 65 percent, significantly above chance.
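Whether a success rate like 65 percent is significantly above the 50 percent chance level depends on the number of trials, and can be checked with a one-sided binomial test. A small sketch follows; the article does not report the trial counts, so the numbers in the usage note below are purely illustrative:

```python
from math import comb

def p_above_chance(successes, trials, chance=0.5):
    """One-sided binomial p-value: the probability of getting at least
    `successes` correct choices in `trials` attempts if the animal
    were simply guessing at the given chance level."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(successes, trials + 1))
```

For example, 65 correct choices out of a hypothetical 100 trials at a 50 percent chance level gives p below 0.01, while 50 out of 100 is entirely consistent with guessing.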
To test the transmission limits of the brain-to-brain communication, the researchers placed an encoder rat in Brazil, at the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN), and transmitted its brain signals over the Internet to a decoder rat in Durham, N.C. They found that the two rats could still work together on the tactile discrimination task.
"So, even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate," said Miguel Pais-Vieira, PhD, a postdoctoral fellow and first author of the study. "This tells us that it could be possible to create a workable network of animal brains distributed in many different locations."
Nicolelis added, "These experiments demonstrated the ability to establish a sophisticated, direct communication linkage between rat brains, and that the decoder brain is working as a pattern-recognition device. So basically, we are creating an organic computer that solves a puzzle."
"But in this case, we are not inputting instructions, but rather only a signal that represents a decision made by the encoder, which is transmitted to the decoder’s brain which has to figure out how to solve the puzzle. So, we are creating a single central nervous system made up of two rat brains,” said Nicolelis. He pointed out that, in theory, such a system is not limited to a pair of brains, but instead could include a network of brains, or “brain-net.” Researchers at Duke and at the ELS-IINN are now working on experiments to link multiple animals cooperatively to solve more complex behavioral tasks.
"We cannot predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves," continued Nicolelis. Such a connection might even mean that one animal would incorporate another's sense of "self," he said.
"In fact, our studies of the sensory cortex of the decoder rats in these experiments showed that the decoder's brain began to represent in its tactile cortex not only its own whiskers, but the encoder rat's whiskers, too. We detected cortical neurons that responded to both sets of whiskers, which means that the rat created a second representation of a second body on top of its own." Basic studies of such adaptations could lead to a new field that Nicolelis calls the "neurophysiology of social interaction."
Such complex experiments will be enabled by the laboratory's ability to record brain signals from almost 2,000 brain cells at once. The researchers hope to record the electrical activity produced simultaneously by 10,000 to 30,000 cortical neurons in the next five years.
Such massive brain recordings will enable more precise control of motor neuroprostheses—such as those being developed by the Walk Again Project—to restore motor control to paralyzed people, Nicolelis said.
More to explore:
Sci. Rep. 3, 1319 (2013).
Date: 17th, 18th and 19th July 2013
Location: Paris, France
The 2nd HCC summer school aims to share scientific knowledge and experience among participants, enhance and stimulate interdisciplinary dialogue as well as provide further opportunities for co-operation within the study domains of Human Computer Confluence.
The topics of the summer school will be framed around the following issues:
• re-experience yourself,
• experience being others,
• experience being together in more powerful ways,
• experience other environments,
• experience new senses,
• experience abstract data spaces.
The 2nd HCC summer school will try to benefit most from the research interests and the special facilities of the IRCAM institute, the latter a place dedicated to the coupling of art with the sciences of sound and media. Special attention will be given to the following thematic categories:
• Musical interfaces
• Interactive sound design
• Sensorimotor learning and gesture-sound interactive systems
• Crowdsourcing and human computation approaches in artistic applications
The three-day summer school will include invited lectures by experts in the field, a round-table and practical workshops. During the workshops, participants will engage in hands-on HCC group projects that they will present at the end of the summer school.
• Isabelle Viaud-Delmon, Acoustic and cognitive spaces team, CNRS - IRCAM, France.
• Andrea Gaggioli, Department of Psychology, UCSC, Milan, Italy.
• Stephen Dunne, Neuroscience Department, STARLAB, Barcelona, Spain.
• Alois Ferscha, Pervasive computing lab, Johannes Kepler Universität Linz, Austria.
• Fivos Maniatakos, Acoustic and Cognitive Spaces Group, IRCAM, France.
• Isabelle Viaud-Delmon, IRCAM
• Hugues Vinet, IRCAM
• Marine Taffou, IRCAM
• Sylvie Benoit, IRCAM
• Fivos Maniatakos, IRCAM
Feb 08, 2013
Scientific disciplines are usually classified in two broad categories: natural sciences and social sciences. Natural sciences investigate the physical, chemical and biological aspects of Earth, the Universe and the life forms that inhabit it. Social sciences (also called human sciences) focus on the origin and development of human beings, societies, institutions, social relationships, etc.
Natural sciences are often regarded as “hard” research disciplines, because they are based on precise numerical predictions about experimental data. Social sciences, on the other hand, are seen as “soft” because they tend to rely on more descriptive approaches to understand their object of study.
So, for example, while it has been possible to predict the existence and properties of the Higgs boson from the Standard Model of particle physics, it is not possible to predict the existence and properties of a psychological effect or phenomenon, at least with the same level of precision.
However, the most important difference between natural and social sciences is not in their final objective (since in both fields, hypotheses must be tested by empirical approaches), but in the methods and tools that they use to pursue that objective. Galileo Galilei argued that we cannot understand the universe “(…) if we do not first learn the language and grasp the symbols, in which it is written. This book is written in the mathematical language, and the symbols are triangles, circles and other geometrical figures, without whose help it is impossible to comprehend a single word of it; without which one wanders in vain through a dark labyrinth.”
But unlike astronomy, physics and chemistry, which are able to read the “book of nature” using increasingly sophisticated instruments (such as microscopes, telescopes, etc.), social sciences have no such tools to investigate social and mental processes within their natural contexts. To understand these phenomena, researchers can either focus on macroscopic aggregates of behaviors (e.g. sociology) or analyse microscopic aspects within controlled settings (e.g. psychology).
Despite these limitations, there is no doubt that social sciences have produced interesting findings: today we know much more about the human being than we did a century ago. But at the same time, advances in natural sciences have been far more impressive and groundbreaking. From the discovery of atomic energy to the sequencing of the human genome, natural sciences have changed our life and could change it even more in the coming decades.
However, thanks to the explosive growth of information and communication technologies, this state of affairs may soon change and lead to a paradigm shift in the way social phenomena are investigated. Thanks to the pervasive diffusion of the Internet and mobile computing devices, most of our daily activities leave a digital footprint, which provides data on what we have done and with whom.
Every single hour of our life produces an observable trace, which can be translated into numbers and aggregated to identify specific patterns or trends that would be otherwise impossible to quantify.
Thanks to the emergence of cloud computing, we are now able to collect these digital footprints in large online databases, which researchers can access for scientific purposes. For social scientists, these databases represent the “book written in the mathematical language”, which they can finally read. An enormous amount of data is already available - embedded in online social networks and organizations’ digital archives, or saved in the internal memory of our smartphones, tablets and PCs - although it is not always accessible (because it lies within the domain of private companies and government agencies).
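The kind of aggregation described here - translating timestamped digital traces into numbers and looking for recurring patterns - can be sketched in a few lines. The data, the function name and the activity labels below are hypothetical, chosen purely for illustration:

```python
from collections import Counter
from datetime import datetime

# Hypothetical digital footprints: (user, ISO timestamp, activity type).
traces = [
    ("alice", "2013-02-08T09:15:00", "email"),
    ("alice", "2013-02-08T09:40:00", "social_post"),
    ("bob",   "2013-02-08T13:05:00", "check_in"),
    ("alice", "2013-02-09T09:20:00", "email"),
    ("bob",   "2013-02-09T13:30:00", "email"),
]

def hourly_pattern(records):
    """Aggregate raw traces into activity counts per (user, hour of day)."""
    counts = Counter()
    for user, ts, _activity in records:
        hour = datetime.fromisoformat(ts).hour
        counts[(user, hour)] += 1
    return counts

pattern = hourly_pattern(traces)
# The aggregate reveals a temporal pattern invisible in any single trace:
# alice's activity clusters around 9am, bob's around 1pm.
print(pattern.most_common(2))  # → [(('alice', 9), 3), (('bob', 13), 2)]
```

Real computational social science works at vastly larger scale and with messier data, but the principle is the same: individual traces are meaningless in isolation, while their aggregation exposes regularities of behavior that can be quantified and tested.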
Social scientists are starting to realize that the advent of “big data” is offering unprecedented opportunities for advancing their disciplines. For example, Lazer and colleagues recently published in Science (2009, 323:5915, pp. 721-723) a sort of “manifesto” of Computational Social Science, explaining the potential of this approach for collecting and analyzing data at a scale that may reveal patterns of individual and group behavior.
However, in order to exploit this potential, social scientists have to open their minds to new tools, methods and approaches. The ability to analyse and make sense of huge quantities of data that change over time requires mathematical and informatics skills that are not usually included in the training of the average social scientist. But acquiring new mathematical competences may not be enough. Most research psychologists, for example, are not familiar with new technologies such as mobile computing tools, sensors or virtual environments. Yet these tools may become for psychology what microscopes are for biology or telescopes are for astronomy.
If social scientists open their minds to this new horizon, their impact on society could be at least as revolutionary as the one natural scientists have produced in the last two centuries. The emergence of computational social science will allow scientists not only to predict many social phenomena, but also to unify levels of analysis that have until now been addressed separately, e.g. the neuro-psychological and the psycho-social levels.
At the same time, the transformative potential of this emerging science also requires careful reflection on its ethical implications, particularly the protection of participants’ privacy.