Mar 03, 2013
Kibo space robot underwent zero gravity testing
Via Gizmag
The Japanese communication robot destined to join the crew aboard the International Space Station (ISS) this summer recently underwent some zero gravity testing. The Kibo Robot Project, organized by Dentsu Inc. in response to a proposal made by the Japan Aerospace Exploration Agency, unveiled the final design of its diminutive humanoid robot and its Earthbound counterpart.
Watch the video:
14:29 Posted in AI & robotics | Permalink | Comments (0)
Call for papers - International Workshop on Intelligent Digital Games for Empowerment and Inclusion
1st International Workshop on Intelligent Digital Games for Empowerment and Inclusion
Website: http://idgei.fdg2013.org/
14 May 2013, Chania, Crete, Greece
chaired by Björn Schuller, Lucas Paletta, Nicolas Sabouret
Paper submission deadline: 11 March 2013
Digital Games for Empowerment and Inclusion have the potential to change our society in a most positive way by preparing selected groups, in a playful and fun manner, for the social and special situations of their everyday lives. Example domains range from children with Autism Spectrum Condition to young adults preparing for their first job interviews, or migrants familiarizing themselves with a new environment. The current generation of such games increasingly demands computational intelligence algorithms that help analyze players’ behavior and monitor their motivation and interest in order to adapt game progress. Developing such games thus usually requires expertise from the general gaming domain and, in particular, from a game’s target domain, besides the technological savoir-faire to provide intelligent analysis and reaction solutions. IDGEI 2013 aims at bridging these communities and disciplines by inviting researchers and experts to discuss their latest perspectives and findings in the field of Intelligent Digital Games for Empowerment and Inclusion.
Suggested workshop topics include, but are by no means limited to:
- Machine Intelligence in Serious Games
- Mobile and Real-World Serious Gaming
- Emotion & Affect in Serious Games
- Player Behavior and Attention Modeling
- Player-Adaptation and Motivation
- Security & Privacy Preservation
- Novel Serious Games
- User Studies & Tests of Serious Games
14:24 Posted in Positive Technology events, Serious games | Permalink | Comments (0)
Brain-to-brain communication between rats achieved
From Duke Medicine News and Communications
Researchers at Duke University Medical Center in the US report in the February 28, 2013 issue of Scientific Reports the successful wiring together of sensory areas in the brains of two rats. The result of the experiment is that one rat will respond to the experiences to which the other is exposed.
The results of these projects suggest the future potential for linking multiple brains to form what the research team is calling an "organic computer," which could allow sharing of motor and sensory information among groups of animals.
"Our previous studies with brain-machine interfaces had convinced us that the rat brain was much more plastic than we had previously thought," said Miguel Nicolelis, M.D., PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine. "In those experiments, the rat brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we asked was, ‘if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?’"
To test this hypothesis, the researchers first trained pairs of rats to solve a simple problem: to press the correct lever when an indicator light above the lever switched on, which rewarded the rats with a sip of water. They next connected the two animals' brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.
One of the two rodents was designated as the "encoder" animal. This animal received a visual cue that showed it which lever to press in exchange for a water reward. Once this “encoder” rat pressed the right lever, a sample of its brain activity that coded its behavioral decision was translated into a pattern of electrical stimulation that was delivered directly into the brain of the second rat, known as the "decoder" animal.
The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward. Therefore, to press the correct lever and receive the reward it craved, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain interface.
The researchers then conducted trials to determine how well the decoder animal could decipher the brain input from the encoder rat to choose the correct lever. The decoder rat ultimately achieved a maximum success rate of about 70 percent, only slightly below the possible maximum success rate of 78 percent that the researchers had theorized was achievable based on success rates of sending signals directly to the decoder rat’s brain.
Importantly, the communication provided by this brain-to-brain interface was two-way. For instance, the encoder rat did not receive a full reward if the decoder rat made a wrong choice. This peculiar contingency, said Nicolelis, led to the establishment of a "behavioral collaboration" between the pair of rats.
"We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behavior to make it easier for its partner to get it right," Nicolelis said. "The encoder improved the signal-to-noise ratio of its brain activity that represented the decision, so the signal became cleaner and easier to detect. And it made a quicker, cleaner decision to choose the correct lever to press. Invariably, when the encoder made those adaptations, the decoder got the right decision more often, so they both got a better reward."
In a second set of experiments, the researchers trained pairs of rats to distinguish between a narrow or wide opening using their whiskers. If the opening was narrow, they were taught to nose-poke a water port on the left side of the chamber to receive a reward; for a wide opening, they had to poke a port on the right side.
The researchers then divided the rats into encoders and decoders. The decoders were trained to associate stimulation pulses with the left reward poke as the correct choice, and an absence of pulses with the right reward poke as correct. During trials in which the encoder detected the opening width and transmitted the choice to the decoder, the decoder had a success rate of about 65 percent, significantly above chance.
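As a back-of-the-envelope illustration of what "significantly above chance" means for a two-choice task, an exact binomial test can be sketched as follows. Note that the trial count of 100 below is a hypothetical assumption for illustration; the article does not report the actual number of trials.

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """Exact P(X >= k) for X ~ Binomial(n, p); fine for small n."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 65% correct over 100 two-choice trials (chance = 50%)
p_value = binom_tail(65, 100)
print(p_value < 0.01)  # such a success rate would be very unlikely under chance
```

This one-sided test is only a sketch; the study's actual statistics would depend on the real number of trials per session.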
To test the transmission limits of the brain-to-brain communication, the researchers placed an encoder rat in Brazil, at the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN), and transmitted its brain signals over the Internet to a decoder rat in Durham, N.C. They found that the two rats could still work together on the tactile discrimination task.
"So, even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate," said Miguel Pais-Vieira, PhD, a postdoctoral fellow and first author of the study. "This tells us that it could be possible to create a workable network of animal brains distributed in many different locations."
Nicolelis added, "These experiments demonstrated the ability to establish a sophisticated, direct communication linkage between rat brains, and that the decoder brain is working as a pattern-recognition device. So basically, we are creating an organic computer that solves a puzzle."
"But in this case, we are not inputting instructions, but rather only a signal that represents a decision made by the encoder, which is transmitted to the decoder’s brain which has to figure out how to solve the puzzle. So, we are creating a single central nervous system made up of two rat brains,” said Nicolelis. He pointed out that, in theory, such a system is not limited to a pair of brains, but instead could include a network of brains, or “brain-net.” Researchers at Duke and at the ELS-IINN are now working on experiments to link multiple animals cooperatively to solve more complex behavioral tasks.
"We cannot predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves," continued Nicolelis. Such a connection might even mean that one animal would incorporate another's sense of "self," he said.
"In fact, our studies of the sensory cortex of the decoder rats in these experiments showed that the decoder's brain began to represent in its tactile cortex not only its own whiskers, but the encoder rat's whiskers, too. We detected cortical neurons that responded to both sets of whiskers, which means that the rat created a second representation of a second body on top of its own." Basic studies of such adaptations could lead to a new field that Nicolelis calls the "neurophysiology of social interaction."
Such complex experiments will be enabled by the laboratory's ability to record brain signals from almost 2,000 brain cells at once. The researchers hope to record the electrical activity produced simultaneously by 10,000 to 30,000 cortical neurons within the next five years.
Such massive brain recordings will enable more precise control of motor neuroprostheses—such as those being developed by the Walk Again Project—to restore motor control to paralyzed people, Nicolelis said.
More to explore:
Sci. Rep. 3, 1319 (2013). PUBMED
14:20 Posted in Brain-computer interface, Future interfaces | Permalink | Comments (0)
2nd Summer School on Human Computer Confluence
Date: 17th, 18th and 19th July 2013
Venue: IRCAM
Location: Paris, France
Website: http://www.ircam.fr/
The 2nd HCC summer school aims to share scientific knowledge and experience among participants, to enhance and stimulate interdisciplinary dialogue, and to provide further opportunities for cooperation within the study domains of Human Computer Confluence.
The topics of the summer school will be framed around the following issues:
• re-experience yourself,
• experience being others,
• experience being together in more powerful ways,
• experience other environments,
• experience new senses,
• experience abstract data spaces.
The 2nd HCC summer school will draw on the research interests and special facilities of IRCAM, an institute dedicated to the coupling of art with the sciences of sound and media. Special attention will be given to the following thematic categories:
• Musical interfaces
• Interactive sound design
• Sensorimotor learning and gesture-sound interactive systems
• Crowdsourcing and human computation approaches in artistic applications
The three-day summer school will include invited lectures by experts in the field, a round-table and practical workshops. During the workshops, participants will engage in hands-on HCC group projects that they will present at the end of the summer school.
Program committee
• Isabelle Viaud-Delmon, Acoustic and cognitive spaces team, CNRS - IRCAM, France.
• Andrea Gaggioli, Department of Psychology, UCSC, Milan, Italy.
• Stephen Dunne, Neuroscience Department, STARLAB, Barcelona, Spain.
• Alois Ferscha, Pervasive computing lab, Johannes Kepler Universität Linz, Austria.
• Fivos Maniatakos, Acoustic and Cognitive Spaces Group, IRCAM, France.
Organisation committee
• Isabelle Viaud-Delmon, IRCAM
• Hugues Vinet, IRCAM
• Marine Taffou, IRCAM
• Sylvie Benoit, IRCAM
• Fivos Maniatakos, IRCAM
14:00 Posted in Future interfaces, Positive Technology events | Permalink | Comments (0)
Feb 08, 2013
Social sciences get into the Cloud
Scientific disciplines are usually classified into two broad categories: natural sciences and social sciences. The natural sciences investigate the physical, chemical and biological aspects of Earth, the Universe and the life forms that inhabit it. The social sciences (also called the human sciences) focus on the origin and development of human beings, societies, institutions, social relationships, etc.
Natural sciences are often regarded as “hard” research disciplines, because they are based on precise numeric predictions about experimental data. Social sciences, on the other hand, are seen as “soft” because they tend to rely on more descriptive approaches to understand their object of study.
So, for example, while it has been possible to predict the existence and properties of the Higgs boson from the Standard Model of particle physics, it is not possible to predict the existence and properties of a psychological effect or phenomenon, at least with the same level of precision.
However, the most important difference between natural and social sciences is not in their final objective (since in both fields, hypotheses must be tested by empirical approaches), but in the methods and tools that they use to pursue that objective. Galileo Galilei argued that we cannot understand the universe “(…) if we do not first learn the language and grasp the symbols, in which it is written. This book is written in the mathematical language, and the symbols are triangles, circles and other geometrical figures, without whose help it is impossible to comprehend a single word of it; without which one wanders in vain through a dark labyrinth.”
But unlike astronomy, physics and chemistry, which can read the “book of nature” using increasingly sophisticated instruments (such as microscopes and telescopes), the social sciences have no such tools to investigate social and mental processes within their natural contexts. To understand these phenomena, researchers can either focus on macroscopic aggregates of behaviors (as sociology does) or analyse microscopic aspects within controlled settings (as psychology does).
Despite these limitations, there is no doubt that the social sciences have produced interesting findings: today we know much more about the human being than we did a century ago. At the same time, however, advances in the natural sciences have been far more impressive and groundbreaking. From the discovery of atomic energy to the sequencing of the human genome, the natural sciences have changed our lives and could do so even more in the coming decades.
However, thanks to the explosive growth of information and communication technologies, this state of affairs may soon change, leading to a paradigm shift in the way social phenomena are investigated. Thanks to the pervasive diffusion of the Internet and mobile computing devices, most of our daily activities leave a digital footprint, which provides data on what we have done and with whom.
Every single hour of our life produces an observable trace, which can be translated into numbers and aggregated to identify specific patterns or trends that would be otherwise impossible to quantify.
Thanks to the emergence of cloud computing, we are now able to collect these digital footprints in large online databases, which can be accessed by researchers for scientific purposes. These databases represent for social scientists the “book written in the mathematical language” that they can finally read. An enormous amount of data is already available – embedded in online social networks and organizations’ digital archives, or saved in the internal memory of our smartphones, tablets and PCs – although it is not always accessible (because much of it is held by private companies and government agencies).
Social scientists are starting to realize that the advent of “big data” offers unprecedented opportunities for advancing their disciplines. For example, Lazer and colleagues recently published in Science (2009, 323:5915, pp. 721-723) a sort of “manifesto” of Computational Social Science, in which they explain the potential of this approach for collecting and analyzing data at a scale that may reveal patterns of individual and group behavior.
However, in order to exploit this potential, social scientists have to open their minds to new tools, methods and approaches. The ability to analyse and make sense of huge quantities of data that change over time requires mathematical and informatics skills that are usually not included in the training of the average social scientist. But acquiring new mathematical competences may not be enough. Most research psychologists, for example, are not familiar with new technologies such as mobile computing tools, sensors or virtual environments. Yet these tools may become for psychology what microscopes are for biology or telescopes are for astronomy.
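As a toy illustration of this kind of analysis (all names and events below are invented), a few timestamped interaction logs can be aggregated into simple behavioral profiles with a handful of lines of Python:

```python
from collections import Counter
from datetime import datetime

# Invented "digital footprints": (user, ISO timestamp) interaction events
events = [
    ("anna", "2013-02-08T09:15:00"), ("anna", "2013-02-08T09:40:00"),
    ("bruno", "2013-02-08T13:05:00"), ("anna", "2013-02-09T09:20:00"),
    ("bruno", "2013-02-09T13:30:00"), ("bruno", "2013-02-09T22:10:00"),
]

# Aggregate the traces into an hourly activity profile per user
profiles = {}
for user, ts in events:
    hour = datetime.fromisoformat(ts).hour
    profiles.setdefault(user, Counter())[hour] += 1

for user, hours in sorted(profiles.items()):
    peak_hour, n = hours.most_common(1)[0]
    print(f"{user} is most active around {peak_hour}:00 ({n} events)")
```

Real computational social science pipelines differ mainly in scale: the same aggregation logic runs over millions of records in a cloud database rather than six tuples in memory.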
If social scientists open their minds to this new horizon, their impact on society could be at least as revolutionary as the one natural scientists have produced in the last two centuries. The emergence of computational social science will allow scientists not only to predict many social phenomena, but also to unify levels of analysis that have until now been addressed separately, e.g. the neuro-psychological and the psycho-social levels.
At the same time, the transformative potential of this emerging science also requires careful reflection on its ethical implications, particularly the protection of participants’ privacy.
18:29 Posted in Research tools | Permalink | Comments (0)
IEEE Special Issue: Technologies for Affect and Wellbeing
Technologies for Affect and Wellbeing – Special Issue of the IEEE Transactions on Affective Computing.
Guest Editors
- Rafael A. Calvo (The University of Sydney)
- Giuseppe Riva (ICE-NET Lab, Università Cattolica del Sacro Cuore, Milan, Italy)
- Christine Lisetti (Florida International University)
Background and Motivation
There is an increased interest in using computer interaction to detect and support users’ physical and psychological wellbeing. Computers can afford multiple forms of transformational experiences. Some of these experiences can be purposely designed to, for example, detect and regulate students’ affective states to improve aspects of their learning experiences. They can also be used in computer-based psychological interventions that treat psychological illness or that preventively promote wellbeing, healthy lifestyles, and mental health.
The application domain, so far referred to as ‘positive computing’, ‘positive technologies’, and ‘positive design’, draws on ideas from positive psychology, particularly the extensive research on developing human strengths and wellbeing. It is closely linked to the HCI work on personal informatics, and the development of tools that help people learn more about themselves through reflection.
This special issue will focus on ideas, methods and case studies showing how affective computing can contribute to this goal. Articles should discuss how the information that computers collect about our behaviour, cognition and, particularly, affect can be used to further understand, nurture or develop wellbeing and human strengths: e.g. self-understanding, empathy, and intrinsic motivation toward wellbeing and healthy lifestyles.
Topics include, but are not limited to:
- Systems to detect or support positive emotions and human strengths, for example reflection, empathy, happiness, gratitude, self-understanding/interpersonal skills, emotional intelligence/emotion regulation, social intelligence/intrapersonal skills, and motivation.
- Using affect and motivation for physical and psychological health.
- Cyberpsychology for positive psychology and wellbeing
- HCI design strategies for support of wellbeing and human strengths
- Virtual Reality for support of wellbeing or human strengths
- Positive personal health informatics for health promotion
- Patient-centered technologies for healthy behaviour change
- Empathic intelligent virtual agents for lifestyle monitoring and behaviour change
- Mobile applications of affective computing for health and wellbeing
- Informatics technologies for patient empowerment
Timetable
- Call for Papers out: Feb 2013
- Submission Deadline: July 1st, 2013
- Notification of Acceptance: October 1st, 2013
- Final Manuscripts Due: December 1st, 2013
- Date of Publication: March or July 2014
Review process
The Transactions on Affective Computing Special Issue on “Affect and wellbeing” will consist of papers on techniques, methods, case studies and their evaluation. Some papers may survey various aspects of the topic, particularly in ways that bring the psychological, health and wellbeing, and technical literature together. The balance between these will be adjusted to maximize the impact of the special issue. All articles are expected to follow the standard review procedures for the IEEE TAC.
18:07 Posted in Call for papers, Positive Technology events | Permalink | Comments (0)
The Human Brain Project awarded $1.6 billion by the EU
At the end of January, the European Commission officially announced the selection of the Human Brain Project (HBP) as one of its two FET Flagship projects. Federating more than 80 European and international research institutions, the Human Brain Project is planned to last ten years (2013-2023), with an estimated cost of 1.19 billion euros.
The project is the first attempt to “reconstruct the brain piece by piece and build a virtual brain in a supercomputer”. Led by neuroscientist Henry Markram, the project was launched in 2005 as a joint research initiative between the Brain Mind Institute at the École Polytechnique Fédérale de Lausanne (EPFL) and the information technology giant IBM.
Using the impressive processing power of IBM’s Blue Gene/L supercomputer, the project reached its first milestone in December 2006, simulating a rat cortical column. As of July 2012, Henry Markram’s team has achieved the simulation of mesocircuits containing approximately 1 million neurons and 1 billion synapses (comparable with the number of nerve cells in a honey bee brain). The next step, planned for 2014, will be the modelling of a cellular rat brain, with 100 mesocircuits totalling a hundred million cells. Finally, the team plans to simulate a full human brain (86 billion neurons) by 2023.
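The scaling path quoted above can be checked with simple arithmetic (treating the 2012 simulation's totals as one mesocircuit-sized unit, as the milestones imply):

```python
# Figures quoted in the post (approximate)
neurons_per_unit = 1_000_000            # 2012 milestone: ~1 million neurons...
synapses_per_unit = 1_000_000_000       # ...with ~1 billion synapses
rat_brain = 100 * neurons_per_unit      # 100 mesocircuits, planned for 2014
human_brain = 86_000_000_000            # 86 billion neurons, the 2023 goal

synapses_per_neuron = synapses_per_unit // neurons_per_unit
print(synapses_per_neuron)       # -> 1000 synapses per simulated neuron
print(human_brain // rat_brain)  # -> 860: the human model is ~860x the rat model
```

The jump from the rat-scale model to a full human brain is thus nearly three orders of magnitude, which helps explain the project's ten-year horizon.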
Watch the video overview of the Human Brain Project
Nov 11, 2012
The applification of health
Thanks to the accelerated diffusion of smartphones, the number of mobile healthcare apps has been growing exponentially in the past few years. Applications now exist to help patients manage diabetes, share information with peers, and monitor mood, to name just a few examples.
Such “applification” of health is part of a larger trend called “mobile health” (or mHealth), which broadly refers to the provision of health-related services via wireless communications. Mobile health is a fast-growing market: according to a 2011 Pew Research report, 17 percent of mobile users were already using their phones to look up health and medical information, and Juniper estimated that 44 million health apps were downloaded in that same year.
The field of mHealth has received a great deal of attention by the scientific community over the past few years, as evidenced by the number of conferences, workshops and publications dedicated to this subject; international healthcare institutions and organizations are also taking mHealth seriously.
For example, the UK Department of Health recently launched the crowdsourcing project Maps and Apps, to support the use of existing mobile phone apps and health information maps, as well as to encourage people to put forward ideas for new ones. The initiative resulted in the collection of 500 health apps voted most popular by the public and health professionals, as well as a list of their ideas for new apps. At the time of writing, the top-rated app is Moodscope, an application that allows users to measure, track and record comments on their mood. Other popular apps include HealthUnlocked, an online support network that connects people, volunteers and professionals to help learn, share and give practical support to one another, and FoodWiz.co, an application created by a mother of children with food allergies, which allows users to scan the bar codes on food to instantly find out which allergens are present. An app to help patients manage diabetes could not be missing from the list: Diabetes UK Tracker allows the patient to enter measurements such as blood glucose, caloric intake and weight, which can be displayed as graphs and shared with doctors; the software also features an area where patients can annotate medical information, personal feelings and thoughts.
The astounding popularity of the Maps and Apps initiative suggests the beginning of a new era in medical informatics, yet this emerging vision is not without caveats. As recently emphasized by Niall Boyce in the June issue of The Lancet Technology, the main concern associated with the use of apps as self-management tools is the limited evidence of their effectiveness in improving health. Unlike other health interventions, mHealth apps have not been subjected to rigorous testing. A potential reason for the lack of randomized evaluations is that most of these apps reach consumers/patients directly, without passing through the traditional medical gatekeepers. However, as Boyce suggests, the availability of trial data would benefit not only patients but also app developers, who could bring more effective and reliable products to market. A further concern relates to the privacy and security of medical data. Although most smartphone-based medical applications apply state-of-the-art secure protocols, the wireless use of these devices opens up new vulnerabilities for patients and medical facilities. A recent bulletin issued by the U.S. Department of Homeland Security lists five of the top mobile medical device security risks:
- Insider: The most common ways employees steal data involves network transfer, be that email, remote access, or file transfer;
- Malware: These include keystroke loggers and Trojans, tailored to harvest easily accessible data once inside the network;
- Spearphishing: This highly customized technique involves an email-based attack carrying a malicious attachment disguised as coming from a legitimate source, and seeking specific information;
- Web: DHS lists silent redirection, obfuscated JavaScript and search engine optimization poisoning among the ways attackers penetrate a network and, ultimately, access an organization’s data;
- Lost equipment: A significant problem because it happens so frequently; even a smartphone in the wrong hands can be a gateway into a health entity’s network and records. And the more patient information is stored electronically, the greater the number of people potentially affected when equipment is lost or stolen.
In conclusion, the “applification of healthcare” is at once a great opportunity for patients and a great responsibility for medical professionals and developers. To exploit this opportunity while mitigating the risks, it is essential to put in place quality evaluation procedures that make it possible to monitor and optimize the effectiveness of these applications against evidence-based standards. For example, iMedicalApps provides independent reviews of mobile medical technology and applications by a team of physicians and medical students. Founded by Dr. Iltifat Husain, an emergency medicine resident at the Wake Forest University School of Medicine, iMedicalApps has been cited by the Cochrane Collaboration as a trusted, evidence-based Web 2.0 website.
More to explore:
Read the PVC report: Current and future state of mhealth (PDF FULL TEXT)
Watch the MobiHealthNews video report: What is mHealth?
23:27 Posted in Cybertherapy, Research tools, Self-Tracking, Wearable & mobile | Permalink | Comments (0)
Congenitally blind learn to see and read with soundscapes
Via KurzweilAI
Congenitally blind people have learned to “see” and describe objects, and even identify letters and words, by using a visual-to-auditory sensory-substitution algorithm and sensory substitution devices (SSDs), scientists at Hebrew University and in France have found.
SSDs are non-invasive sensory aids that provide visual information to the blind via their existing senses. For example, using a visual-to-auditory SSD in a clinical or everyday setting, users wear a miniature camera connected to a small computer (or smart phone) and stereo headphones.
The images are converted into “soundscapes” using an algorithm, allowing the user to listen to and then interpret the visual information coming from the camera. Blind participants using this device reach a level of visual acuity technically surpassing the World Health Organization (WHO) criterion for blindness.
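The post does not detail the conversion algorithm, but visual-to-auditory SSDs of this family typically sweep the image column by column, mapping vertical position to pitch and brightness to loudness. A minimal, purely illustrative sketch (all parameters below are assumptions, not the device's actual settings):

```python
import math

def image_to_soundscape(image, duration=1.0, rate=8000,
                        f_low=300.0, f_high=3000.0):
    """Convert a grayscale image (rows of 0..1 brightness) into a mono
    waveform: time sweeps left-to-right across columns, each row drives a
    sine whose pitch rises toward the top, and brightness sets loudness."""
    n_rows, n_cols = len(image), len(image[0])
    samples_per_col = int(duration * rate / n_cols)
    wave = []
    for col in range(n_cols):
        for s in range(samples_per_col):
            t = (col * samples_per_col + s) / rate
            val = 0.0
            for row in range(n_rows):
                # top row (row 0) maps to the highest frequency
                f = f_high - (f_high - f_low) * row / max(n_rows - 1, 1)
                val += image[row][col] * math.sin(2 * math.pi * f * t)
            wave.append(val / n_rows)  # normalize into [-1, 1]
    return wave

# A tiny 3x4 "image": a bright descending diagonal
img = [[1.0, 0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 1.0]]
sound = image_to_soundscape(img, duration=0.1)  # a falling pitch sweep
```

A trained listener learns to decode exactly such regularities: a bright diagonal becomes a recognizable pitch glide, a vertical bar a brief chord.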
Read the full story
20:39 Posted in Neurotechnology & neuroinformatics | Permalink | Comments (0)
Nov 02, 2012
Networked Flow: Towards an understanding of creative networks
Gaggioli A., Riva G., Milani L., Mazzoni E.
19:46 Posted in Creativity and computers, Social Media, Telepresence & virtual presence | Permalink | Comments (0)
Oct 27, 2012
Using Activity-Related Behavioural Features towards More Effective Automatic Stress Detection
Giakoumis D, Drosou A, Cipresso P, Tzovaras D, Hassapis G, Gaggioli A, Riva G.
PLoS One. 2012;7(9):e43571. doi: 10.1371/journal.pone.0043571. Epub 2012 Sep 19
This paper introduces activity-related behavioural features that can be automatically extracted from a computer system, with the aim of increasing the effectiveness of automatic stress detection. The proposed features are based on processing of appropriate video and accelerometer recordings taken from the monitored subjects. For the purposes of the present study, an experiment was conducted that utilized a stress-induction protocol based on the Stroop colour word test. Video, accelerometer and biosignal (Electrocardiogram and Galvanic Skin Response) recordings were collected from nineteen participants. Then, an explorative study was conducted by following a methodology mainly based on spatiotemporal descriptors (Motion History Images) that are extracted from video sequences. A large set of activity-related behavioural features, potentially useful for automatic stress detection, were proposed and examined. Experimental evaluation showed that several of these behavioural features significantly correlate to self-reported stress. Moreover, it was found that the use of the proposed features can significantly enhance the performance of typical automatic stress detection systems, commonly based on biosignal processing.
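For readers unfamiliar with Motion History Images: an MHI keeps, per pixel, a decaying record of how recently motion occurred there. The sketch below shows the standard update rule the method builds on; it is an illustration, not the paper's implementation, and the threshold and decay values are arbitrary.

```python
def update_mhi(mhi, prev_frame, frame, tau=10, thresh=0.1):
    """One Motion History Image step: pixels where the frame changed are
    set to tau; elsewhere the history decays by 1 toward zero."""
    rows, cols = len(frame), len(frame[0])
    return [[tau if abs(frame[r][c] - prev_frame[r][c]) > thresh
             else max(0, mhi[r][c] - 1)
             for c in range(cols)] for r in range(rows)]

# Toy 1x3 grayscale frames: only the middle pixel moves
mhi = [[0, 0, 0]]
f0, f1 = [[0.0, 0.0, 0.0]], [[0.0, 1.0, 0.0]]
mhi = update_mhi(mhi, f0, f1)   # motion detected -> [[0, 10, 0]]
mhi = update_mhi(mhi, f1, f1)   # no motion: history decays -> [[0, 9, 0]]
```

Statistics computed over such decaying motion maps (e.g. how much of the body moved, and how recently) are the kind of activity-related behavioural features the paper correlates with self-reported stress.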
Full paper available here
13:36 Posted in Emotional computing, Research tools, Wearable & mobile | Permalink | Comments (0)
DARPA Robotics Challenge
Via KurzweilAI
DARPA has announced the start of the next DARPA Robotics Challenge. This time, the goal is to develop ground robots that can perform complex tasks in "dangerous, degraded, human-engineered environments"; in other words, robots for humanitarian and disaster-relief operations. The robots must use standard human hand tools and vehicles to navigate a debris field, open doors, climb ladders, and break through a concrete wall. Most, but not all, of the robots will be humanoid in design.
The challenge is divided into two parts: a Virtual Robotics Challenge, scheduled for 10-24 June 2013 to test simulated robots, and the physical DARPA Robotics Challenge, scheduled for 21 December 2013. DARPA has adopted the free, open-source Gazebo simulator, which supports ROS. There are two competition "tracks": competitors in Track A will develop their own humanoid robot and control software, while competitors in Track B will develop control software that runs on a DARPA-supplied Atlas robot built by Boston Dynamics. University teams are already announcing their participation. Read on for more info about some of the teams, as well as some awesome photos and videos of the robots in action.
13:29 Posted in AI & robotics, Research institutions & funding opportunities | Permalink | Comments (0)
3D Projection Mapping
13:24 Posted in Augmented/mixed reality | Permalink | Comments (0)
Sep 03, 2012
Therapy in Virtual Environments - Clinical and Ethical Issues
Telemed J E Health. 2012 Jul 23;
Authors: Yellowlees PM, Holloway KM, Parish MB
Abstract. Background: As virtual reality and computer-assisted therapy strategies are increasingly implemented for the treatment of psychological disorders, ethical standards and guidelines must be considered. This study determined a set of ethical and legal guidelines for treatment of post-traumatic stress disorder (PTSD)/traumatic brain injury (TBI) in a virtual environment, incorporating the rights of an individual who is represented by an avatar. Materials and Methods: A comprehensive literature review was undertaken. An example of a case study of therapy in Second Life (a popular online virtual world developed by Linden Lab) was described. Results: Ethical and legal considerations regarding psychiatric treatment of PTSD/TBI in a virtual environment were examined. The following issues were described and discussed: authentication of providers and patients, informed consent, patient confidentiality, patient well-being, clinician competence (licensing and credentialing), training of providers, insurance for providers, the therapeutic environment, and emergencies. Ethical and legal guidelines relevant to these issues in a virtual environment were proposed. Conclusions: Ethical and legal issues in virtual environments are similar to those that occur in the in-person world. Individuals represented by an avatar have rights equivalent to those of the individual and should be treated as such.
20:04 Posted in Cybertherapy, Ethics of technology, Virtual worlds | Permalink | Comments (0)
Aug 04, 2012
Extending body space in immersive virtual reality: a very long arm illusion
PLoS One. 2012;7(7):e40867
Authors: Kilteni K, Normand JM, Sanchez-Vives MV, Slater M
Abstract. Recent studies have shown that a fake body part can be incorporated into human body representation through synchronous multisensory stimulation on the fake and corresponding real body part - the most famous example being the Rubber Hand Illusion. However, the extent to which gross asymmetries in the fake body can be assimilated remains unknown. Participants experienced, through a head-tracked stereo head-mounted display, a virtual body coincident with their real body. There were 5 conditions in a between-groups experiment, with 10 participants per condition. In all conditions there was visuo-motor congruence between the real and virtual dominant arm. In an Incongruent condition (I), where the virtual arm length was equal to the real length, there was visuo-tactile incongruence. In four Congruent conditions there was visuo-tactile congruence, but the virtual arm lengths were either equal to (C1), double (C2), triple (C3) or quadruple (C4) the real ones. Questionnaire scores and defensive withdrawal movements in response to a threat showed that the overall level of ownership was high in both C1 and I, with no significant difference between these conditions. Additionally, participants experienced ownership over the virtual arm up to three times the length of the real one, and less strongly at four times the length. The illusion did decline, however, with the length of the virtual arm. In the C2-C4 conditions, although a measure of proprioceptive drift correlated positively with virtual arm length, there was no correlation between the drift and ownership of the virtual arm, suggesting different underlying mechanisms for ownership and drift. Overall, these findings extend and enrich previous results showing that multisensory and sensorimotor information can reconstruct our perception of the body's shape, size and symmetry even when this is not consistent with normal body proportions.
20:03 Posted in Telepresence & virtual presence, Virtual worlds | Permalink | Comments (0)
The therapeutic relationship in e-therapy for mental health: a systematic review
J Med Internet Res. 2012;14(4):e110
Authors: Sucala M, Schnur JB, Constantino MJ, Miller SJ, Brackman EH, Montgomery GH
Abstract. BACKGROUND: E-therapy is defined as a licensed mental health care professional providing mental health services via e-mail, video conferencing, virtual reality technology, chat technology, or any combination of these. The use of e-therapy has been rapidly expanding in the last two decades, with growing evidence suggesting that the provision of mental health services over the Internet is both clinically efficacious and cost effective. Yet there are still unanswered concerns about e-therapy, including whether it is possible to develop a successful therapeutic relationship over the Internet in the absence of nonverbal cues. OBJECTIVE: Our objective in this study was to systematically review the therapeutic relationship in e-therapy. METHODS: We searched PubMed, PsycINFO, and CINAHL through August 2011. Information on study methods and results was abstracted independently by the authors using a standardized form.
RESULTS: Of the 840 reviewed studies, only 11 (1.3%) investigated the therapeutic relationship. The majority of the reviewed studies focused on the therapeutic alliance, a central element of the therapeutic relationship. Although the results do not allow firm conclusions, they indicate that e-therapy seems to be at least equivalent to face-to-face therapy in terms of therapeutic alliance, and that there is a relationship between the therapeutic alliance and e-therapy outcome. CONCLUSIONS: Overall, the current literature on the role of the therapeutic relationship in e-therapy is scant, and much more research is needed to understand the therapeutic relationship in online environments.
20:01 Posted in Cybertherapy | Permalink | Comments (0)
Visualising the Emotional Response to LONDON 2012
Reblogged from InfoAesthetics
The Emoto visualization by the illustrious Moritz Stefaner, FutureEverything, and Studio NAND tracks Twitter for themes related to the Games, analyzes the messages for content and emotional expressions, and visualizes the topics and tone of the conversation. You can find out which topics are most discussed, or see all current messages related to the Games.
Emoto uses an origami-like glyph to represent the emotive summary for each topic. The glyphs reshape and rearrange themselves based on real-time status. You can also view an overview of each day.
After the games, a set of data sculptures will be made to capture the collective emotion of the 2012 Olympics.
Kim Rees is a partner at Periscopic, a socially-responsible data visualization firm.
19:46 Posted in Information visualization | Permalink | Comments (0)
Is Little Printer the next (little) thing?
Yes! Definitely yes!
Little Printer lives in your home, bringing you news, puzzles and gossip from your friends. Use your smartphone to set up subscriptions and Little Printer will gather them together to create a timely, beautiful mini-newspaper.
For more see:
bergcloud.com/littleprinter/
19:34 Posted in Blue sky | Permalink | Comments (0)
The Age of ‘Wearatronics’
Medgadget has an interesting article on the rise of ‘Wearatronics’, a new trend in which new materials and interconnects have made circuit assemblies flexible and, as a result, embeddable. Such flexible electronic arrays may be embedded into textiles in order to, for example, measure the wearer’s vital signs or even generate and store power.
According to GigaOm's research report "The wearable-computing market: a global analysis", the wearatronics market for health and fitness products alone is estimated to reach 170 million devices within the next five years.
In this video, Bloomberg's Sheila Dharmarajan reports on the outlook for wearable electronics on Bloomberg Television's "Bloomberg West." (Source: Bloomberg)
Also interesting is this TED talk by David Icke, who creates breathable, implantable microcomputers that conform to the human body and can be used for a variety of medical applications.
18:10 Posted in Future interfaces, Pervasive computing, Wearable & mobile | Permalink | Comments (0)
A Real-Time fMRI-Based Spelling Device Immediately Enabling Robust Motor-Independent Communication
Curr Biol. 2012 Jun 27;
Authors: Sorger B, Reithler J, Dahmen B, Goebel R
Abstract. Human communication depends entirely on the functional integrity of the neuromuscular system. This is devastatingly illustrated in clinical conditions such as the so-called locked-in syndrome (LIS) [1], in which severely motor-disabled patients become incapable of communicating naturally while being fully conscious and awake. For the last 20 years, research on motor-independent communication has focused on developing brain-computer interfaces (BCIs) that use neuroelectric signals for communication (e.g., [2-7]), and BCIs based on electroencephalography (EEG) have already been applied successfully to affected patients [8-11]. However, not all patients achieve proficiency in EEG-based BCI control [12]. Thus, more recently, hemodynamic brain signals have also been explored for BCI purposes [13-16]. Here, we introduce the first spelling device based on fMRI. By exploiting spatiotemporal characteristics of hemodynamic responses evoked by performing differently timed mental imagery tasks, our novel letter encoding technique translates any freely chosen answer (letter by letter) into reliable and differentiable single-trial fMRI signals. Most importantly, automated letter decoding in real time enables back-and-forth communication within a single scanning session. Because the suggested spelling device requires little effort and pretraining, it is immediately operational and has high potential for clinical applications, both for diagnostics and for establishing short-term communication with nonresponsive and severely motor-impaired patients.
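The combinatorial idea behind the letter encoding - each letter maps to a distinguishable combination of imagery task, onset delay, and duration - can be sketched schematically. The task names and timings below are hypothetical, not the paper's actual parameters:

```python
from itertools import product

# Hypothetical encoding: 3 imagery tasks x 3 onset delays x 3 durations
# yields 27 distinguishable codes - enough for 26 letters plus a blank.
TASKS = ("motor_imagery", "mental_calculation", "inner_speech")
ONSETS_S = (0, 10, 20)        # seconds after trial start
DURATIONS_S = (10, 20, 30)    # seconds of sustained imagery

CODES = list(product(TASKS, ONSETS_S, DURATIONS_S))
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # 27 symbols
ENCODE = dict(zip(LETTERS, CODES))
DECODE = {code: letter for letter, code in ENCODE.items()}

def spell(answer):
    """Patient side: one (task, onset, duration) trial per letter."""
    return [ENCODE[ch] for ch in answer]

def read_back(codes):
    """Decoder side: recover letters from classified trial codes."""
    return "".join(DECODE[c] for c in codes)
```

In the real system the decoder's input is not the code itself but the single-trial fMRI response, whose spatial pattern identifies the task and whose temporal profile identifies onset and duration; classifying that response recovers the code, and the lookup recovers the letter.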
17:55 | Permalink | Comments (0)