

Aug 07, 2013

What Color Is Your Night Light? It May Affect Your Mood

When it comes to some of the health hazards of light at night, a new study suggests that the color of the light can make a big difference.

Read full story on Science Daily

Phubbing: the war against anti-social phone use

Via Textually.org


Don't you just hate it when someone snubs you by looking at their phone instead of paying attention? The Stop Phubbing campaign group certainly does. The Guardian reports.

From the list of "Disturbing Phubbing Stats" on their website, a few of note:

-- If phubbing were a plague it would decimate six Chinas

-- 97% of people claim their food tasted worse while being a victim of phubbing

-- 92% of repeat phubbers go on to become politicians

So it's really just a joke site? Well, a joke site with a serious message about our growing estrangement from our fellow human beings. But mostly a joke site, yes.

Read full article.

The Computer Game That Helps Therapists Chat to Adolescents With Mental Health Problems

Via MIT Technology Review

Adolescents with mental health problems are particularly hard for therapists to engage. But a new computer game is providing a healthy conduit for effective communication between them.

Read the full story on MIT Technology Review

OpenGlass Project Makes Google Glass Useful for the Visually Impaired

Re-blogged from Medgadget

Google Glass may have been developed to transform the way people see the world around them, but thanks to Dapper Vision’s OpenGlass Project, one doesn’t even need to be able to see to experience the Silicon Valley tech giant’s new spectacles.

By harnessing the power of Google Glass' built-in camera, the cloud, and the "hive-mind", the system lets visually impaired users know what's in front of them. It consists of two components: Question-Answer uploads pictures taken by the user to Amazon's Mechanical Turk and Twitter for the public to help identify, and Memento takes video from Glass and uses image matching to identify objects from a database created with the help of sighted users. Information about what the Glass wearer "sees" is read aloud to the user via bone conduction speakers.
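For readers who want a concrete picture of how the two paths fit together, here is a minimal Python sketch of a "describe what I see" loop that tries a locally built Memento-style database first and falls back to a crowd query. The names, the signature-overlap matching, and the threshold are illustrative assumptions, not Dapper Vision's actual code.

```python
# Hypothetical sketch of a two-path "describe what I see" pipeline, loosely
# modelled on the OpenGlass description above. Names and matching strategy
# are illustrative, not Dapper Vision's actual implementation.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class MementoEntry:
    label: str       # description recorded by a sighted helper
    signature: set   # precomputed image signature (e.g. hashed features)

def memento_lookup(signature: set, database: list[MementoEntry],
                   min_overlap: float = 0.6) -> Optional[str]:
    """Return the stored description whose signature best matches the query."""
    best, best_score = None, 0.0
    for entry in database:
        overlap = len(signature & entry.signature) / max(len(signature | entry.signature), 1)
        if overlap > best_score:
            best, best_score = entry.label, overlap
    return best if best_score >= min_overlap else None

def describe(signature: set, database: list[MementoEntry],
             ask_crowd: Callable[[set], str]) -> str:
    """Try the local Memento database first; fall back to the crowd."""
    answer = memento_lookup(signature, database)
    return answer if answer is not None else ask_crowd(signature)

# Usage: a stubbed crowd query standing in for Mechanical Turk / Twitter.
db = [MementoEntry("coffee mug on the desk", {"mug", "desk", "handle"})]
print(describe({"mug", "desk"}, db, ask_crowd=lambda s: "crowd answer pending"))
```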

Here’s a video that explains more about how it all works:

Pupil responses allow communication in locked-in syndrome patients

Pupil responses allow communication in locked-in syndrome patients.

Josef Stoll et al., Current Biology, Volume 23, Issue 15, R647-R648, 5 August 2013

For patients with severe motor disabilities, a robust means of communication is a crucial factor for their well-being. We report here that pupil size measured by a bedside camera can be used to communicate with patients with locked-in syndrome. With the same protocol we demonstrate command-following for a patient in a minimally conscious state, suggesting its potential as a diagnostic tool for patients whose state of consciousness is in question. Importantly, neither training nor individual adjustment of our system's decoding parameters was required for successful decoding of patients' responses.
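The decoding idea is simple enough to sketch: the patient performs mental arithmetic (which dilates the pupil) during the response interval that matches their answer, and the decoder compares pupil size across the two intervals. The toy below illustrates that comparison; the windows, statistic, and simulated trace are assumptions, not the authors' analysis code.

```python
# Minimal sketch of pupil-based yes/no decoding: compare dilation in the
# "yes" window with dilation in the "no" window. Illustrative only.
import numpy as np

def decode_answer(pupil_trace: np.ndarray, yes_window: slice, no_window: slice) -> str:
    """Return 'yes' or 'no' depending on which window shows larger dilation."""
    yes_dilation = pupil_trace[yes_window].mean() - pupil_trace[yes_window][0]
    no_dilation = pupil_trace[no_window].mean() - pupil_trace[no_window][0]
    return "yes" if yes_dilation > no_dilation else "no"

# Simulated trace: flat baseline, with dilation only during the "yes" window.
t = np.arange(600)                                   # e.g. 10 s at 60 Hz
trace = 3.0 + 0.002 * np.where((t > 100) & (t < 300), t - 100, 0)
print(decode_answer(trace, yes_window=slice(100, 300), no_window=slice(350, 550)))
```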

Paper full text PDF


Image credit: Flickr user Beth77

German test of the controllability of motor imagery in older adults

German test of the controllability of motor imagery in older adults.

Z Gerontol Geriatr. 2013 Aug 3;

Authors: Schott N

Abstract. After a person is instructed to imagine a certain movement, there is no way to verify whether the person is actually doing what they were asked to do. The purpose of this study was to validate the German Test of the Controllability of Motor Imagery ("Tests zur Kontrollierbarkeit von Bewegungsvorstellungen", TKBV). A total sample of 102 men [mean 55.6, standard deviation (SD) 25.1] and 93 women (mean 59.2, SD 24.0) ranging in age from 18 to 88 years completed the TKBV. Two conditions were performed: a recognition (REC) and a regeneration (REG) test. In both conditions the participants had to follow six consecutive instructions. They were asked to imagine the posture of their own body, moving only one body part (head, arms, legs, trunk) per instruction. On the regeneration test, the participants had to actually produce the final position. On the recognition test, they were required to select, from among five pictures, the one that matched their imagined posture. Exploratory factor analysis supported the proposed two-dimensional solution: (1) the ability to control one's body scheme, and (2) the ability to transform a visual image. Cronbach's α for the two dimensions of the TKBV was 0.89 and 0.73, respectively. The scales correlated only weakly with convergent measures assessing mental chronometry (Timed-Up-and-Go test, rREG = -0.33, rREC = -0.31) and the vividness of motor imagery (MIQvis, rREG = 0.14, rREC = 0.14; MIQkin, rREG = 0.11, rREC = 0.13). Criterion validity of the TKBV was established by statistically significant correlations of the subscales with the Corsi block tapping test (BTT, rREG = 0.45, rREC = 0.38) and with physical activity (rREG = 0.50, rREC = 0.36). The TKBV is a valid instrument to assess motor imagery and is thus an important and helpful tool in neurologic and orthopedic rehabilitation.
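As a reminder of what the reported reliability figures mean, here is the standard Cronbach's α formula applied to a toy subjects-by-items matrix (made-up numbers, not the study's data).

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / total variance).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: subjects x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

scores = np.array([[3, 4, 3], [2, 2, 3], [4, 5, 5], [1, 2, 1]])   # toy data
print(round(cronbach_alpha(scores), 2))
```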

Detecting delay in visual feedback of an action as a monitor of self recognition

Detecting delay in visual feedback of an action as a monitor of self recognition.

Exp Brain Res. 2012 Oct;222(4):389-97

Authors: Hoover AE, Harris LR

Abstract. How do we distinguish "self" from "other"? The correlation between willing an action and seeing it occur is an important cue. We exploited the fact that this correlation needs to occur within a restricted temporal window in order to obtain a quantitative assessment of when a body part is identified as "self". We measured the threshold and sensitivity (d') for detecting a delay between movements of the finger (of both the dominant and non-dominant hands) and visual feedback as seen from four visual perspectives (the natural view, and mirror-reversed and/or inverted views). Each trial consisted of one presentation with minimum delay and another with a delay of between 33 and 150 ms. Participants indicated which presentation contained the delayed view. We varied the amount of efference copy available for this task by comparing performance for discrete movements and continuous movements; discrete movements are associated with a stronger efference copy. Sensitivity for detecting asynchrony between visual and proprioceptive information was significantly higher when movements were viewed from a "plausible" self perspective than when the view was reversed or inverted. Further, we found differences in performance between dominant and non-dominant hand finger movements across the continuous and discrete movement conditions. Performance depended on the viewpoint from which the visual feedback was presented and on the efferent component, such that optimal performance was obtained when the presentation was in the normal, natural orientation and clear efferent information was available. Variations in sensitivity to visual/non-visual temporal incongruence with the viewpoint in which a movement is seen may help determine the arrangement of the underlying visual representation of the body.
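The sensitivity measure d' reported here comes from a two-interval forced-choice design (pick which presentation was delayed). A common convention for that design is d' = sqrt(2) * z(proportion correct); the snippet below just illustrates that conversion and is not the authors' analysis script.

```python
# Converting proportion correct in a 2AFC task into d' (standard convention).
from math import sqrt
from statistics import NormalDist

def dprime_2afc(proportion_correct: float) -> float:
    """Sensitivity for a two-alternative forced-choice task."""
    return sqrt(2) * NormalDist().inv_cdf(proportion_correct)

print(round(dprime_2afc(0.75), 2))   # roughly 0.95 for 75% correct
```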

First Google Glass use for real-time location of where multiple viewers are looking

Hive-mind solves tasks using Google Glass ant game

Re-blogged from New Scientist

Glass could soon be used for more than just snapping pics of your lunchtime sandwich. A new game will connect Glass wearers to a virtual ant colony vying for prizes by solving real-world problems that vex traditional crowdsourcing efforts.

Crowdsourcing is most famous for collaborative projects like Wikipedia and "games with a purpose" like FoldIt, which turns the calculations involved in protein folding into an online game. All require users to log in to a specific website on their PC.

Now Daniel Estrada of the University of Illinois in Urbana-Champaign and Jonathan Lawhead of Columbia University in New York are seeking to bring crowdsourcing to Google's wearable computer, Glass.

The pair have designed a game called Swarm! that puts a Glass wearer in the role of an ant in a colony. Similar to the pheromone trails laid down by ants, players leave virtual trails on a map as they move about. These behave like real ant trails, fading away with time unless reinforced by other people travelling the same route. Such augmented reality games already exist – Google's Ingress, for one – but in Swarm! the tasks have real-world applications.

Swarm! players seek out virtual resources to benefit their colony, such as food, and must avoid crossing the trails of other colony members. They can also monopolise a resource pool by taking photos of its real-world location.

To gain further resources for their colony, players can carry out real-world tasks. For example, if the developers wanted to create a map of the locations of every power outlet in an airport, they could reward players with virtual food for every photo of a socket they took. The photos and location data recorded by Glass could then be used to generate a map that anyone could use. Such problems can only be solved by people out in the physical world, yet the economic incentives aren't strong enough for, say, the airport owner to provide such a map.

Estrada and Lawhead hope that by turning tasks such as these into games, Swarm! will capture the group intelligence ant colonies exhibit when they find the most efficient paths between food sources and the home nest.
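The trail mechanic described above is essentially a pheromone map: each cell's value decays over time and is reinforced whenever a player passes through it. Here is a minimal sketch, with made-up decay and deposit constants.

```python
# Toy pheromone-trail model: trails decay each time step unless players
# reinforce them by travelling the same route. Constants are assumptions.
from collections import defaultdict

DECAY = 0.95     # fraction of pheromone kept per time step (assumed)
DEPOSIT = 1.0    # pheromone added when a player crosses a cell (assumed)

trails: dict[tuple[int, int], float] = defaultdict(float)

def step(visited_cells: list[tuple[int, int]]) -> None:
    """Advance one time step: decay every trail, then reinforce visited cells."""
    for cell in list(trails):
        trails[cell] *= DECAY
        if trails[cell] < 1e-3:      # forget trails nobody reinforces
            del trails[cell]
    for cell in visited_cells:
        trails[cell] += DEPOSIT

step([(10, 4), (10, 5)])             # one player walks two cells
step([(10, 5)])                      # the second cell gets reinforced
print(dict(trails))                  # (10, 5) now carries the stronger trail
```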

Read full story

International Conference on Physiological Computing Systems

7-9 January 2014, Lisbon, Portugal

http://www.phycs.org/

Physiological data in its different dimensions, whether bioelectrical, biomechanical, biochemical, or biophysical, collected through specialized biomedical devices, video and image capture, or other sources, is opening new frontiers in human-computer interaction, in what can be defined as Physiological Computing. PhyCS is the annual meeting of the physiological interaction and computing community and serves as the main international forum for engineers, computer scientists, and health professionals interested in outstanding research and development that bridges the gap between physiological data handling and human-computer interaction.


Regular Paper Submission Extension: September 15, 2013
Regular Paper Authors Notification: October 23, 2013
Regular Paper Camera Ready and Registration: November 5, 2013

Jul 30, 2013

How it feels (through Google Glass)


Jul 23, 2013

A mobile data collection platform for mental health research

A mobile data collection platform for mental health research

Personal and Ubiquitous Computing (2013), Volume 17, Issue 2, pp 241-251.
 
A. Gaggioli, G. Pioggia, G. Tartarisco, G. Baldus, D. Corda, P. Cipresso, G. Riva
 
Ubiquitous computing technologies offer exciting new possibilities for monitoring and analyzing users' experience in real time. In this paper, we describe the design and development of Psychlog, a mobile phone platform designed to collect users' psychological, physiological, and activity information for mental health research. The tool allows self-report questionnaires to be administered at specific or random times during the day. The system also collects heart rate and activity information from a wireless electrocardiogram equipped with a three-axial accelerometer. By combining self-reports with heart rate and activity data, the application makes it possible to investigate the relationship between psychological, physiological, and behavioral variables, as well as to monitor their fluctuations over time. The software runs on the Windows Mobile operating system and is available as open source (http://sourceforge.net/projects/psychlog/).
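To make the combination of the two data streams concrete, here is a minimal sketch that pairs each self-report with the mean heart rate recorded in a window around it. Field names, timestamps, and the window length are assumptions, not Psychlog's actual data format.

```python
# Pairing self-report answers with physiological data from the same period.
# All values and field names are illustrative.
from datetime import datetime, timedelta

hr_samples = [
    (datetime(2013, 7, 23, 10, 0, 0), 72),
    (datetime(2013, 7, 23, 10, 0, 30), 75),
    (datetime(2013, 7, 23, 10, 5, 0), 90),
]
self_reports = [(datetime(2013, 7, 23, 10, 1, 0), {"stress": 2})]

def mean_hr_around(t: datetime, window: timedelta = timedelta(minutes=2)) -> float:
    """Mean heart rate within +/- window of a self-report timestamp."""
    values = [hr for ts, hr in hr_samples if abs(ts - t) <= window]
    return sum(values) / len(values) if values else float("nan")

for t, answers in self_reports:
    print(answers, "mean HR:", mean_hr_around(t))
```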
 
Read the full text.


Augmented Reality - Projection Mapping

Augmented Reality - Projection Mapping from Dane Luttik on Vimeo.

The beginning of infinity

A little old video but still inspiring...

THE BEGINNING OF INFINITY from Jason Silva on Vimeo.


Neural Reorganization Accompanying Upper Limb Motor Rehabilitation from Stroke with Virtual Reality-Based Gesture Therapy

Neural Reorganization Accompanying Upper Limb Motor Rehabilitation from Stroke with Virtual Reality-Based Gesture Therapy.

Top Stroke Rehabil. 2013 May-June 1;20(3):197-209

Authors: Orihuela-Espina F, Fernández Del Castillo I, Palafox L, Pasaye E, Sánchez-Villavicencio I, Leder R, Franco JH, Sucar LE

Background: Gesture Therapy is a virtual reality-based upper limb rehabilitation therapy for stroke survivors. It promotes motor rehabilitation by challenging patients with simple computer games representative of daily activities for self-support. This therapy has demonstrated clinical value, but the underlying functional neural reorganization responsible for the behavioral improvements is not yet known. Objective: We sought to quantify the occurrence of neural reorganization strategies that underlie motor improvements as they occur during the practice of Gesture Therapy and to identify those strategies linked to a better prognosis. Methods: Functional magnetic resonance imaging (fMRI) neuroscans were longitudinally collected at 4 time points during Gesture Therapy administration to 8 patients. Behavioral improvements were monitored using the Fugl-Meyer scale and Motricity Index. Activation loci were anatomically labelled and translated to reorganization strategies. Strategies were quantified by counting the number of active clusters in brain regions tied to them. Results: All patients demonstrated significant behavioral improvements (P < .05). Contralesional activation of the unaffected motor cortex, cerebellar recruitment, and compensatory prefrontal cortex activation were the most prominent strategies evoked. A strong and significant correlation between motor dexterity upon commencing therapy and total recruited activity was found (r2 = 0.80; P < .05), and overall brain activity during therapy was inversely related to normalized behavioral improvements (r2 = 0.64; P < .05). Conclusions: Prefrontal cortex and cerebellar activity are the driving forces of the recovery associated with Gesture Therapy. The relation between behavioral and brain changes suggests that those with stronger impairment benefit the most from this paradigm.

Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes

Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes.

Proc Natl Acad Sci USA. 2013 Jul 15;

Authors: Banakou D, Groten R, Slater M

Abstract. An illusory sensation of ownership over a surrogate limb or whole body can be induced through specific forms of multisensory stimulation, such as synchronous visuotactile tapping on the hidden real and visible rubber hand in the rubber hand illusion. Such methods have been used to induce ownership over a manikin and a virtual body that substitute the real body, as seen from first-person perspective, through a head-mounted display. However, the perceptual and behavioral consequences of such transformed body ownership have hardly been explored. In Exp. 1, immersive virtual reality was used to embody 30 adults as a 4-y-old child (condition C), and as an adult body scaled to the same height as the child (condition A), experienced from the first-person perspective, and with virtual and real body movements synchronized. The result was a strong body-ownership illusion equally for C and A. Moreover there was an overestimation of the sizes of objects compared with a nonembodied baseline, which was significantly greater for C compared with A. An implicit association test showed that C resulted in significantly faster reaction times for the classification of self with child-like compared with adult-like attributes. Exp. 2 with an additional 16 participants extinguished the ownership illusion by using visuomotor asynchrony, with all else equal. The size-estimation and implicit association test differences between C and A were also extinguished. We conclude that there are perceptual and probably behavioral correlates of body-ownership illusions that occur as a function of the type of body in which embodiment occurs.

The Size of Electronic Consumer Devices Affects Our Behavior

Re-blogged from Textually.org


What are our devices doing to us? We already know they're snuffing out our creativity - but new research suggests they're also stifling our drive. How so? Because fussing with them for an average of 58 minutes a day leads to bad posture, FastCompany reports.

 

The body posture inherent in operating everyday gadgets affects not only your back, but your demeanor, reports a new experimental study entitled iPosture: The Size of Electronic Consumer Devices Affects Our Behavior. It turns out that working on a relatively large machine (like a desktop computer) causes users to act more assertively than working on a small one (like an iPad).

That poor posture, Harvard Business School researchers Maarten Bos and Amy Cuddy find, undermines our assertiveness.

Read more.

SENSUS Transcutaneous Pain Management System Approved for Use During Sleep

Via Medgadget

NeuroMetrix, out of Waltham, MA, received FDA clearance for its SENSUS Pain Management System to be used by patients during sleep. This is the first transcutaneous electrical nerve stimulation system to receive a sleep indication from the FDA for pain control.

The device is designed for use by diabetics and others with chronic pain in the legs and feet. It’s worn around one or both legs and delivers an electrical current to disrupt pain signals being sent up to the brain.


Digital Assistance for Sign-Language Users

Via MedGadget

From Microsoft:

"In this project—which is being shown during the DemoFest portion of Faculty Summit 2013, which brings more than 400 academic researchers to Microsoft headquarters to share insight into impactful research—the hand tracking leads to a process of 3-D motion-trajectory alignment and matching for individual words in sign language. The words are generated via hand tracking by theKinect for Windows software and then normalized, and matching scores are computed to identify the most relevant candidates when a signed word is analyzed.

The algorithm for this 3-D trajectory matching, in turn, has enabled the construction of a system for sign-language recognition and translation, consisting of two modes. The first, Translation Mode, translates sign language into text or speech. The technology currently supports American sign language but has potential for all varieties of sign language."
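The "motion-trajectory alignment and matching" step is the kind of problem commonly handled with dynamic time warping (DTW). The sketch below is a generic DTW matcher over 3-D hand trajectories with made-up word templates; it illustrates the general idea rather than Microsoft's actual algorithm.

```python
# Generic dynamic time warping over 3-D trajectories, used here to pick the
# best-matching word template. Templates and observed data are made up.
import math

def dtw_distance(a: list[tuple[float, float, float]],
                 b: list[tuple[float, float, float]]) -> float:
    """Dynamic-time-warping distance between two 3-D trajectories."""
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = math.dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[len(a)][len(b)]

templates = {"hello": [(0, 0, 0), (1, 1, 0), (2, 1, 0)],
             "thanks": [(0, 0, 0), (0, 1, 1), (0, 2, 2)]}
observed = [(0, 0, 0), (0.9, 1.1, 0), (2.1, 1.0, 0.1)]
best = min(templates, key=lambda w: dtw_distance(observed, templates[w]))
print("best match:", best)
```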


 

Jul 18, 2013

How to see with your ears

Via: KurzweilAI.net

A participant wearing camera glasses and listening to the soundscape (credit: Alastair Haigh/Frontiers in Psychology)

A device that trains the brain to turn sounds into images could be used as an alternative to invasive treatment for blind and partially-sighted people, researchers at the University of Bath have found.

“The vOICe” is a visual-to-auditory sensory substitution device that encodes images taken by a camera worn by the user into “soundscapes” from which experienced users can extract information about their surroundings.

It helps blind people use sounds to build an image in their minds of the things around them.
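The vOICe's mapping, as usually described, scans the image column by column from left to right, with vertical position mapped to pitch and pixel brightness to loudness. The sketch below turns a tiny grayscale image into (time, frequency, amplitude) triples; the frequency range and scan rate are illustrative, not the device's exact parameters.

```python
# Illustrative image-to-soundscape encoder: left-to-right column scan,
# row height -> pitch, brightness -> loudness. Parameters are assumptions.
import numpy as np

def image_to_soundscape(image: np.ndarray, scan_seconds: float = 1.0,
                        f_low: float = 500.0, f_high: float = 5000.0):
    """image: rows x cols grayscale array in [0, 1], row 0 = top of image."""
    rows, cols = image.shape
    freqs = np.geomspace(f_high, f_low, rows)     # top rows -> higher pitch
    events = []
    for c in range(cols):                         # left-to-right scan
        t = scan_seconds * c / cols
        for r in range(rows):
            if image[r, c] > 0.05:                # skip near-black pixels
                events.append((t, float(freqs[r]), float(image[r, c])))
    return events

tiny = np.zeros((4, 4))
tiny[0, 3] = 1.0                                  # bright dot, top-right corner
print(image_to_soundscape(tiny))                  # late in the scan, high pitch
```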

A research team, led by Dr Michael Proulx, from the University’s Department of Psychology, looked at how blindfolded sighted participants would do on an eye test using the device.

Read full story