
Oct 16, 2014

TIME: Fear, Misinformation, and Social Media Complicate Ebola Fight


Based on Facebook and Twitter chatter, it can seem like Ebola is everywhere. Following the first diagnosis of an Ebola case in the United States on Sept. 30, mentions of the virus on Twitter leapt from about 100 per minute to more than 6,000. Cautious health officials have tested potential cases in Newark, Miami Beach and Washington, D.C., sparking more worry. Though the patients all tested negative, some people are still tweeting as if the disease were running rampant in these cities. In Iowa the Department of Public Health was forced to issue a statement dispelling social media rumors that Ebola had arrived in the state. Meanwhile there has been a constant stream of posts claiming that Ebola can be spread through the air, water, or food, all of which is inaccurate.


Research scientists who study how we communicate on social networks have a name for these people: the “infected.”

Read full story

Oct 12, 2014

MIT Robotic Cheetah

MIT researchers have developed an algorithm for bounding that they've successfully implemented in a robotic cheetah. (Learn more: http://mitsha.re/1uHoltW)

I am not that impressed by the result though.


New Material May Help Us To Breathe Underwater

Scientists in Denmark announced they have developed a substance that absorbs, stores and releases huge amounts of oxygen.

The substance is so effective that just a few grains are capable of storing enough oxygen for a single human breath, while a bucketful of the new material could capture all the oxygen in an entire room.

The new material raises hopes that those requiring medical oxygen might soon be freed from carrying bulky tanks, while SCUBA divers might also be able to use it to absorb oxygen from the water, allowing them to stay submerged for significantly longer.
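As a rough sanity check of these claims, here is some back-of-envelope arithmetic; the room dimensions, bucket size and breath volume below are my own assumptions, not figures from the researchers.

```python
# Hypothetical figures for a quick plausibility check (not from the study).
room_volume_l = 4 * 5 * 2.5 * 1000    # a 50 m^3 room, in litres
o2_fraction = 0.21                    # oxygen's share of air at sea level
room_o2_l = room_volume_l * o2_fraction

bucket_volume_l = 10                  # a 10 L bucket of the material
breath_o2_l = 0.5 * o2_fraction       # oxygen in one ~0.5 L tidal breath

# Storage density implied by "a bucket captures a room's worth of oxygen":
o2_per_litre_material = room_o2_l / bucket_volume_l
print(f"O2 in the room: {room_o2_l:.0f} L")
print(f"Implied storage: {o2_per_litre_material:.0f} L of O2 per litre of material")
```

On these assumptions the material would need to hold roughly a thousand litres of oxygen per litre of material, which gives a sense of how strong the claim is.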

The substance was developed by tinkering with the molecular structure of cobalt, a silver-gray metallic element also found in meteoric iron.
Read More: University of Southern Denmark

New Scientist on new virtual reality headset Oculus Rift

From New Scientist

The latest prototype of virtual reality headset Oculus Rift allows you to step right into the movies you watch and the games you play

AN OLD man sits across the fire from me, telling a story. An inky dome of star-flecked sky arcs overhead as his words mingle with the crackling of the flames. I am entranced.

This isn't really happening, but it feels as if it is. This is a program called Storyteller – Fireside Tales that runs on the latest version of the Oculus Rift headset, unveiled last month. The audiobook software harnesses the headset's virtual reality capabilities to deepen immersion in the story. (See also "Plot bots")

Fireside Tales is just one example of a new kind of entertainment that delivers convincing true-to-life experiences. Soon films will get a similar treatment.

Movie company 8i, based in Wellington, New Zealand, plans to make films specifically for Oculus Rift. These will be more immersive than just mimicking a real screen in virtual reality because viewers will be able to step inside and explore the movie while they are watching it.

"We are able to synthesise photorealistic views in real-time from positions and directions that were not directly captured," says Eugene d'Eon, chief scientist at 8i. "[Viewers] can not only look around a recorded experience, but also walk or fly. You can re-watch something you love from many different perspectives."

The latest generation of games for Oculus Rift are more innovative, too. Black Hat Oculus is a two-player, cooperative game designed by Mark Sullivan and Adalberto Garza, both graduates of MIT's Game Lab. One headset is for the spy, sneaking through guarded buildings on missions where detection means death. The other player is the overseer, with a God-like view of the world, warning the spy of hidden traps, guards and passageways.

Deep immersion is now possible because the latest Oculus Rift prototype – known as Crescent Bay – finally delivers full positional tracking. This means the images that you see in the headset move in sync with your own movements.

This is the key to unlocking the potential of virtual reality, says Hannes Kaufmann at the Technical University of Vienna in Austria. The headset's high-definition display and wraparound field of view are nice additions, he says, but they aren't essential.
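In code terms, positional tracking means rebuilding the camera's view transform every frame from the tracked head pose. The sketch below is my own illustration of the underlying math, not Oculus SDK code.

```python
import numpy as np

def view_matrix(head_position, yaw):
    """World-to-eye transform from a tracked head position and yaw (radians)."""
    c, s = np.cos(yaw), np.sin(yaw)
    rotation = np.array([[  c, 0.0,   s],
                         [0.0, 1.0, 0.0],
                         [ -s, 0.0,   c]])   # head orientation (yaw about the y axis)
    view = np.eye(4)
    view[:3, :3] = rotation.T                             # inverse of an orthonormal rotation
    view[:3, 3] = -rotation.T @ np.asarray(head_position)  # inverse translation
    return view

# Leaning 10 cm to the right (+x) shifts the rendered world 10 cm to the left:
v = view_matrix([0.10, 0.0, 0.0], 0.0)
```

Because the translation column comes straight from the tracked position, small real head movements produce matching shifts in the image, which is what sells the sense of presence.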

The next step, says Kaufmann is to allow people to see their own virtual limbs, not just empty space, in the places where their brain expects them to be. That's why Beijing-based motion capture company Perception, which raised more than $500,000 on Kickstarter in September, is working on a full body suit that gathers body pose estimation and gives haptic feedback – a sense of touch – to the wearer. Software like Fireside Tales will then be able to take your body position into account.

In the future, humans will be able to direct live virtual experiences themselves, says Kaufmann. "Imagine you're meeting an alien in the virtual reality, and you want to shake hands. You could have a real person go in there and shake hands with you, but for you only the alien is present."

Oculus, which was bought by Facebook in July for $2 billion, has not yet announced when the headset will be available to buy.

This article appeared in print under the headline "Deep and meaningful"

Oct 06, 2014

Is the metaverse still alive?

In the last decade, online virtual worlds such as Second Life have become enormously popular. Since their appearance on the technology landscape, many analysts have regarded shared 3D virtual spaces as a disruptive innovation, one that would render the Web itself obsolete.

These high expectations attracted significant investments from large corporations such as IBM, which began building their own virtual spaces and offices in the metaverse. Then, when it became clear that these promises would not be kept, disillusionment set in and virtual worlds started losing their edge. This, however, is not a new phenomenon in high tech; it happens over and over again.

The US consulting company Gartner has developed a very popular model to describe this effect, called the “Hype Cycle”. The Hype Cycle provides a graphic representation of the maturity and adoption of technologies and applications.

It consists of five phases, which show how emerging technologies will evolve.

In the first, “technology trigger” phase, a new technology is launched and attracts the interest of the media. This is followed by the “peak of inflated expectations”, characterized by a proliferation of positive articles and comments, which generate overexpectations among users and stakeholders.

In the next phase, the “trough of disillusionment”, these exaggerated expectations are not fulfilled, resulting in a growing number of negative comments, generally followed by progressive indifference.

In the “slope of enlightenment”, the technology's potential for further applications becomes more broadly understood and an increasing number of companies start using it.

In the final, “plateau of productivity” stage, the emerging technology establishes itself as an effective tool and mainstream adoption takes off.
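The five phases trace a characteristic curve: a sharp early spike in visibility followed by a slow climb to a lower, stable plateau. The toy model below is my own parameterisation for illustration, not Gartner's formula: a Gaussian hype spike added to a logistic adoption curve.

```python
import math

def visibility(t, peak=1.5, spike_width=0.8, adoption_mid=6.0, plateau=0.7):
    """Toy hype-cycle curve: early hype spike + slower mainstream adoption."""
    hype = math.exp(-((t - peak) / spike_width) ** 2)         # inflated expectations
    adoption = plateau / (1 + math.exp(-(t - adoption_mid)))  # enlightenment -> plateau
    return hype + adoption

# Sampled at the five phases: trigger, peak, trough, slope, plateau.
for t in (0.0, 1.5, 3.5, 6.0, 10.0):
    print(f"t={t:4.1f}  visibility={visibility(t):.2f}")
```

The curve peaks early, dips into a trough, then settles at a plateau below the original peak, which is the shape the five phases describe.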

So what stage of the hype cycle are virtual worlds in now?

After the 2006-2007 peak, metaverses entered the downward phase of the hype cycle, progressively losing media interest, investments and users. Many high-tech analysts still consider this decline irreversible.

However, the negative outlook that drove shared virtual worlds into the trough of disillusionment may soon be reversed, thanks to the new interest in virtual reality raised by the Oculus Rift (recently acquired by Facebook for $2 billion), Sony’s Project Morpheus and similar immersive displays, which are still in the takeoff stage of the hype cycle.

Oculus chief scientist Michael Abrash makes no secret of the fact that his main ambition has always been to build a metaverse such as the one described in Neal Stephenson's 1992 cyberpunk novel Snow Crash. As he writes on the Oculus blog:

"Sometime in 1993 or 1994, I read Snow Crash and for the first time thought something like the Metaverse might be possible in my lifetime."

Furthermore, despite the negative comments and disappointed expectations, the metaverse keeps attracting new users: on its 10th anniversary, June 23rd, 2013, an infographic reported that Second Life had over 1 million monthly visitors worldwide, more than 400,000 new accounts per month, and 36 million registered users.

So will Michael Abrash’s metaverse dream come true? Even if one looks into the crystal ball of the hype cycle, the answer is not easily found.

Bionic Vision Australia’s Bionic Eye Gives New Sight to People Blinded by Retinitis Pigmentosa

Via Medgadget

Bionic Vision Australia, a collaboration of researchers working on a bionic eye, has announced that its prototype implant has completed a two-year trial in patients with advanced retinitis pigmentosa. Three patients with profound vision loss received 24-channel suprachoroidal electrode implants that caused no noticeable serious side effects. Moreover, though this was not formally part of the study, the patients were able to see more light and to distinguish shapes that were invisible to them prior to implantation. The newly gained vision allowed them to improve how they navigated around objects and how well they were able to spot items on a tabletop.

The next step is to try out the latest 44-channel device in a clinical trial slated for next year and then move on to a 98-channel system that is currently in development.

“This study is critically important to the continuation of our research efforts and the results exceeded all our expectations,” Professor Mark Hargreaves, Chair of the BVA board, said in a statement. “We have demonstrated clearly that our suprachoroidal implants are safe to insert surgically and cause no adverse events once in place. Significantly, we have also been able to observe that our device prototype was able to evoke meaningful visual perception in patients with profound visual loss.”

Here’s one of the study participants using the bionic eye:

First direct brain-to-brain communication between human subjects

Via KurzweilAI.net

An international team of neuroscientists and robotics engineers has demonstrated the first direct remote brain-to-brain communication between two humans located 5,000 miles apart and communicating via the Internet, as reported in a paper recently published in PLOS ONE (open access).

Emitter and receiver subjects with non-invasive devices supporting, respectively, a brain-computer interface (BCI), based on EEG changes, driven by motor imagery (left) and a computer-brain interface (CBI) based on the reception of phosphenes elicited by neuro-navigated TMS (right) (credit: Carles Grau et al./PLoS ONE)

In India, researchers encoded two words (“hola” and “ciao”) as binary strings and presented them as a series of cues on a computer monitor. They recorded the subject’s EEG signals as the subject was instructed to think about moving his feet (binary 0) or hands (binary 1). They then sent the recorded series of binary values in an email message to researchers in France, 5,000 miles away.

There, the binary strings were converted into a series of transcranial magnetic stimulation (TMS) pulses applied to a hotspot location in the right visual occipital cortex that either produced a phosphene (perceived flash of light) or not.
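The pipeline described above can be sketched end to end; the function names and the 8-bit character code below are mine for illustration (the paper used its own, more compact bit coding):

```python
def word_to_bits(word):
    """Encode each character as 8 bits, flattened into a list of 0s and 1s."""
    return [int(b) for ch in word for b in format(ord(ch), "08b")]

def bits_to_word(bits):
    """Decode 8-bit groups back into characters."""
    return "".join(chr(int("".join(map(str, bits[i:i + 8])), 2))
                   for i in range(0, len(bits), 8))

# Emitter side: each bit is produced by motor imagery (hands = 1, feet = 0),
# read out from the EEG. Receiver side: each bit becomes a TMS pulse that
# either elicits a phosphene (1) or not (0); reporting the phosphenes
# recovers the bit string, and decoding it recovers the word.
bits = word_to_bits("hola")
imagery_plan = ["hands" if b else "feet" for b in bits]
assert bits_to_word(bits) == "hola"
```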

“We wanted to find out if one could communicate directly between two people by reading out the brain activity from one person and injecting brain activity into the second person, and do so across great physical distances by leveraging existing communication pathways,” explains coauthor Alvaro Pascual-Leone, MD, PhD, Director of the Berenson-Allen Center for Noninvasive Brain Stimulation at Beth Israel Deaconess Medical Center (BIDMC) and Professor of Neurology at Harvard Medical School.

A team of researchers from Starlab Barcelona, Spain, and Axilum Robotics, Strasbourg, France, conducted the experiment. A second, similar experiment was conducted between individuals in Spain and France.

“We believe these experiments represent an important first step in exploring the feasibility of complementing or bypassing traditional language-based or other motor/PNS mediated means in interpersonal communication,” the researchers say in the paper.

“Although certainly limited in nature (e.g., the bit rates achieved in our experiments were modest even by current BCI (brain-computer interface) standards, mostly due to the dynamics of the precise CBI (computer-brain interface) implementation), these initial results suggest new research directions, including the non-invasive direct transmission of emotions and feelings or the possibility of sense synthesis in humans — that is, the direct interface of arbitrary sensors with the human brain using brain stimulation, as previously demonstrated in animals with invasive methods.

Brain-to-brain (B2B) communication system overview. On the left, the BCI subsystem is shown schematically, including electrodes over the motor cortex and the EEG amplifier/transmitter wireless box in the cap. Motor imagery of the feet codes the bit value 0, of the hands codes bit value 1. On the right, the CBI system is illustrated, highlighting the role of coil orientation for encoding the two bit values. Communication between the BCI and CBI components is mediated by the Internet. (Credit: Carles Grau et al./PLoS ONE)

“The proposed technology could be extended to support a bi-directional dialogue between two or more mind/brains (namely, by the integration of EEG and TMS systems in each subject). In addition, we speculate that future research could explore the use of closed mind-loops in which information associated to voluntary activity from a brain area or network is captured and, after adequate external processing, used to control other brain elements in the same subject. This approach could lead to conscious synthetically mediated modulation of phenomena best detected subjectively by the subject, including emotions, pain and psychotic, depressive or obsessive-compulsive thoughts.

“Finally, we anticipate that computers in the not-so-distant future will interact directly with the human brain in a fluent manner, supporting both computer- and brain-to-brain communication routinely. The widespread use of human brain-to-brain technologically mediated communication will create novel possibilities for human interrelation with broad social implications that will require new ethical and legislative responses.”

This work was partly supported by the EU FP7 FET Open HIVE project, the Starlab Kolmogorov project, and the Neurology Department of the Hospital de Bellvitge.


Google Glass can now display captions for hard-of-hearing users

Georgia Institute of Technology researchers have created a speech-to-text Android app for Google Glass that displays captions for hard-of-hearing persons when someone is talking to them in person.

“This system allows wearers like me to focus on the speaker’s lips and facial gestures,” said School of Interactive Computing Professor Jim Foley.

“If hard-of-hearing people understand the speech, the conversation can continue immediately without waiting for the caption. However, if I miss a word, I can glance at the transcription, get the word or two I need and get back into the conversation.”

Captioning on Glass displays captions for the hard-of-hearing (credit: Georgia Tech)

The “Captioning on Glass” app is now available to install from MyGlass. More information here.

Foley and the students are working with the Association of Late Deafened Adults in Atlanta to improve the program. An iPhone app is planned.

Sep 25, 2014

With Cyberith's Virtualizer, you can run around wearing an Oculus Rift

Sep 21, 2014

First hands-on: Crescent Bay demo

I just tested the Oculus Crescent Bay prototype at the Oculus Connect event in LA.

I still can't close my mouth.

The demo lasted about 10 minutes, during which several scenes were presented. The resolution and framerate are astounding, and you can turn completely around. This is the first time in my life I can really say: I was there.

I believe this is really the beginning of a new era for VR, and I am sure I won't sleep tonight thinking about the infinite possibilities and applications of this technology. And I don't think I am exaggerating; if anything, I am underestimating.



Aug 31, 2014

Information Entropy

Information – Entropy by Oliver Reichenstein

Will information technology affect our minds the same way the environment was affected by our analogue technology? Designers hold a key position in dealing with ever-increasing data pollution. We are mostly focused on speeding things up, on making sharing easier, faster, more accessible. But speed, usability and accessibility are not the main issues anymore. The main issues are not technological; they are structural, processual. What we lack is clarity, correctness, depth, time. Are there counter-techniques we can employ to turn data into information, information into knowledge, and knowledge into wisdom?

Oliver Reichenstein — Information Entropy (SmashingConf NYC 2014) from Smashing Magazine on Vimeo.

Self-regulation of human brain activity using simultaneous real-time fMRI and EEG neurofeedback

Self-regulation of human brain activity using simultaneous real-time fMRI and EEG neurofeedback.

Zotev V, Phillips R, Yuan H, Misaki M, Bodurka J. Neuroimage. 2014 Jan 15;85 Pt 3:985-95. doi: 10.1016/j.neuroimage.2013.04.126. Epub 2013 May 11.

Abstract. Neurofeedback is a promising approach for non-invasive modulation of human brain activity with applications for treatment of mental disorders and enhancement of brain performance. Neurofeedback techniques are commonly based on either electroencephalography (EEG) or real-time functional magnetic resonance imaging (rtfMRI). Advances in simultaneous EEG-fMRI have made it possible to combine the two approaches. Here we report the first implementation of simultaneous multimodal rtfMRI and EEG neurofeedback (rtfMRI-EEG-nf). It is based on a novel system for real-time integration of simultaneous rtfMRI and EEG data streams. We applied the rtfMRI-EEG-nf to training of emotional self-regulation in healthy subjects performing a positive emotion induction task based on retrieval of happy autobiographical memories. The participants were able to simultaneously regulate their BOLD fMRI activation in the left amygdala and frontal EEG power asymmetry in the high-beta band using the rtfMRI-EEG-nf. Our proof-of-concept results demonstrate the feasibility of simultaneous self-regulation of both hemodynamic (rtfMRI) and electrophysiological (EEG) activities of the human brain. They suggest potential applications of rtfMRI-EEG-nf in the development of novel cognitive neuroscience research paradigms and enhanced cognitive therapeutic approaches for major neuropsychiatric disorders, particularly depression.
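The core mechanism, combining two simultaneously measured signals into a single feedback value shown to the subject, can be sketched as follows; the z-scoring and equal weighting are my illustrative choices, not the paper's actual pipeline.

```python
import statistics

def zscore(value, baseline):
    """Normalise a raw measurement against a baseline recording."""
    return (value - statistics.mean(baseline)) / statistics.stdev(baseline)

def feedback(bold_roi, bold_baseline, eeg_asym, eeg_baseline, w=0.5):
    """Weighted sum of the two normalised signals, clipped to [-1, 1] for display."""
    combined = (w * zscore(bold_roi, bold_baseline)
                + (1 - w) * zscore(eeg_asym, eeg_baseline))
    return max(-1.0, min(1.0, combined))

# One update: BOLD in the target ROI at baseline level, EEG asymmetry one
# standard deviation above baseline -> moderately positive feedback.
print(feedback(1.0, [0.0, 1.0, 2.0], 2.0, [0.0, 1.0, 2.0]))  # prints 0.5
```

Clipping keeps the displayed bar in a fixed range, so a single noisy spike in either modality cannot saturate what the subject sees.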

Biofeedback-based training for stress management in daily hassles: an intervention study.

Biofeedback-based training for stress management in daily hassles: an intervention study.

Brain Behav. 2014 Jul;4(4):566-579

Authors: Kotozaki Y, Takeuchi H, Sekiguchi A, Yamamoto Y, Shinada T, Araki T, Takahashi K, Taki Y, Ogino T, Kiguchi M, Kawashima R

Abstract. BACKGROUND: The day-to-day causes of stress are called daily hassles. Daily hassles are correlated with ill health. Biofeedback (BF) is one of the tools used for acquiring stress-coping skills. However, the anatomical correlates of the effects of BF with long training periods remain unclear. In this study, we aimed to investigate this. METHODS: Participants were assigned randomly to two groups: the intervention group and the control group. Participants in the intervention group performed a biofeedback training (BFT) task (a combination task for heart rate and cerebral blood flow control) once a day, for about 5 min. The study outcomes included MRI, psychological tests (e.g., Positive and Negative Affect Schedule, Center for Epidemiologic Studies Depression Scale, and Brief Job Stress Questionnaire), and a stress marker (salivary cortisol levels) before (day 0) and after (day 28) the intervention. RESULTS: We observed significant improvements in the psychological test scores and salivary cortisol levels in the intervention group compared to the control group. Furthermore, voxel-based morphometric analysis revealed that compared to the control group, the intervention group had significantly increased regional gray matter (GM) volume in the right lateral orbitofrontal cortex, an anatomical cluster that includes mainly the left hippocampus, and the left subgenual anterior cingulate cortex. These GM regions are associated with the stress response and, in general, seem to be the most sensitive to the detrimental effects of stress. CONCLUSIONS: Our findings suggest that our BFT is effective against the GM structures vulnerable to stress.

Improving memory with transcranial magnetic stimulation

A new Northwestern Medicine study reports that stimulating a particular region of the brain with transcranial magnetic stimulation (TMS), a technique that non-invasively delivers electrical current to the brain using magnetic pulses, improves memory.

Aug 05, 2014

Life's a beach at work for Japanese company

A Japanese company has recreated a tropical beach in the very reception area they also use as their employee meeting space and staff lounge.

Aug 03, 2014

JIBO: The World's First Family Robot

Jibo is a new robot from MIT roboticist Cynthia Breazeal. It is designed to be a social robot that you interact with like it’s another person in your home. The 28-centimetre, 3-kilogram “sociable robot” snaps family photos, handles video calling and acts as a digital concierge. Connected wirelessly to the Internet, Jibo sifts through messages, organizes your itinerary and orders takeout.

What people say about Jibo:

"JIBO's potential extends far beyond engaging in casual conversation and completing daily tasks." - Katie Couric, Yahoo News

"A Robot with a Little Humanity" - John Markoff, New York Times

"JIBO isn't an appliance, it's a companion, one that can interact and react with its human owners in ways that delight instead of disturb." - Lance Ulanoff, Mashable

"Move over, Siri, the JIBO robot is coming" - Maggie Lake, CNN

"This Friendly Robot Could One Day Be Your Family's Personal Assistant" - Christina Bonnington, WIRED


Detecting awareness in patients with disorders of consciousness using a hybrid brain-computer interface

Detecting awareness in patients with disorders of consciousness using a hybrid brain-computer interface.

J Neural Eng. 2014 Aug 1;11(5):056007

Authors: Pan J, Xie Q, He Y, Wang F, Di H, Laureys S, Yu R, Li Y

Abstract. Objective. The bedside detection of potential awareness in patients with disorders of consciousness (DOC) currently relies only on behavioral observations and tests; however, the misdiagnosis rates in this patient group are historically relatively high. In this study, we proposed a visual hybrid brain-computer interface (BCI) combining P300 and steady-state evoked potential (SSVEP) responses to detect awareness in severely brain injured patients. Approach. Four healthy subjects, seven DOC patients who were in a vegetative state (VS, n = 4) or minimally conscious state (MCS, n = 3), and one locked-in syndrome (LIS) patient attempted a command-following experiment. In each experimental trial, two photos were presented to each patient; one was the patient's own photo, and the other photo was unfamiliar. The patients were instructed to focus on their own or the unfamiliar photos. The BCI system determined which photo the patient focused on with both P300 and SSVEP detections. Main results. Four healthy subjects, one of the 4 VS, one of the 3 MCS, and the LIS patient were able to selectively attend to their own or the unfamiliar photos (classification accuracy, 66-100%). Two additional patients (one VS and one MCS) failed to attend the unfamiliar photo (50-52%) but achieved significant accuracies for their own photo (64-68%). All other patients failed to show any significant response to commands (46-55%). Significance. Through the hybrid BCI system, command following was detected in four healthy subjects, two of 7 DOC patients, and one LIS patient. We suggest that the hybrid BCI system could be used as a supportive bedside tool to detect awareness in patients with DOC.
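A hybrid decision rule of this kind can be sketched simply: each candidate photo accumulates evidence from both detectors, and the photo with the higher combined score is taken as the attended target. The scores and equal weighting below are illustrative, not the authors' actual classifier.

```python
def pick_target(p300_scores, ssvep_scores, w=0.5):
    """Index of the stimulus with the highest combined P300 + SSVEP evidence."""
    combined = [w * p + (1 - w) * s
                for p, s in zip(p300_scores, ssvep_scores)]
    return max(range(len(combined)), key=combined.__getitem__)

def accuracy(trials):
    """Fraction of trials where the predicted target matches the instruction."""
    correct = sum(pick_target(p, s) == target for p, s, target in trials)
    return correct / len(trials)

# Two photos per trial: index 0 = patient's own photo, 1 = unfamiliar photo.
trials = [
    ([0.8, 0.3], [0.7, 0.4], 0),   # instructed to attend own photo
    ([0.2, 0.9], [0.1, 0.8], 1),   # instructed to attend unfamiliar photo
    ([0.6, 0.5], [0.4, 0.6], 1),   # detectors disagree; SSVEP tips the decision
]
print(accuracy(trials))
```

Averaging the two detectors is what makes the interface "hybrid": a trial where one response is ambiguous can still be decided by the other, which is the practical appeal for patients with weak or inconsistent responses.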

Mindfulness-Based Stress Reduction as a Stress Management Intervention for Healthy Individuals: A Systematic Review

Mindfulness-Based Stress Reduction as a Stress Management Intervention for Healthy Individuals: A Systematic Review.

J Evid Based Complementary Altern Med. 2014 Jul 22; Authors: Sharma M, Rush SE

Stress is a global public health problem with several negative health consequences, including anxiety, depression, cardiovascular disease, and suicide. Mindfulness-based stress reduction offers an effective way of reducing stress by combining mindfulness meditation and yoga in an 8-week training program. The purpose of this study was to look at studies from January 2009 to January 2014 and examine whether mindfulness-based stress reduction is a potentially viable method for managing stress. A systematic search from Medline, CINAHL, and Alt HealthWatch databases was conducted for all types of quantitative articles involving mindfulness-based stress reduction. A total of 17 articles met the inclusion criteria. Of the 17 studies, 16 demonstrated positive changes in psychological or physiological outcomes related to anxiety and/or stress. Despite the limitations of not all studies using randomized controlled design, having smaller sample sizes, and having different outcomes, mindfulness-based stress reduction appears to be a promising modality for stress management.

Modulation of functional network with real-time fMRI feedback training of right premotor cortex activity

Modulation of functional network with real-time fMRI feedback training of right premotor cortex activity.

Neuropsychologia. 2014 Jul 21;

Authors: Hui M, Zhang H, Ge R, Yao L, Long Z

Abstract. Although the neurofeedback of real-time fMRI can reportedly enable people to gain control of the activity in the premotor cortex (PMA) during motor imagery, it is unclear how neurofeedback training of the PMA affects the motor network engaged in motor execution (ME) and motor imagery (MI) tasks. In this study, we investigated the changes in the motor network engaged in both ME and MI tasks induced by real-time neurofeedback training of the right PMA. The neurofeedback training induced changes in activity of the ME-related motor network as well as alterations in the functional connectivity of both the ME-related and MI-related motor networks. In particular, the percent signal change of the right PMA in the last training run was found to be significantly correlated with the connectivity between the right PMA and the left posterior parietal lobe (PPL) during the pre-training MI run, the post-training MI run and the last training run. Moreover, the increase in tapping frequency was significantly correlated with the increase in connectivity between the right cerebellum and the primary motor area/primary sensory area (M1/S1) of the ME-related motor network after neurofeedback training. These findings show the importance of the connectivity between the right PMA and the left PPL of the MI network for the up-regulation of the right PMA, as well as the critical role of connectivity between the right cerebellum and M1/S1 of the ME network in improving behavioral performance.

Fly like a Birdly

Birdly is a full-body, fully immersive virtual reality flight simulator developed at the Zurich University of the Arts (ZHdK). With Birdly, you can embody an avian creature, the red kite, visualized through the Oculus Rift, as it soars over a 3D virtual San Francisco, the experience heightened by sonic, olfactory, and wind feedback.