

Apr 05, 2015

Are you concerned about AI?

Recently, a growing number of opinion leaders have started to point out the potential risks associated with the rapid advancement of Artificial Intelligence. This shared concern has led an interdisciplinary group of scientists, technologists and entrepreneurs to sign an open letter (http://futureoflife.org/misc/open_letter/), drafted by the Future of Life Institute, which focuses on priorities to be considered as Artificial Intelligence develops, as well as on the potential dangers posed by this paradigm.

The concern that machines may soon dominate humans, however, is not new: over the last thirty years, this topic has been widely represented in movies (e.g. Terminator, The Matrix), novels and various interactive arts. For example, Australia-based performance artist Stelarc has incorporated themes of cyborgization and other human-machine interfaces in his work, creating a number of installations that confront us with the question of where the human ends and technology begins.


In his well-received 2005 book “The Singularity Is Near: When Humans Transcend Biology” (Viking Penguin: New York), inventor and futurist Ray Kurzweil argued that Artificial Intelligence is one of the interacting forces that, together with genetics, robotics and nanotechnology, may soon converge to overcome our biological limitations and usher in the Singularity, during which human life will be irreversibly transformed. According to Kurzweil, this event will take place around 2045 and will probably represent the most extraordinary development in all of human history.

Ray Kurzweil’s vision of the future of intelligence is at the forefront of the transhumanist movement, which regards scientific and technological advances as a means to augment human physical and cognitive abilities, with the final aim of improving and even extending life. According to transhumanists, however, the choice of whether to benefit from such enhancement options should generally reside with the individual. The concept of transhumanism has been criticized, among others, by the influential American philosopher of technology Don Ihde, who pointed out that no technology will ever be completely internalized, since any technological enhancement implies a compromise. Ihde has distinguished four different relations that humans can have with technological artifacts. In particular, in the “embodiment relation” a technology becomes (quasi-)transparent, allowing a partial symbiosis of ourselves and the technology. When wearing eyeglasses, as Ihde exemplifies, I do not look “at” them but “through” them at the world: they are already assimilated into my body schema, withdrawing from my perceiving.

According to Ihde, a doubled desire arises from such embodiment relations: “It is the doubled desire that, on one side, is a wish for total transparency, total embodiment, for the technology to truly "become me." (...) But that is only one side of the desire. The other side is the desire to have the power, the transformation that the technology makes available. Only by using the technology is my bodily power enhanced and magnified by speed, through distance, or by any of the other ways in which technologies change my capacities. (…) The desire is, at best, contradictory. I want the transformation that the technology allows, but I want it in such a way that I am basically unaware of its presence. I want it in such a way that it becomes me. Such a desire both secretly rejects what technologies are and overlooks the transformational effects which are necessarily tied to human-technology relations. This illusory desire belongs equally to pro- and anti-technology interpretations of technology.” (Ihde, D. (1990). Technology and the Lifeworld: From Garden to Earth. Bloomington: Indiana University Press, p. 75).

Despite the different philosophical stances and assumptions about what our future relationship with technology will look like, there is little doubt that these questions will become more pressing and acute in the coming years. In my personal view, technology should not be viewed as a means to replace human life, but as an instrument for improving it. As William S. Haney II suggests in his book “Cyberculture, Cyborgs and Science Fiction: Consciousness and the Posthuman” (Rodopi: Amsterdam, 2006), “each person must choose for him or herself between the technological extension of physical experience through mind, body and world on the one hand, and the natural powers of human consciousness on the other as a means to realize their ultimate vision.” (ix, Preface).

From the Experience Economy to the Transformation Economy

In 1999, Joseph Pine and James Gilmore wrote a seminal book titled “The Experience Economy” (Harvard Business School Press, Boston, MA) that theorized the shift from a service-based economy to an experience-based economy. According to these authors, in the new experience economy the goal of a purchase is no longer to own a product (be it a good or a service), but to use it in order to enjoy a compelling experience. An experience, thus, is a whole new type of offering: in contrast to commodities, goods and services, it is designed to be as personal and memorable as possible. Just as in a theatrical production, companies stage meaningful events to engage customers in a memorable and personal way, offering activities that provide engaging and rewarding experiences.

Indeed, if one looks back at the past ten years, the concept of experience has become more central to several fields, including tourism, architecture, and – perhaps most relevant for this column – human-computer interaction, with the rise of “User Experience” (UX). The concept of UX was introduced by Donald Norman in a 1995 article published in the CHI proceedings (D. Norman, J. Miller, A. Henderson: What You See, Some of What's in the Future, And How We Go About Doing It: HI at Apple Computer. Proceedings of CHI 1995, Denver, Colorado, USA).

Norman argued that focusing exclusively on usability attributes (e.g. ease of use, efficacy, effectiveness) when designing an interactive product is not enough; one should take into account the whole experience of the user with the system, including the user’s emotional and contextual needs. Since then, the UX concept has assumed increasing importance in HCI.

As McCarthy and Wright emphasized in their book “Technology as Experience” (MIT Press, 2004): “In order to do justice to the wide range of influences that technology has in our lives, we should try to interpret the relationship between people and technology in terms of the felt life and the felt or emotional quality of action and interaction.” (p. 12).

However, according to Pine and Gilmore, experience may not be the last step of what they call the “Progression of Economic Value”. They speculated further into the future, identifying the “Transformation Economy” as the likely next phase. In their view, while experiences are essentially memorable events that stimulate the sensorial and emotional levels, transformations go much further: they are the result of a series of experiences staged by companies to guide customers in learning, taking action, and eventually achieving their aspirations and goals.

In Pine and Gilmore’s terms, an aspirant is the individual who seeks guidance for personal change (e.g. a better figure, a new career, and so forth), while the provider of this change (a dietitian, a university) is an elicitor. The elicitor guides the aspirant through a series of experiences designed with a certain purpose and goals. According to Pine and Gilmore, the main difference between an experience and a transformation is that the latter occurs when an experience is customized: “When you customize an experience to make it just right for an individual - providing exactly what he needs right now - you cannot help changing that individual. When you customize an experience, you automatically turn it into a transformation, which companies create on top of experiences (recall that phrase: “a life-transforming experience”), just as they create experiences on top of services and so forth” (p. 244).

A further key difference between experiences and transformations concerns their effects: because an experience is inherently personal, no two people can have the same one. Likewise, no individual can undergo the same transformation twice: the second time it is attempted, the individual would no longer be the same person (pp. 254-255). But what will be the impact of this upcoming “transformation economy” on how people relate to technology? If in the experience economy the buzzword is “User Experience”, in the next stage the new buzzword might be “User Transformation”.

Indeed, we can already see some initial signs of this shift. For example, Fitbit and similar self-tracking gadgets are starting to offer personalized advice to foster enduring changes in users’ lifestyles; another example comes from the fields of ambient intelligence and domotics, where there is an increasing focus on designing systems that are able to learn from the user’s behaviour (e.g. by tracking the movements of an elderly person in their home) to provide context-aware adaptive services (e.g. sending an alert when the user is at risk of falling). But the most important ICT step towards the transformation economy will likely come with the introduction of next-generation immersive virtual reality systems. Since these new systems are based on mobile devices (an example is the recent partnership between Oculus and Samsung), they are able to deliver VR experiences that incorporate information about the external/internal context of the user (e.g. time, location, temperature, mood, etc.) by using the sensors embedded in the mobile phone.

By personalizing the immersive experience with context-based information, it will be possible to induce higher levels of involvement and presence in the virtual environment. In the case of cyber-therapeutic applications, this could translate into the development of more effective, transformative virtual healing experiences.
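As a toy illustration of this kind of context-driven personalization, the sketch below maps a few sensor readings to the choice and tuning of a virtual environment. The sensor fields, thresholds, and scene names are invented for the example; they are not drawn from any shipping VR platform or tracker API.

```python
from dataclasses import dataclass

@dataclass
class Context:
    hour: int            # time of day (0-23), e.g. from the phone's clock
    heart_rate: int      # beats per minute, e.g. from a wearable sensor
    ambient_temp: float  # degrees Celsius, from the phone or a room sensor

def choose_scene(ctx: Context) -> dict:
    """Map the user's current context to a virtual environment and its settings.
    Purely illustrative rules: a real system would learn them from user data."""
    if ctx.heart_rate > 100:
        scene, pacing = "calm beach", "slow"        # stressed user: relaxing environment
    elif ctx.hour >= 21:
        scene, pacing = "starry campfire", "slow"   # late evening: wind-down content
    else:
        scene, pacing = "forest walk", "normal"     # default daytime scene
    lighting = "warm lighting" if ctx.ambient_temp < 18 else "neutral lighting"
    return {"scene": scene, "pacing": pacing, "lighting": lighting}

if __name__ == "__main__":
    print(choose_scene(Context(hour=22, heart_rate=110, ambient_temp=16.0)))
    # e.g. {'scene': 'calm beach', 'pacing': 'slow', 'lighting': 'warm lighting'}
```

A real system would replace these hand-written rules with models learned from the user's own data, but the overall structure (context in, adapted experience out) stays the same.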

LED therapy for neurorehabilitation

A staffer in Dr. Margaret Naeser’s lab demonstrates the equipment built especially for the research: an LED helmet (Photomedex), intranasal diodes (Vielight), and LED cluster heads placed on the ears (MedX Health). The real and sham devices look identical. Goggles are worn to block out the red light to avoid experimental artifacts. The near-infrared light is beyond the visible spectrum and cannot be seen. (credit: Naeser lab)

Researchers at the VA Boston Healthcare System are testing the effects of light therapy on brain function in the Veterans with Gulf War Illness study.

Veterans in the study wear a helmet lined with light-emitting diodes that apply red and near-infrared light to the scalp. They also have diodes placed in their nostrils, to deliver photons to the deeper parts of the brain.

The light is painless and generates no heat. A treatment takes about 30 minutes.

The therapy, though still considered “investigational” and not covered by most health insurance plans, is already used by some alternative medicine practitioners to treat wounds and pain.

The light from the diodes has been shown to boost the output of nitric oxide near where the LEDs are placed, which improves blood flow in that location.

“We are applying a technology that’s been around for a while,” says lead investigator Dr. Margaret Naeser, “but it’s always been used on the body, for wound healing and to treat muscle aches and pains, and joint problems. We’re starting to use it on the brain.”

Naeser is a research linguist and speech pathologist for the Boston VA, and a research professor of neurology at Boston University School of Medicine (BUSM).

How LED therapy works

The LED therapy increases blood flow in the brain, as shown on MRI scans. It also appears to have an effect on damaged brain cells, specifically on their mitochondria. These are bean-shaped subunits within the cell that put out energy in the form of a chemical known as ATP. The red (600 nm) and NIR (800–900nm) wavelengths penetrate through the scalp and skull by about 1 cm to reach brain cells and spur the mitochondria to produce more ATP. That can mean clearer, sharper thinking, says Naeser.

Nitric oxide is also released and diffuses outside the cell membrane, promoting local vasodilation and increased blood flow.

Naeser says brain damage caused by explosions, or exposure to pesticides or other neurotoxins — such as in the Gulf War — could impair the mitochondria in cells. She believes light therapy can be a valuable adjunct to standard cognitive rehabilitation, which typically involves “exercising” the brain in various ways to take advantage of brain plasticity and forge new neural networks.

“The light-emitting diodes add something beyond what’s currently available with cognitive rehabilitation therapy,” says Naeser. “That’s a very important therapy, but patients can go only so far with it. And in fact, most of the traumatic brain injury and PTSD cases that we’ve helped so far with LEDs on the head have been through cognitive rehabilitation therapy. These people still showed additional progress after the LED treatments. It’s likely a combination of both methods would produce the best results.”

Results published from 11 TBI patients

The LED approach has its skeptics, but Naeser’s group has already published some encouraging results in the peer-reviewed scientific literature.

Last June in the Journal of Neurotrauma, they reported, in an open-access paper, the outcomes of LED therapy in 11 patients with chronic TBI, ranging in age from 26 to 62. Most of the injuries occurred in car accidents or on the athletic field. One was a battlefield injury, from an improvised explosive device (IED).

Neuropsychological testing before the therapy and at several points thereafter showed gains in areas such as executive function, verbal learning, and memory. The study volunteers also reported better sleep and fewer PTSD symptoms.

The study authors concluded that the pilot results warranted a randomized, placebo-controlled trial — the gold standard in medical research.

That’s happening now, thanks to VA support. One trial, already underway, aims to enroll 160 Gulf War veterans. Half the veterans will get the real LED therapy for 15 sessions, while the others will get a mock version, using sham lights.

Then the groups will switch, so all the volunteers will end up getting the real therapy, although they won’t know at which point they received it. After each Veteran’s last real or sham treatment, he or she will undergo tests of brain function.

Naeser points out that “because this is a blinded, controlled study, neither the participant nor the assistant applying the LED helmet and the intranasal diodes is aware whether the LEDs are real or sham — they both wear goggles that block out the red LED light.” The near-infrared light is invisible.

Upcoming trials 

Other trials of the LED therapy are getting underway:

  • Later this year, a trial will launch for Veterans age 18 to 55 who have both traumatic brain injury (TBI) and post-traumatic stress disorder, a common combination in recent war Veterans. The VA-funded study will be led by Naeser’s colleague Dr. Jeffrey Knight, a psychologist with VA’s National Center for PTSD and an assistant professor of psychiatry at BUSM.
  • Dr. Yelena Bogdanova, a clinical psychologist with VA and assistant professor of psychiatry at BUSM, will lead a VA-funded trial looking at the impact of LED therapy on sleep and cognition in Veterans with blast TBI.
  • Naeser is collaborating on an Army study testing LED therapy, delivered via the helmets and the nose diodes, for active-duty soldiers with blast TBI. The study, funded by the Army’s Advanced Medical Technology Initiative, will also test the feasibility and effectiveness of using only the nasal LED devices — and not the helmets — as an at-home, self-administered treatment. The study leader is Dr. Carole Palumbo, an investigator with VA and the Army Research Institute of Environmental Medicine, and an associate professor of neurology at BUSM.

Naeser hopes the work will validate LED therapy as a viable treatment for veterans and others with brain difficulties. She also foresees potential for conditions such as depression, stroke, dementia, and even autism.

According to sources cited by the authors, it is estimated that there are 5,300,000 Americans living with TBI-related disabilities. The annual economic cost is estimated to be between $60 billion and $76.5 billion. An estimated 15–40% of soldiers returning from Iraq and Afghanistan as part of Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) report at least one TBI. And within the past 10 years, the diagnosis of concussion in high school sports has increased by 16.5% annually.

The research was supported by the U.S. Department of Veterans Affairs, the National Institutes of Health, the American Medical Society for Sports Medicine, and the American College of Sports Medicine-American Medical Society for Sports Medicine Foundation.

Mar 19, 2015

EEG-Powered Glasses Turn Dark to Help Keep Focus

Via Medgadget

Keeping mental focus while working, studying, or driving can be a serious challenge. A new product looking for funding on Kickstarter may help users maintain focus and train the brain to keep the mind from wandering. The Narbis device is a combination of an EEG sensor and a pair of glasses whose lenses can go from transparent to opaque.

The user puts on the glasses and adjusts the dry EEG electrodes to make contact with the scalp. The EEG component continuously monitors brainwave activity, noticing when the user starts to drift off mentally. When that happens, the glasses fade to darkness and the wearer is effectively forced to snap back to attention. The EEG recognizes fresh activity within the brain, immediately clearing the glasses and letting the wearer get back to the task.
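Narbis has not published its scoring algorithm or an SDK, but the control loop described above can be sketched as a simple threshold rule: an attention score derived from the EEG drives the lens opacity. Everything below (the `read_attention_score` stub, the threshold, the opacity step) is a hypothetical stand-in, not the company's implementation.

```python
import random
import time

ATTENTION_THRESHOLD = 0.5   # assumed cutoff between "focused" and "drifting"
OPACITY_STEP = 0.2          # how fast the lenses darken or clear per cycle

def read_attention_score() -> float:
    """Stand-in for an EEG-derived attention estimate in [0, 1].
    A real device would compute this from band power over the recorded signal."""
    return random.random()

def update_opacity(opacity: float, attention: float) -> float:
    if attention < ATTENTION_THRESHOLD:
        return min(1.0, opacity + OPACITY_STEP)   # mind wandering: darken the lenses
    return max(0.0, opacity - OPACITY_STEP)       # attention is back: clear them

if __name__ == "__main__":
    opacity = 0.0
    for _ in range(10):                           # a few cycles of the feedback loop
        attention = read_attention_score()
        opacity = update_opacity(opacity, attention)
        print(f"attention={attention:.2f}  lens opacity={opacity:.1f}")
        time.sleep(0.5)
```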

The team behind Narbis believes that a couple of sessions per week wearing the device can help improve mental focus even when not using the system. Here's their promo video for the Kickstarter campaign to fund manufacturing of the device.

The neuroscience of mindfulness meditation

The neuroscience of mindfulness meditation.

Nat Rev Neurosci. 2015 Mar 18;

Authors: Tang YY, Hölzel BK, Posner MI

Abstract. Research over the past two decades broadly supports the claim that mindfulness meditation - practiced widely for the reduction of stress and promotion of health - exerts beneficial effects on physical and mental health, and cognitive performance. Recent neuroimaging studies have begun to uncover the brain areas and networks that mediate these positive effects. However, the underlying neural mechanisms remain unclear, and it is apparent that more methodologically rigorous studies are required if we are to gain a full understanding of the neuronal and molecular bases of the changes in the brain that accompany mindfulness meditation.

Ultrasound treats Alzheimer’s disease, restoring memory in mice

Scanning ultrasound treatment of Alzheimer’s disease in mouse model (credit: Gerhard Leinenga and Jürgen Götz/Science Translational Medicine)

University of Queensland researchers have discovered that non-invasive scanning ultrasound (SUS) technology can be used to treat Alzheimer’s disease in mice and restore memory by breaking apart the neurotoxic amyloid-β (Aβ) peptide plaques that result in memory loss and cognitive decline.

The method can temporarily open the blood-brain barrier (BBB), activating microglial cells that digest and remove the amyloid plaques that destroy brain synapses.

Treated AD mice displayed improved performance on three memory tasks: the Y-maze, the novel object recognition test, and the active place avoidance task.

The next step is to scale up the research in higher animal models ahead of human clinical trials, which are at least two years away. In their paper in the journal Science Translational Medicine, the researchers note possible hurdles. For example, the human brain is much larger, and the skull is thicker than in a mouse, which may require higher ultrasound energy that could cause tissue damage. It will also be necessary to avoid excessive immune activation.

The researchers also plan to see whether this method clears toxic protein aggregates in other neurodegenerative diseases and restores executive functions, including decision-making and motor control. It could also be used as a vehicle for drug or gene delivery, since the BBB remains the major obstacle for the uptake by brain tissue of therapeutic agents.

Previous research in treating Alzheimer’s with ultrasound used magnetic resonance imaging (MRI) to focus the ultrasonic energy to open the BBB for more effective delivery of drugs to the brain.

Jan 25, 2015

Blind mom is able to see her newborn baby son for the very first time using high-tech glasses

Kathy Beitz, 29, is legally blind - she lost her vision as a child and, for a long time, adapted to living in a world she couldn't see (Kathy has Stargardt disease, a condition that causes macular degeneration). Technology called eSight glasses allowed Kathy to see her son on the day he was born. The glasses cost $15,000 and work by capturing real-time video and enhancing it.
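eSight has not disclosed its actual image-processing pipeline, so purely as an illustration of "capturing real-time video and enhancing it", the sketch below grabs webcam frames with OpenCV, magnifies the centre of the image, and boosts local contrast. The zoom factor and contrast parameters are arbitrary choices for the example, not eSight's.

```python
import cv2

ZOOM = 2.0  # arbitrary magnification factor, for illustration only

def enhance(frame, zoom: float = ZOOM):
    h, w = frame.shape[:2]
    # Crop the central region and scale it back up (simple digital magnification).
    ch, cw = int(h / zoom), int(w / zoom)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    magnified = cv2.resize(frame[y0:y0 + ch, x0:x0 + cw], (w, h),
                           interpolation=cv2.INTER_LINEAR)
    # Boost local contrast on the luminance channel (CLAHE).
    lab = cv2.cvtColor(magnified, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8)).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)     # default webcam as a stand-in for the headset camera
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("enhanced", enhance(frame))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```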

Dec 16, 2014

Neuroprosthetics

Neuroprosthetics is a relatively new discipline at the boundary of neuroscience and biomedical engineering, which aims at developing implantable devices to restore neural function. The most popular and clinically successful neuroprosthesis to date is the cochlear implant, a device that can restore hearing by directly stimulating the auditory nerve, bypassing damaged hair cells in the cochlea.

Visual prostheses, on the other hand, are still at a preliminary stage of development, although substantial progress has been made in the last few years. These implantable devices are designed to micro-electrically stimulate nerves in the visual system, based on the image from an external camera. These impulses are then propagated to the visual cortex, which processes the information and generates a “pixelated” image. The resulting impression does not have the same quality as natural vision, but it is still useful for performing basic perceptual and motor tasks, such as identifying an object or navigating a room. An example of this approach is the Boston Retinal Implant Project, a large joint collaborative effort that includes, among others, the Harvard Medical School and MIT.
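To get an intuition for the “pixelated” percept described above, the sketch below downsamples a camera image to a small electrode-like grid and maps brightness to a stimulation level per electrode. The grid size and threshold are arbitrary assumptions for illustration, not parameters of the Boston Retinal Implant.

```python
import numpy as np

GRID = (10, 6)  # assumed electrode array layout (columns x rows), purely illustrative

def to_phosphene_map(image: np.ndarray, grid=GRID) -> np.ndarray:
    """Reduce a grayscale image (H x W, values 0-255) to a coarse grid of
    stimulation levels in [0, 1], one value per electrode."""
    h, w = image.shape
    gx, gy = grid
    levels = np.zeros((gy, gx))
    for j in range(gy):
        for i in range(gx):
            patch = image[j * h // gy:(j + 1) * h // gy,
                          i * w // gx:(i + 1) * w // gx]
            levels[j, i] = patch.mean() / 255.0   # brighter patch -> stronger stimulation
    return levels

if __name__ == "__main__":
    # Synthetic "camera frame": a bright vertical bar on a dark background.
    frame = np.zeros((120, 200), dtype=np.uint8)
    frame[:, 90:110] = 255
    for row in to_phosphene_map(frame):
        print(" ".join("#" if v > 0.5 else "." for v in row))
```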

Another area of neuroprosthetics is concerned with the development of implantable devices to help patients with conditions such as spinal cord injury, limb loss, stroke and neuromuscular disorders improve their ability to interact with their environment and communicate. These motor neuroprosthetics are also known as “brain-computer interfaces” (BCI): in essence, devices that decode brain signals representing motor intentions and convert this information into overt device control. This process allows the patient to perform different motor tasks, from writing a text on a virtual keyboard to driving a wheelchair or controlling a prosthetic limb. An impressive evolution of motor neuroprosthetics is the combination of BCI and robotics. For example, Leigh R. Hochberg and colleagues (Nature 485, 372–375; 2012) reported that, using a robotic arm connected to a neural interface called “BrainGate”, two people with long-standing paralysis could control reaching and grasping actions, such as drinking from a bottle.
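As a minimal sketch of the decoding step described above (not BrainGate's actual pipeline, which uses intracortical recordings and far richer models), the code below trains a linear classifier on synthetic band-power features and maps its predictions to device commands.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic band-power features (e.g. power over two motor-cortex channels) for two
# imagined movements; a real BCI would extract these from the recorded neural signal.
rng = np.random.default_rng(0)
left_trials = rng.normal(loc=[1.0, 0.2], scale=0.3, size=(100, 2))   # "move left" imagery
right_trials = rng.normal(loc=[0.2, 1.0], scale=0.3, size=(100, 2))  # "move right" imagery
X = np.vstack([left_trials, right_trials])
y = np.array([0] * 100 + [1] * 100)

decoder = LogisticRegression().fit(X, y)

COMMANDS = {0: "move cursor left", 1: "move cursor right"}

def decode(features: np.ndarray) -> str:
    """Convert one feature vector (a decoded motor intention) into a device command."""
    return COMMANDS[int(decoder.predict(features.reshape(1, -1))[0])]

if __name__ == "__main__":
    print(decode(np.array([1.1, 0.1])))   # expected: move cursor left
    print(decode(np.array([0.1, 0.9])))   # expected: move cursor right
```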

Cognitive neuroprosthetics is a further research direction. A cognitive prosthesis is an implantable device that aims to restore cognitive function to brain-injured individuals by performing the function of the damaged tissue. One of the world’s most advanced efforts in this area is being led by Theodore Berger, a biomedical engineer and neuroscientist at the University of Southern California in Los Angeles. Berger and his colleagues are attempting to develop a microchip-based neural prosthesis for the hippocampus, a region of the brain responsible for long-term memory (IEEE Trans Neural Syst Rehabil Eng 20/2, 198–211; 2012). More specifically, the team is developing a biomimetic model of hippocampal dynamics, which should serve as a neural prosthesis by allowing bi-directional communication with the neural tissue that normally provides the inputs and outputs to and from a damaged hippocampal area.
 

Nov 01, 2014

SoftBank's humanoid robot lands job as Nescafe salesman

(Reuters)

Nestle SA will enlist a thousand humanoid robots to help sell its coffee makers at electronics stores across Japan, becoming the first corporate customer for the chatty, bug-eyed androids unveiled in June by tech conglomerate SoftBank Corp.

Nestle has maintained healthy growth in Japan while many of its big markets are slowing, crediting a tradition of trying out off-beat marketing tactics in what is a small but profitable territory for the world's biggest food group.

The waist-high robot, developed by a French company and manufactured in Taiwan, was touted by Japan's SoftBank as capable of learning and expressing human emotions, and of serving as a companion or guide in a country that faces chronic labor shortages.

Nestle said on Wednesday it would initially commission 20 of the robots, called Pepper, in December to interact with customers and promote its coffee machines. By the end of next year, the maker of Nescafe coffee and KitKat chocolate bars plans to have the robots working at 1,000 stores.

"We hope this new type of made-in-Japan customer service will take off around the world," Nestle Japan President Kohzoh Takaoka said in a statement.

Nestle did not say how much it was paying for Pepper, which SoftBank has said would retail for 198,000 yen ($1,830). The robot is already greeting customers at more than 70 SoftBank mobile phone stores in Japan.

Among Nestle's most successful Japan-only initiatives is the Nescafe Ambassador system, in which individuals stock coffee pods and collect money for them at their offices in exchange for free use of machines and other perks. Nestle wants half a million "ambassadors" by 2020 - nearly quadruple the number now - as it expands into museums, beauty salons and even temples.

The Japanese unit has also developed hundreds of KitKat flavors including wasabi and green tea, and this year rolled out a KitKat that can be baked into cookies.

The latest creation from Aldebaran, Pepper is the first robot designed to live with humans.


Oct 18, 2014

New Technique Helps Diagnose Consciousness in Locked-in Patients

Via Medgadget


Brain networks in two behaviourally similar vegetative patients (left and middle panels), one of whom imagined playing tennis (middle panel), alongside a healthy adult (right panel). Credit: Srivas Chennu

People locked into a vegetative state due to disease or injury are a major mystery for medical science. Some may be fully unconscious, while others remain aware of what’s going on around them but can’t speak or move to show it. Now scientists at Cambridge have reported, in the journal PLOS Computational Biology, a new technique that can help identify locked-in people who can still hear and retain consciousness.

Some details from the study abstract:

We devised a novel topographical metric, termed modular span, which showed that the alpha network modules in patients were also spatially circumscribed, lacking the structured long-distance interactions commonly observed in the healthy controls. Importantly however, these differences between graph-theoretic metrics were partially reversed in delta and theta band networks, which were also significantly more similar to each other in patients than controls. Going further, we found that metrics of alpha network efficiency also correlated with the degree of behavioural awareness. Intriguingly, some patients in behaviourally unresponsive vegetative states who demonstrated evidence of covert awareness with functional neuroimaging stood out from this trend: they had alpha networks that were remarkably well preserved and similar to those observed in the controls. Taken together, our findings inform current understanding of disorders of consciousness by highlighting the distinctive brain networks that characterise them. In the significant minority of vegetative patients who follow commands in neuroimaging tests, they point to putative network mechanisms that could support cognitive function and consciousness despite profound behavioural impairment.

Study in PLOS Computational Biology: Spectral Signatures of Reorganised Brain Networks in Disorders of Consciousness
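Metrics like the “alpha network efficiency” mentioned in the abstract are standard graph-theoretic quantities; the modular-span metric the authors devised is specific to their paper and is not reproduced here. As a rough illustration only, the sketch below builds a graph from a synthetic alpha-band connectivity matrix and computes its global efficiency with networkx; the threshold and matrix are invented for the example.

```python
import numpy as np
import networkx as nx

THRESHOLD = 0.3  # assumed connectivity cutoff for including an edge

def efficiency_from_connectivity(conn: np.ndarray, threshold: float = THRESHOLD) -> float:
    """Build an undirected graph from a symmetric connectivity matrix
    (e.g. alpha-band coherence between EEG channels) and return its global efficiency."""
    n = conn.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if conn[i, j] > threshold:
                g.add_edge(i, j)
    return nx.global_efficiency(g)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    conn = rng.random((16, 16))
    conn = (conn + conn.T) / 2          # make it symmetric, like a coherence matrix
    np.fill_diagonal(conn, 0.0)
    print(f"global efficiency: {efficiency_from_connectivity(conn):.3f}")
```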

 

Oct 17, 2014

Leia Display System - promo video HD short version

www.leiadisplay.com
www.facebook.com/LeiaDisplaySystem

Oct 16, 2014

TIME: Fear, Misinformation, and Social Media Complicate Ebola Fight

From TIME

Based on Facebook and Twitter chatter, it can seem like Ebola is everywhere. Following the first diagnosis of an Ebola case in the United States on Sept. 30, mentions of the virus on Twitter leapt from about 100 per minute to more than 6,000. Cautious health officials have tested potential cases in Newark, Miami Beach and Washington D.C., sparking more worry. Though the patients all tested negative, some people are still tweeting as if the disease is running rampant in these cities. In Iowa the Department of Public Health was forced to issue a statement dispelling social media rumors that Ebola had arrived in the state. Meanwhile, there has been a constant stream of posts claiming that Ebola can be spread through the air, water, or food, all of which are inaccurate claims.

 

Research scientists who study how we communicate on social networks have a name for these people: the “infected.”

Read full story

Oct 12, 2014

MIT Robotic Cheetah

MIT researchers have developed an algorithm for bounding that they've successfully implemented in a robotic cheetah. (Learn more: http://mitsha.re/1uHoltW)

I am not that impressed by the result though.


New Material May Help Us To Breathe Underwater

Scientists in Denmark announced they have developed a substance that absorbs, stores and releases huge amounts of oxygen.

The substance is so effective that just a few grains are capable of storing enough oxygen for a single human breath while a bucket full of the new material could capture an entire room of O2.

With the new material, there are hopes that those requiring medical oxygen might soon be freed from carrying bulky tanks, while SCUBA divers might also be able to use the material to absorb oxygen from water, allowing them to stay submerged for significantly longer.

The substance was developed by tinkering with the molecular structure of cobalt, a silver-gray metallic element also found in meteoric iron.
Read More: University of Southern Denmark

New Scientist on new virtual reality headset Oculus Rift

From New Scientist

The latest prototype of virtual reality headset Oculus Rift allows you to step right into the movies you watch and the games you play

An old man sits across the fire from me, telling a story. An inky dome of star-flecked sky arcs overhead as his words mingle with the crackling of the flames. I am entranced.

This isn't really happening, but it feels as if it is. This is a program called Storyteller – Fireside Tales that runs on the latest version of the Oculus Rift headset, unveiled last month. The audiobook software harnesses the headset's virtual reality capabilities to deepen immersion in the story. (See also "Plot bots")

Fireside Tales is just one example of a new kind of entertainment that delivers convincing true-to-life experiences. Soon films will get a similar treatment.

Movie company 8i, based in Wellington, New Zealand, plans to make films specifically for Oculus Rift. These will be more immersive than just mimicking a real screen in virtual reality because viewers will be able to step inside and explore the movie while they are watching it.

"We are able to synthesise photorealistic views in real-time from positions and directions that were not directly captured," says Eugene d'Eon, chief scientist at 8i. "[Viewers] can not only look around a recorded experience, but also walk or fly. You can re-watch something you love from many different perspectives."

The latest generation of games for Oculus Rift are more innovative, too. Black Hat Oculus is a two-player, cooperative game designed by Mark Sullivan and Adalberto Garza, both graduates of MIT's Game Lab. One headset is for the spy, sneaking through guarded buildings on missions where detection means death. The other player is the overseer, with a God-like view of the world, warning the spy of hidden traps, guards and passageways.

Deep immersion is now possible because the latest Oculus Rift prototype – known as Crescent Bay – finally delivers full positional tracking. This means the images that you see in the headset move in sync with your own movements.

This is the key to unlocking the potential of virtual reality, says Hannes Kaufmann at the Technical University of Vienna in Austria. The headset's high-definition display and wraparound field of view are nice additions, he says, but they aren't essential.

The next step, says Kaufmann, is to allow people to see their own virtual limbs, not just empty space, in the places where their brain expects them to be. That's why Beijing-based motion capture company Perception, which raised more than $500,000 on Kickstarter in September, is working on a full-body suit that captures body pose and gives haptic feedback – a sense of touch – to the wearer. Software like Fireside Tales will then be able to take your body position into account.

In the future, humans will be able to direct live virtual experiences themselves, says Kaufmann. "Imagine you're meeting an alien in the virtual reality, and you want to shake hands. You could have a real person go in there and shake hands with you, but for you only the alien is present."

Oculus, which was bought by Facebook in July for $2 billion, has not yet announced when the headset will be available to buy.

This article appeared in print under the headline "Deep and meaningful"

Oct 06, 2014

Is the metaverse still alive?

In the last decade, online virtual worlds such as Second Life have become enormously popular. Since their appearance on the technology landscape, many analysts regarded shared 3D virtual spaces as a disruptive innovation, one that would render the Web itself obsolete.

This high expectation attracted significant investments from large corporations such as IBM, which started building their own virtual spaces and offices in the metaverse. Then, when it became clear that these promises would not be kept, disillusionment set in and virtual worlds started losing their edge. However, this is not a new phenomenon in high tech; it happens over and over again.

The US consulting company Gartner has developed a very popular model to describe this effect, called the “Hype Cycle”. The Hype Cycle provides a graphic representation of the maturity and adoption of technologies and applications.

It consists of five phases, which describe how emerging technologies typically evolve.

In the first, “technology trigger” phase, a new technology is launched and attracts the interest of the media. This is followed by the “peak of inflated expectations”, characterized by a proliferation of positive articles and comments, which generate over-expectations among users and stakeholders.

In the next, “trough of disillusionment” phase, these exaggerated expectations are not fulfilled, resulting in a growing number of negative comments generally followed by progressive indifference.

In the “slope of enlightenment”, the technology’s potential for further applications becomes more broadly understood and an increasing number of companies start using it.

In the final, “plateau of productivity” stage, the emerging technology establishes itself as an effective tool and mainstream adoption takes off.

So what stage in the hype cycle are virtual worlds now?

After the 2006-2007 peak, metaverses entered the downward phase of the hype cycle, progressively losing media interest, investments and users. Many high-tech analysts still consider this decline an irreversible process.

However, the negative outlook that sent shared virtual worlds into the trough of disillusionment may soon be reversed, thanks to the new interest in virtual reality raised by the Oculus Rift (recently acquired by Facebook for $2 billion), Sony’s Project Morpheus and similar immersive displays, which are still at the take-off stage of the hype cycle.

Oculus Rift's chief scientist Michael Abrash makes no secret of the fact that his main ambition has always been to build a metaverse like the one described in Neal Stephenson's (1992) cyberpunk novel Snow Crash. As he writes on the Oculus blog:

"Sometime in 1993 or 1994, I read Snow Crash and for the first time thought something like the Metaverse might be possible in my lifetime."

Furthermore, despite the negative comments and deluded expectations, the metaverse keeps attracting new users: on its 10th anniversary, June 23rd, 2013, an infographic reported that Second Life had over 1 million monthly visitors around the world, more than 400,000 new accounts per month, and 36 million registered users.

So will Michael Abrash’s metaverse dream come true? Even if one looks into the crystal ball of the hype cycle, the answer is not easily found.

Bionic Vision Australia’s Bionic Eye Gives New Sight to People Blinded by Retinitis Pigmentosa

Via Medgadget

Bionic Vision Australia, a collaboration of researchers working on a bionic eye, has announced that its prototype implant has completed a two-year trial in patients with advanced retinitis pigmentosa. Three patients with profound vision loss received 24-channel suprachoroidal electrode implants that caused no noticeable serious side effects. Moreover, though this was not formally part of the study, the patients were able to see more light and to distinguish shapes that were invisible to them prior to implantation. The newly gained vision allowed them to improve how they navigated around objects and how well they were able to spot items on a tabletop.

The next step is to try out the latest 44-channel device in a clinical trial slated for next year and then move on to a 98-channel system that is currently in development.

“This study is critically important to the continuation of our research efforts and the results exceeded all our expectations,” Professor Mark Hargreaves, Chair of the BVA board, said in a statement. “We have demonstrated clearly that our suprachoroidal implants are safe to insert surgically and cause no adverse events once in place. Significantly, we have also been able to observe that our device prototype was able to evoke meaningful visual perception in patients with profound visual loss.”

Here’s one of the study participants using the bionic eye:

First direct brain-to-brain communication between human subjects

Via KurzweilAI.net

An international team of neuroscientists and robotics engineers has demonstrated the first direct remote brain-to-brain communication between two humans located 5,000 miles apart and communicating via the Internet, as reported in a paper recently published in PLOS ONE (open access).

Emitter and receiver subjects with non-invasive devices supporting, respectively, a brain-computer interface (BCI), based on EEG changes, driven by motor imagery (left) and a computer-brain interface (CBI) based on the reception of phosphenes elicited by neuro-navigated TMS (right) (credit: Carles Grau et al./PLoS ONE)

In India, researchers encoded two words (“hola” and “ciao”) as binary strings and presented them as a series of cues on a computer monitor. They recorded the subject’s EEG signals as the subject was instructed to think about moving his feet (binary 0) or hands (binary 1). They then sent the recorded series of binary values in an email message to researchers in France, 5,000 miles away.

There, the binary strings were converted into a series of transcranial magnetic stimulation (TMS) pulses applied to a hotspot location in the right visual occipital cortex that either produced a phosphene (perceived flash of light) or not.
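The pipeline described above is, at heart, a text-to-bits-to-text channel: words become bit strings on the sender's side (via motor imagery) and bit strings become words again on the receiver's side (via phosphenes). The sketch below reproduces only that coding layer; the 5-bit alphabet and the comments standing in for the EEG and TMS steps are assumptions for illustration, not the exact encoding used in the published study.

```python
# 5-bit encoding of the letters a-z, enough for words like "hola" and "ciao".
BITS_PER_LETTER = 5

def encode_word(word: str) -> list[int]:
    """Sender side: each letter becomes 5 bits; each bit would be conveyed
    by imagining a foot movement (0) or a hand movement (1)."""
    bits = []
    for ch in word.lower():
        value = ord(ch) - ord("a")
        bits.extend(int(b) for b in format(value, f"0{BITS_PER_LETTER}b"))
    return bits

def decode_bits(bits: list[int]) -> str:
    """Receiver side: each bit arrives as a TMS pulse that either evokes a
    phosphene (1) or not (0); groups of 5 bits map back to letters."""
    letters = []
    for i in range(0, len(bits), BITS_PER_LETTER):
        chunk = bits[i:i + BITS_PER_LETTER]
        letters.append(chr(ord("a") + int("".join(str(b) for b in chunk), 2)))
    return "".join(letters)

if __name__ == "__main__":
    for word in ("hola", "ciao"):
        bits = encode_word(word)
        print(word, "->", bits, "->", decode_bits(bits))
```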

“We wanted to find out if one could communicate directly between two people by reading out the brain activity from one person and injecting brain activity into the second person, and do so across great physical distances by leveraging existing communication pathways,” explains coauthor Alvaro Pascual-Leone, MD, PhD, Director of the Berenson-Allen Center for Noninvasive Brain Stimulation at Beth Israel Deaconess Medical Center (BIDMC) and Professor of Neurology at Harvard Medical School.

A team of researchers from Starlab Barcelona, Spain, and Axilum Robotics, Strasbourg, France, conducted the experiment. A second, similar experiment was conducted between individuals in Spain and France.

“We believe these experiments represent an important first step in exploring the feasibility of complementing or bypassing traditional language-based or other motor/PNS mediated means in interpersonal communication,” the researchers say in the paper.

“Although certainly limited in nature (e.g., the bit rates achieved in our experiments were modest even by current BCI (brain-computer interface) standards, mostly due to the dynamics of the precise CBI (computer-brain interface) implementation, these initial results suggest new research directions, including the non-invasive direct transmission of emotions and feelings or the possibility of sense synthesis in humans — that is, the direct interface of arbitrary sensors with the human brain using brain stimulation, as previously demonstrated in animals with invasive methods.

Brain-to-brain (B2B) communication system overview. On the left, the BCI subsystem is shown schematically, including electrodes over the motor cortex and the EEG amplifier/transmitter wireless box in the cap. Motor imagery of the feet codes the bit value 0, of the hands codes bit value 1. On the right, the CBI system is illustrated, highlighting the role of coil orientation for encoding the two bit values. Communication between the BCI and CBI components is mediated by the Internet. (Credit: Carles Grau et al./PLoS ONE)

“The proposed technology could be extended to support a bi-directional dialogue between two or more mind/brains (namely, by the integration of EEG and TMS systems in each subject). In addition, we speculate that future research could explore the use of closed mind-loops in which information associated to voluntary activity from a brain area or network is captured and, after adequate external processing, used to control other brain elements in the same subject. This approach could lead to conscious synthetically mediated modulation of phenomena best detected subjectively by the subject, including emotions, pain and psychotic, depressive or obsessive-compulsive thoughts.

“Finally, we anticipate that computers in the not-so-distant future will interact directly with the human brain in a fluent manner, supporting both computer- and brain-to-brain communication routinely. The widespread use of human brain-to-brain technologically mediated communication will create novel possibilities for human interrelation with broad social implications that will require new ethical and legislative responses.”

This work was partly supported by the EU FP7 FET Open HIVE project, the Starlab Kolmogorov project, and the Neurology Department of the Hospital de Bellvitge.

 

Google Glass can now display captions for hard-of-hearing users

Georgia Institute of Technology researchers have created a speech-to-text Android app for Google Glass that displays captions for hard-of-hearing persons when someone is talking to them in person.

“This system allows wearers like me to focus on the speaker’s lips and facial gestures,” said School of Interactive Computing Professor Jim Foley.

“If hard-of-hearing people understand the speech, the conversation can continue immediately without waiting for the caption. However, if I miss a word, I can glance at the transcription, get the word or two I need and get back into the conversation.”

Captioning on Glass displays captions for the hard-of-hearing (credit: Georgia Tech)

The “Captioning on Glass” app is now available to install from MyGlass. More information here.

Foley and the students are working with the Association of Late Deafened Adults in Atlanta to improve the program. An iPhone app is planned.

Sep 25, 2014

With Cyberith's Virtualizer, you can run around wearing an Oculus Rift
