Apr 27, 2016

Predictive Technologies: Can Smart Tools Augment the Brain's Predictive Abilities?

Pezzulo, G., D'Ausilio, A., Gaggioli, A.
Frontiers in Neuroscience 10(186) · May 2016 
Abstract. The ability of “looking into the future”—namely, the capacity of anticipating future states of the environment or of the body—represents a fundamental function of human (and animal) brains. A goalkeeper who tries to guess the ball’s direction; a chess player who attempts to anticipate the opponent’s next move; or a man-in-love who tries to calculate what are the chances of her saying yes—in all these cases, people are simulating possible future states of the world, in order to maximize the success of their decisions or actions. Research in neuroscience is showing that our ability to predict the behavior of physical or social phenomena is largely dependent on the brain’s ability to integrate current and past information to generate (probabilistic) simulations of the future. But could predictive processing be augmented using advanced technologies? In this contribution, we discuss how computational technologies may be used to support, facilitate or enhance the prediction of future events, by considering exemplificative scenarios across different domains, from simpler sensorimotor decisions to more complex cognitive tasks. We also examine the key scientific and technical challenges that must be faced to turn this vision into reality.

Download full text

Oct 06, 2014

Google Glass can now display captions for hard-of-hearing users

Georgia Institute of Technology researchers have created a speech-to-text Android app for Google Glass that displays captions for hard-of-hearing persons when someone is talking to them in person.

“This system allows wearers like me to focus on the speaker’s lips and facial gestures,” said School of Interactive Computing Professor Jim Foley.

“If hard-of-hearing people understand the speech, the conversation can continue immediately without waiting for the caption. However, if I miss a word, I can glance at the transcription, get the word or two I need and get back into the conversation.”

Captioning on Glass displays captions for the hard-of-hearing (credit: Georgia Tech)

The “Captioning on Glass” app is now available to install from MyGlass. More information here.

Foley and the students are working with the Association of Late Deafened Adults in Atlanta to improve the program. An iPhone app is planned.
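The capture-and-display loop described above can be sketched in a few lines (a hypothetical illustration, not Georgia Tech's actual app; `caption_loop`, `max_chars`, and the line-wrapping behaviour are assumptions):

```python
# Minimal sketch of a Captioning-on-Glass-style pipeline: the phone
# transcribes speech and streams text chunks to the head-mounted display,
# which keeps the current line short so the wearer can glance at it and
# recover a missed word. All names here are invented for illustration.

def caption_loop(transcripts, display_lines, max_chars=40):
    """Consume transcribed phrases, appending one display line per update."""
    line = ""
    for phrase in transcripts:
        line = (line + " " + phrase).strip()
        if len(line) > max_chars:   # line too long: start fresh on the display
            line = phrase
        display_lines.append(line)
    return display_lines
```

The key interaction design point from Foley's quote is that captioning is a fallback, not the primary channel: the display only needs to hold the last word or two, not a full transcript.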

Jun 30, 2014

Never do a Tango with an Eskimo

Apr 06, 2014

The effects of augmented visual feedback during balance training in Parkinson's disease - trial protocol

The effects of augmented visual feedback during balance training in Parkinson's disease: study design of a randomized clinical trial.

BMC Neurol. 2013;13:137

Authors: van den Heuvel MR, van Wegen EE, de Goede CJ, Burgers-Bots IA, Beek PJ, Daffertshofer A, Kwakkel G

Abstract. BACKGROUND: Patients with Parkinson's disease often suffer from reduced mobility due to impaired postural control. Balance exercises form an integral part of rehabilitative therapy but the effectiveness of existing interventions is limited. Recent technological advances allow for providing enhanced visual feedback in the context of computer games, which provide an attractive alternative to conventional therapy. The objective of this randomized clinical trial is to investigate whether a training program capitalizing on virtual-reality-based visual feedback is more effective than an equally-dosed conventional training in improving standing balance performance in patients with Parkinson's disease.
METHODS/DESIGN: Patients with idiopathic Parkinson's disease will participate in a five-week balance training program comprising ten treatment sessions of 60 minutes each. Participants will be randomly allocated to (1) an experimental group that will receive balance training using augmented visual feedback, or (2) a control group that will receive balance training in accordance with current physical therapy guidelines for Parkinson's disease patients. Training sessions consist of task-specific exercises that are organized as a series of workstations. Assessments will take place before training, at six weeks, and at twelve weeks follow-up. The functional reach test will serve as the primary outcome measure, supplemented by comprehensive assessments of functional balance, posturography, and electroencephalography.

DISCUSSION: We hypothesize that balance training based on visual feedback will show greater improvements on standing balance performance than conventional balance training. In addition, we expect that learning new control strategies will be visible not only in the co-registered posturographic recordings but also through changes in functional connectivity.

Dec 08, 2013


Take back your mornings with the iMirror – the interactive mirror for your home. Watch the video for a live demo!

Nov 28, 2013

OutRun: Augmented Reality Driving Video Game

Garnet Hertz's video game concept car combines a car-shaped arcade game cabinet with a real world electric vehicle to produce a video game system that actually drives. OutRun offers a unique mixed reality simulation as one physically drives through an 8-bit video game. The windshield of the system features custom software that transforms the real world into an 8-bit video game, enabling the user to have limitless gameplay opportunities while driving. Hertz has designed OutRun to de-simulate the driving component of a video game: where game simulations strive to be increasingly realistic (usually focused on graphics), this system pursues "real" driving through the game. Additionally, playing off the game-like experience one can have driving with an automobile navigation system, OutRun explores the consequences of using only a computer model of the world as a navigation tool for driving.

More info: http://conceptlab.com/outrun/

Aug 07, 2013

OpenGlass Project Makes Google Glass Useful for the Visually Impaired

Re-blogged from Medgadget

Google Glass may have been developed to transform the way people see the world around them, but thanks to Dapper Vision’s OpenGlass Project, one doesn’t even need to be able to see to experience the Silicon Valley tech giant’s new spectacles.

Harnessing the power of Google Glass’ built-in camera, the cloud, and the “hive-mind”, visually impaired users will be able to know what’s in front of them. The system consists of two components: Question-Answer uploads pictures taken by the user to Amazon’s Mechanical Turk and Twitter, where the public helps identify them, while Memento takes video from Glass and uses image matching to identify objects from a database created with the help of sighted users. Information about what the Glass wearer “sees” is read aloud to the user via bone conduction speakers.
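The two components could be wired together roughly as follows (a minimal sketch; `describe_scene`, `reference_db`, and `crowd_queue` are hypothetical names, not Dapper Vision's actual API):

```python
# Illustrative sketch of the OpenGlass flow: try fast Memento-style image
# matching first, then fall back to Question-Answer crowdsourcing.
# All identifiers here are invented for illustration.

def describe_scene(photo_id, reference_db, crowd_queue):
    """Return a description for a photo to be read aloud to the wearer."""
    # Memento: match against scenes already annotated by sighted users;
    # fast, but only works for previously catalogued objects and places.
    description = reference_db.get(photo_id)
    if description is not None:
        return description
    # Question-Answer: queue the photo for human workers (e.g. Mechanical
    # Turk), which is slower but handles novel scenes.
    crowd_queue.append(photo_id)
    return "Asking the crowd; the answer will be read aloud when it arrives."
```

Either path ends the same way for the user: the resulting text is spoken through Glass's bone conduction speaker.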

Here’s a video that explains more about how it all works:

Hive-mind solves tasks using Google Glass ant game

Re-blogged from New Scientist

Glass could soon be used for more than just snapping pics of your lunchtime sandwich. A new game will connect Glass wearers to a virtual ant colony vying for prizes by solving real-world problems that vex traditional crowdsourcing efforts.

Crowdsourcing is most famous for collaborative projects like Wikipedia and "games with a purpose" like FoldIt, which turns the calculations involved in protein folding into an online game. All require users to log in to a specific website on their PC.

Now Daniel Estrada of the University of Illinois in Urbana-Champaign and Jonathan Lawhead of Columbia University in New York are seeking to bring crowdsourcing to Google's wearable computer, Glass.

The pair have designed a game called Swarm! that puts a Glass wearer in the role of an ant in a colony. Similar to the pheromone trails laid down by ants, players leave virtual trails on a map as they move about. These behave like real ant trails, fading away with time unless reinforced by other people travelling the same route. Such augmented reality games already exist – Google's Ingress, for one – but in Swarm! the tasks have real-world applications.
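The fade-and-reinforce trail mechanic can be illustrated with a toy model (a sketch under assumptions; `TrailMap` and its decay parameters are invented for illustration, not the game's code):

```python
# Toy model of Swarm!-style virtual pheromone trails: trail strength on
# each map cell decays every time step and is reinforced whenever a player
# travels the same route. Parameter values are arbitrary.

class TrailMap:
    def __init__(self, decay=0.9, deposit=1.0):
        self.decay = decay      # fraction of strength kept per time step
        self.deposit = deposit  # strength added when a player crosses a cell
        self.strength = {}      # (x, y) grid cell -> current trail strength

    def visit(self, cell):
        """A player walks over `cell`, laying down or reinforcing a trail."""
        self.strength[cell] = self.strength.get(cell, 0.0) + self.deposit

    def tick(self):
        """One time step: all trails fade; negligible ones disappear."""
        self.strength = {c: s * self.decay
                         for c, s in self.strength.items()
                         if s * self.decay > 0.01}
```

As in real ant colonies, frequently travelled routes stay strong while unused ones vanish, which is what lets the swarm converge on efficient paths.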

Swarm! players seek out virtual resources to benefit their colony, such as food, and must avoid crossing the trails of other colony members. They can also monopolise a resource pool by taking photos of its real-world location.

To gain further resources for their colony, players can carry out real-world tasks. For example, if the developers wanted to create a map of the locations of every power outlet in an airport, they could reward players with virtual food for every photo of a socket they took. The photos and location data recorded by Glass could then be used to generate a map that anyone could use. Such problems can only be solved by people out in the physical world, yet the economic incentives aren't strong enough for, say, the airport owner to provide such a map.

Estrada and Lawhead hope that by turning tasks such as these into games, Swarm! will capture the group intelligence ant colonies exhibit when they find the most efficient paths between food sources and the home nest.

Read full story

Jul 30, 2013

How it feels (through Google Glass)

Jul 23, 2013

Augmented Reality - Projection Mapping

Augmented Reality - Projection Mapping from Dane Luttik on Vimeo.

Oct 27, 2012

3D Projection Mapping

Mar 31, 2012

Hyper(reality) - The Last Tuesday Society

Project's description: Embodying the concept theorized by hyperrealism, the helmet provides a digital experience, immersing the user in an alternative version of reality seen through the helmet. Instead of having a static point of view, the user can navigate through the 3D environment, enabling new behaviours specific to the hyperreal world while still having to physically interact with the real environment. This creates an odd interface between the two states.

Hyper(reality) - The Last Tuesday Society from Maxence

The suit is composed of a helmet with high-definition video glasses, an Arduino glove with force sensors controlling the 3D view, and a harness for the Kinect. Each user experience is recorded and analysed, portraying user behaviours during the experience. Immersed in this dream-like virtual space, the user gradually discovers the collection of curiosities. Behaviours are modified and the notion of scale is distorted, pushing the boundaries of the physical space. Venetian masks, stuffed animals and old sculptures start floating in the air around the user, creating a new sensorial experience.

Oct 17, 2010

Mapping virtual content on 3D physical constructions

Via Augmented Times

This video shows the results achieved in the paper "Build Your World and Play In It: Interacting with Surface Particles on Complex Objects" presented at the conference ISMAR 2010 by Brett Jones and other researchers from the University of Illinois. The paper presents a way to map virtual content onto 3D physical constructions and "play" with them. Nice stuff.

Aug 27, 2010

Augmented City

Keiichi Matsuda did it again. After the success of Domestic Robocop, the architecture graduate and filmmaker earned a nomination for the Royal Institute of British Architects (RIBA) Silver Medal award for his new video "Augmented City". As in his previous work, in this new video Matsuda describes a future world overlaid with digital information, whose built environment can be manipulated by the individual. In this way, the objective physical world is transformed into a subjective virtual space.

In Matsuda's own words:

Augmented City explores the social and spatial implications of an AR-supported future. 'Users' of the city can browse through channels of the augmented city, creating aggregated customised environments. Identity is constructed and broadcast, while local records and coupons litter the streets. The augmented city is an architectural construct modulated by the user, a liquid city of stratified layers that perceptually exists in the space between the self and the built environment. This subjective space allows us to re-evaluate current trends, and examine our future occupation of the augmented city.


Mar 01, 2010

LevelHead v1.0

Via: Lorenzo Romeo

LevelHead, an augmented-reality spatial-memory game by Julian Oliver.

Augmented Reality Tattoos

Via Sketchin

ThinkAnApp’s augmented reality tattoo transforms into a wing-flapping dragon when viewed via webcam.

Concept demo of an AR application

Via Augmented Times

In this concept demo, the user takes a picture of a historical building and sees the image merged with a historical image.

Feb 04, 2010

Yet another nice video about AR

Augmented (hyper)Reality

Via Leandeer

This great video by Keiichi Matsuda shows how augmented reality "may recontextualise the functions of consumerism and architecture, and change in the way in which we operate within it". The scenario is also interesting because it suggests how AR may be (ab)used by commercial companies. On the other hand, it is difficult to imagine how AR could go mainstream without them... of course any suggestion is welcome.

Augmented (hyper)Reality: Domestic Robocop from Keiichi Matsuda on Vimeo.
