Jun 30, 2014
Apr 06, 2014
The effects of augmented visual feedback during balance training in Parkinson's disease - trial protocol
The effects of augmented visual feedback during balance training in Parkinson's disease: study design of a randomized clinical trial.
BMC Neurol. 2013;13:137
Authors: van den Heuvel MR, van Wegen EE, de Goede CJ, Burgers-Bots IA, Beek PJ, Daffertshofer A, Kwakkel G
Abstract. BACKGROUND: Patients with Parkinson's disease often suffer from reduced mobility due to impaired postural control. Balance exercises form an integral part of rehabilitative therapy, but the effectiveness of existing interventions is limited. Recent technological advances make it possible to deliver enhanced visual feedback in the context of computer games, which offer an attractive alternative to conventional therapy. The objective of this randomized clinical trial is to investigate whether a training program capitalizing on virtual-reality-based visual feedback is more effective than an equally dosed conventional training in improving standing balance performance in patients with Parkinson's disease.
METHODS/DESIGN: Patients with idiopathic Parkinson's disease will participate in a five-week balance training program comprising ten treatment sessions of 60 minutes each. Participants will be randomly allocated to (1) an experimental group that will receive balance training using augmented visual feedback, or (2) a control group that will receive balance training in accordance with current physical therapy guidelines for Parkinson's disease patients. Training sessions consist of task-specific exercises that are organized as a series of workstations. Assessments will take place before training, at six weeks, and at twelve weeks' follow-up. The functional reach test will serve as the primary outcome measure, supplemented by comprehensive assessments of functional balance, posturography, and electroencephalography. DISCUSSION: We hypothesize that balance training based on visual feedback will yield greater improvements in standing balance performance than conventional balance training. In addition, we expect that learning new control strategies will be visible not only in the co-registered posturographic recordings but also in changes in functional connectivity.
Dec 08, 2013
Take back your mornings with the iMirror – the interactive mirror for your home. Watch the video for a live demo!
Nov 28, 2013
Garnet Hertz's video game concept car combines a car-shaped arcade game cabinet with a real world electric vehicle to produce a video game system that actually drives. OutRun offers a unique mixed reality simulation as one physically drives through an 8-bit video game. The windshield of the system features custom software that transforms the real world into an 8-bit video game, enabling the user to have limitless gameplay opportunities while driving. Hertz has designed OutRun to de-simulate the driving component of a video game: where game simulations strive to be increasingly realistic (usually focused on graphics), this system pursues "real" driving through the game. Additionally, playing off the game-like experience one can have driving with an automobile navigation system, OutRun explores the consequences of using only a computer model of the world as a navigation tool for driving.
More info: http://conceptlab.com/outrun/
Aug 07, 2013
Re-blogged from Medgadget
Google Glass may have been developed to transform the way people see the world around them, but thanks to Dapper Vision’s OpenGlass Project, one doesn’t even need to be able to see to experience the Silicon Valley tech giant’s new spectacles.
Harnessing the power of Google Glass’ built-in camera, the cloud, and the “hive-mind”, visually impaired users will be able to know what’s in front of them. The system consists of two components: Question-Answer uploads pictures taken by the user to Amazon’s Mechanical Turk and Twitter for the public to help identify, while Memento takes video from Glass and uses image matching to identify objects from a database created with the help of sighted users. Information about what the Glass wearer “sees” is read aloud to the user via bone conduction speakers.
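The two components described above amount to a match-first, crowd-fallback pipeline. Here is a minimal sketch of that idea; all names and data are illustrative placeholders, not Dapper Vision's actual code or APIs:

```python
# Sketch of the OpenGlass flow: try local image matching first
# (Memento-style), fall back to crowd identification (Question-Answer
# style). The database and crowd are stand-ins for the real services.

def memento_match(image_id, database):
    """Return a label if the image matches the seeded database."""
    return database.get(image_id)

def question_answer(image_id, crowd):
    """Simulate posting the picture for the public to identify."""
    return crowd(image_id)

def identify(image_id, database, crowd):
    label = memento_match(image_id, database)
    if label is None:                      # no local match: ask the crowd
        label = question_answer(image_id, crowd)
    return f"You are looking at: {label}"  # would be read aloud to the user

# Usage: a database seeded by sighted users, plus a stand-in "crowd".
db = {"img_001": "a red mailbox"}
crowd = lambda image_id: "a coffee shop entrance"

print(identify("img_001", db, crowd))  # resolved locally
print(identify("img_042", db, crowd))  # falls back to the crowd
```

The point of the split is latency: a database hit answers immediately, while the crowd path trades speed for coverage of objects no one has catalogued yet.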
Here’s a video that explains more about how it all works:
Re-blogged from New Scientist
Glass could soon be used for more than just snapping pics of your lunchtime sandwich. A new game will connect Glass wearers to a virtual ant colony vying for prizes by solving real-world problems that vex traditional crowdsourcing efforts.
Crowdsourcing is most famous for collaborative projects like Wikipedia and "games with a purpose" like FoldIt, which turns the calculations involved in protein folding into an online game. All require users to log in to a specific website on their PC.
Estrada and Lawhead have designed a game called Swarm! that puts a Glass wearer in the role of an ant in a colony. Similar to the pheromone trails laid down by ants, players leave virtual trails on a map as they move about. These behave like real ant trails, fading away with time unless reinforced by other people travelling the same route. Such augmented reality games already exist – Google's Ingress, for one – but in Swarm! the tasks have real-world applications.
Swarm! players seek out virtual resources to benefit their colony, such as food, and must avoid crossing the trails of other colony members. They can also monopolise a resource pool by taking photos of its real-world location.
To gain further resources for their colony, players can carry out real-world tasks. For example, if the developers wanted to create a map of the locations of every power outlet in an airport, they could reward players with virtual food for every photo of a socket they took. The photos and location data recorded by Glass could then be used to generate a map that anyone could use. Such problems can only be solved by people out in the physical world, yet the economic incentives aren't strong enough for, say, the airport owner to provide such a map.
Estrada and Lawhead hope that by turning tasks such as these into games, Swarm! will capture the group intelligence ant colonies exhibit when they find the most efficient paths between food sources and the home nest.
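The trail mechanic described above — strength fading with time unless other players reinforce it — is classic pheromone decay. A minimal sketch of that dynamic (my own illustration; the constants and threshold are assumptions, not Swarm!'s actual parameters):

```python
# Minimal pheromone-trail model: each trail segment's strength decays
# exponentially per time step and is reinforced when a player walks it.
# DECAY, REINFORCE and the drop threshold are illustrative values.

DECAY = 0.9        # fraction of strength surviving each time step
REINFORCE = 1.0    # deposit added when a player traverses a segment

def step(trails):
    """Apply one time step of decay; forget segments that fade out."""
    return {seg: s * DECAY for seg, s in trails.items() if s * DECAY > 0.05}

def traverse(trails, segment):
    """A player walks a map segment, reinforcing its trail."""
    trails[segment] = trails.get(segment, 0.0) + REINFORCE
    return trails

trails = traverse({}, "A-B")
for _ in range(3):
    trails = traverse(trails, "A-B")   # a popular route keeps its trail
    trails = step(trails)
trails = traverse(trails, "C-D")       # walked once...
for _ in range(30):
    trails = step(trails)              # ...then abandoned: it fades away

print(trails)  # only the repeatedly reinforced A-B segment survives
```

This decay-plus-reinforcement loop is what lets the colony converge on efficient routes: popular paths stay visible while one-off detours evaporate.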
Jul 30, 2013
Jul 23, 2013
Oct 27, 2012
Mar 31, 2012
Project description: Embodying the concept of hyperreality, the helmet provides a digital experience, immersing the user in an alternative version of reality seen through the helmet. Instead of having a static point of view, the user can navigate through the 3D environment, enabling new behaviours specific to the hyperreal world while still having to physically interact with the real environment. Thus it creates an odd interface between these two states.
The suit is composed of a helmet with high-definition video glasses, an Arduino glove with force sensors controlling the 3D view, and a harness for the Kinect. Each user experience is recorded and analysed, portraying user behaviours during the experience. Immersed in this dream-like virtual space, the user gradually discovers the collection of curiosities. Behaviours are modified, the notion of scale is distorted, all of this pushing the boundaries of the physical space. Venetian masks, stuffed animals and old sculptures start floating in the air around the user, creating a new sensorial experience.
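How a force-sensor glove might steer the 3D view: one plausible mapping is to threshold and scale the raw analog readings into a navigation speed. This is a hypothetical sketch, not the project's actual code; the constants and the linear mapping are my assumptions:

```python
# Hypothetical force-to-motion mapping for the Arduino glove: raw
# analog readings (0-1023, the 10-bit Arduino ADC range) are
# deadzone-filtered and scaled into a forward speed in the 3D scene.
# All constants are illustrative.

DEADZONE = 100      # ignore light touches and sensor noise
MAX_READING = 1023  # 10-bit analog-to-digital ceiling
MAX_SPEED = 2.0     # top speed in the virtual scene (units/second)

def force_to_speed(reading):
    """Map one force-sensor reading to a forward speed."""
    if reading <= DEADZONE:
        return 0.0                      # resting hand: stay put
    span = MAX_READING - DEADZONE
    return MAX_SPEED * (reading - DEADZONE) / span

def update_position(z, reading, dt=0.1):
    """Advance the user's position along the view axis by one frame."""
    return z + force_to_speed(reading) * dt

z = 0.0
for reading in [0, 50, 400, 800, 1023]:  # a gradually tightening grip
    z = update_position(z, reading)
print(round(z, 3))
```

The deadzone matters in practice: without it, sensor noise from a relaxed hand would make the virtual viewpoint drift constantly.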
Oct 17, 2010
Via Augmented Times
This video shows the results achieved in the paper "Build Your World and Play In It: Interacting with Surface Particles on Complex Objects", presented at the conference ISMAR 2010 by Brett Jones and other researchers from the University of Illinois. The paper presents a way to map virtual content onto 3D physical constructions and "play" with them. Nice stuff.
Aug 27, 2010
Keiichi Matsuda did it again. After the success of Domestic Robocop, the architecture graduate and filmmaker was nominated for the Royal Institute of British Architects (RIBA) Silver Medal award for his new video "Augmented City". As in his previous work, in this new video Matsuda describes a future world overlaid with digital information, whose built environment can be manipulated by the individual. In this way, the objective physical world is transformed into a subjective virtual space.
In Matsuda's own words:
Augmented City explores the social and spatial implications of an AR-supported future. 'Users' of the city can browse through channels of the augmented city, creating aggregated customised environments. Identity is constructed and broadcast, while local records and coupons litter the streets. The augmented city is an architectural construct modulated by the user, a liquid city of stratified layers that perceptually exists in the space between the self and the built environment. This subjective space allows us to re-evaluate current trends, and examine our future occupation of the augmented city.
Mar 01, 2010
Via Augmented Times
In this concept demo, the user takes a picture of a historical building and sees it merged with a historical image.
Feb 04, 2010
Yet another nice video about AR
This great video by Keiichi Matsuda shows how augmented reality "may recontextualise the functions of consumerism and architecture, and change the way in which we operate within it". The scenario is also interesting because it suggests how AR may be (ab)used by commercial companies. On the other hand, it is difficult to imagine how AR could go mainstream without them... of course any suggestion is welcome.
Sep 28, 2009
Bionic Eye is the first augmented reality application developed for the iPhone 3GS. A brainchild of Dutch start-up Layar, Bionic Eye enables you to visualize Points of Interest (POI) located in your nearby environment in the US.
POI databases include restaurants, WiFi hotspots, subway stations (New York Subway, Washington Metro, Chicago L Rapid Transit), etc. Over 100,000 POIs are already included in this application. Only elements located less than 1 km (0.62 miles) away will be displayed on the screen.
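The 1 km cut-off boils down to a great-circle distance filter over the POI database. A minimal sketch using the haversine formula — my own illustration, not Layar's code, and the example POIs and coordinates are just sample data:

```python
import math

# Filter POIs down to those within the display cut-off of the user,
# using the haversine great-circle distance. The POI list below is
# illustrative sample data.

EARTH_RADIUS_KM = 6371.0
CUTOFF_KM = 1.0  # only POIs closer than this are drawn on screen

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def visible_pois(user_lat, user_lon, pois):
    """Return the names of POIs inside the display cut-off."""
    return [name for name, lat, lon in pois
            if haversine_km(user_lat, user_lon, lat, lon) < CUTOFF_KM]

# Example: a user near Times Square and three candidate POIs.
pois = [
    ("Times Sq subway", 40.7553, -73.9870),
    ("Grand Central",   40.7527, -73.9772),
    ("Coney Island",    40.5749, -73.9857),  # ~20 km away: filtered out
]
print(visible_pois(40.7580, -73.9855, pois))
```

At a 1 km radius the haversine formula is overkill (a flat-earth approximation would do), but it stays correct if the cut-off is ever widened.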
Sep 21, 2009
Cool video by Nokia Future Tech lab on the next generation of Mixed Reality gadgets: gaze-tracking eyewear that allows browsing and selecting with your eyes; 3D audio to find and hear spatialized sounds... and more.
Check it out: