Oct 17, 2010
Via Augmented Times
This video shows the results achieved in the paper "Build Your World and Play In It: Interacting with Surface Particles on Complex Objects", presented at ISMAR 2010 by Brett Jones and fellow researchers from the University of Illinois. The paper presents a way to map virtual content onto 3D physical constructions and "play" with them. Nice stuff.
Aug 27, 2010
Keiichi Matsuda did it again. After the success of Domestic Robocop, the architecture graduate and filmmaker was nominated for the Royal Institute of British Architects (RIBA) Silver Medal award for his new video "Augmented City". As in his previous work, in this new video Matsuda describes a future world overlaid with digital information, whose built environment can be manipulated by the individual. In this way, the objective physical world is transformed into a subjective virtual space.
In Matsuda's own words:
Augmented City explores the social and spatial implications of an AR-supported future. 'Users' of the city can browse through channels of the augmented city, creating aggregated customised environments. Identity is constructed and broadcast, while local records and coupons litter the streets. The augmented city is an architectural construct modulated by the user, a liquid city of stratified layers that perceptually exists in the space between the self and the built environment. This subjective space allows us to re-evaluate current trends, and examine our future occupation of the augmented city.
Mar 01, 2010
Via Augmented Times
In this concept demo, the user takes a picture of a historical building and sees it merged with a historical image of the same site.
Feb 04, 2010
Yet another nice video about AR
This great video by Keiichi Matsuda shows how augmented reality "may recontextualise the functions of consumerism and architecture, and change in the way in which we operate within it". The scenario is also interesting because it suggests how AR may be (ab)used by commercial companies. On the other hand, it is difficult to imagine how AR could go mainstream without them... of course any suggestion is welcome.
Sep 28, 2009
Bionic Eye is the first augmented reality application developed for the iPhone 3GS. A brainchild of Dutch start-up Layar, Bionic Eye enables you to visualize Points of Interest (POI) located in your nearby environment in the US.
POI databases include restaurants, WiFi hotspots, subway stations (New York Subway, Washington Metro, Chicago "L" rapid transit), etc. Over 100,000 POIs are already included in this application. Only elements located within 1 km (0.62 miles) will be displayed on the screen.
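That 1 km cutoff is essentially a great-circle distance filter over the POI database. A minimal sketch of the idea in Python (the POI data and field names here are made up for illustration, not taken from Layar's actual app):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def visible_pois(user_lat, user_lon, pois, max_km=1.0):
    """Keep only the POIs within max_km of the user's position."""
    return [p for p in pois
            if haversine_km(user_lat, user_lon, p["lat"], p["lon"]) <= max_km]

# Hypothetical POIs, user standing near Times Square in New York
pois = [
    {"name": "Subway: Times Sq-42 St", "lat": 40.7557, "lon": -73.9871},
    {"name": "Subway: Grand Central",  "lat": 40.7527, "lon": -73.9772},
    {"name": "Statue of Liberty",      "lat": 40.6892, "lon": -74.0445},
]
nearby = visible_pois(40.7580, -73.9855, pois)
```

In practice an app with 100,000+ POIs would first narrow candidates with a spatial index or bounding box before computing exact distances, but the visible set is the same.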
Sep 25, 2009
Via Pink Tentacle
Miruko is a camera robot in the shape of an eyeball capable of tracking objects and faces. Worn on the player’s sleeve, Miruko’s roving eye scans the surroundings in search of virtual monsters that are invisible to the naked human eye. When a virtual monster is spotted, the mechanical eyeball rolls around in its socket and fixes its gaze on the monster’s location. By following Miruko’s line of sight, the player is able to locate the virtual monster and “capture” it via his or her iPhone camera.
In this video, Miruko’s creators demonstrate how the robotic eyeball can be used as an interface for a virtual monster-hunting game played in a real-world environment.
According to its creators, Miruko can be used for augmented reality games, security, and navigation.
Jun 05, 2009
General Electric has a great mini-site up showcasing their newest energy services and smarter power management tools.
But the most intriguing part of the site is the augmented reality applications that you can play with using your computer’s webcam.
You print out a sheet of paper that the webcam "sees", and GE's augmented reality program builds a virtual hologram on top of it.
Check out the demo video and then try the AR apps here
Apr 14, 2009
Mar 16, 2007
Egocentric depth judgments in optical, see-through augmented reality.
IEEE Trans Vis Comput Graph. 2007 May-Jun;13(3):429-42
Authors: Swan JE II, Jones A, Kolstad E, Livingston MA, Smallman HS
Abstract: A fundamental problem in optical, see-through augmented reality (AR) is characterizing how it affects the perception of spatial layout and depth. This problem is important because AR system developers need to both place graphics in arbitrary spatial relationships with real-world objects, and to know that users will perceive them in the same relationships. Furthermore, AR makes possible enhanced perceptual techniques that have no real-world equivalent, such as x-ray vision, where AR users are supposed to perceive graphics as being located behind opaque surfaces. This paper reviews and discusses protocols for measuring egocentric depth judgments in both virtual and augmented environments, and discusses the well-known problem of depth underestimation in virtual environments. It then describes two experiments that measured egocentric depth judgments in AR. Experiment I used a perceptual matching protocol to measure AR depth judgments at medium and far-field distances of 5 to 45 meters. The experiment studied the effects of upper versus lower visual field location, the x-ray vision condition, and practice on the task. The experimental findings include evidence for a switch in bias, from underestimating to overestimating the distance of AR-presented graphics, at approximately 23 meters, as well as a quantification of how much more difficult the x-ray vision condition makes the task. Experiment II used blind walking and verbal report protocols to measure AR depth judgments at distances of 3 to 7 meters. The experiment examined real-world objects, real-world objects seen through the AR display, virtual objects, and combined real and virtual objects. The results give evidence that the egocentric depth of AR objects is underestimated at these distances, but to a lesser degree than has previously been found for most virtual reality environments. The results are consistent with previous studies that have implicated a restricted field-of-view, combined with an inability for observers to scan the ground plane in a near-to-far direction, as explanations for the observed depth underestimation.
Deadline: Friday April 27, 2007 :: Contributions are welcomed for a new book addressing the construction and interpretation of virtual artefacts within virtual world museums and within physical museum spaces. Particular emphasis is placed on theories of spatiality and strategies of interpretation.
The editors seek papers that intervene in critical discourses surrounding virtual reality and virtual artefacts, to explore the rapidly changing temporal, spatial and theoretical boundaries of contemporary museum display practice. We are especially interested in spatiality as it is employed in the construction of virtual artefacts, as well as the roles these spaces enact as signifiers of historical narrative and sites of social interaction.
We are also interested in the relationship between real-world museums and virtual world museums, with a view to interrogating the construction of meaning within, across and between both. We welcome original scholarly contributions on the topic of new cultural practices and communities related to virtual reality in the context of museum display practice. Papers might address, but are in no way limited to, the following:
* Authenticity and artificiality
* Exploration and discovery
* Physical vs virtual
* Representation/interpretation of virtual reality artefacts - as 3D spaces on screen or in a physical gallery
* Museum visiting in virtual space
* Representation of physical museum spaces in virtual worlds and their relationship to cultural definitions of museum spaces.
Please send a proposal of 500-750 words and a contributor's bio by Friday April 27, 2007. Authors will be notified by Thursday May 31, 2007. Final drafts of papers are due by Monday October 1, 2007.
Please send your proposal to:
Strategic Research Unit
113 Chancery Lane
London WC2A 1PL
Or via email: tara.chittenden[at]lawsociety.org.uk
Mediamatic organizes a new workshop--Hybrid World Lab--in which the participants develop prototypes for hybrid world media applications. Where the virtual world and the physical world used to be quite separate realms of reality, they are quickly becoming two faces of the same hybrid coin. This workshop investigates the increasingly intimate fusion of digital and physical space from the perspective of a media maker.
The workshop is an intense process in which the participants explore the possibilities of the physical world as an interface to online media: location-based media, everyday objects as media interfaces, urban screens, and cultural applications of RFID technology. Every morning, lectures and lessons bring in new perspectives, project presentations, and introductions to the hands-on workshop tools. Every afternoon the participants work on their own workshop projects. Over the five workshop days, every participant will develop a prototype of a hybrid world media project, assisted by outstanding international trainers, lecturers, and technical assistants. The workshop closes with a public presentation in which the issues are discussed and the results are shown.
Topics: Some of the topics that will be investigated in this workshop are: cultural application and impact of RFID technology and the internet of things; using RFID in combination with other kinds of sensors; ubiquitous computing (ubicomp) and ambient intelligence, i.e. services and applications that use chips embedded in household appliances and in public space; locative media tools, car navigation systems, GPS tools, and location-sensitive mobile phones; the web as interface to the physical world, with geotagging and mashups with Google Maps & Google Earth; and games in hybrid space.
Nov 24, 2006
Prompted by Layla Nassary Zadeh, I wanted to understand something more about the possibility of implementing augmented reality on mobile devices. Much to my surprise, this field is more advanced than I expected.
For example, a team of researchers from Nokia's Mobile Augmented Reality Applications (MARA) project has created a prototype phone that makes objects in the real world hyperlink to information on the Internet. Using the phone's built in camera, a user can highlight objects on the mobile phone's LCD and pull in additional information about them from the Internet. Moreover, by altering the orientation of the phone, the display will toggle between live view and satellite map view. In map view, nearby real world objects are highlighted for convenient reference.
The prototype consists of a Nokia S60 platform phone and an attached external sensor box that provides position and orientation information to the phone via a Bluetooth connection.
This video of downtown Helsinki shows some landmarks with associated virtual objects, and demonstrates the automatic switching between Augmented Reality mode and Map mode that happens when the user alternates between holding the phone vertically and horizontally.
MARA was demonstrated at the fifth IEEE and ACM International Symposium on Mixed and Augmented Reality in Santa Barbara in October.
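The vertical/horizontal switching described above comes down to thresholding the pitch angle reported by the sensor box. A minimal sketch of such mode selection (the threshold values and hysteresis band are my assumptions, not details from the MARA prototype):

```python
AR_MODE, MAP_MODE = "AR", "MAP"

class ModeSwitcher:
    """Pick AR view when the phone is held upright (camera facing forward)
    and Map view when it is held flat, with hysteresis so the display does
    not flicker near the boundary. Pitch is in degrees: 0 = flat, 90 = upright."""

    def __init__(self, up_threshold=60.0, flat_threshold=30.0):
        self.up = up_threshold      # must tilt past this to enter AR mode
        self.flat = flat_threshold  # must drop below this to fall back to Map mode
        self.mode = MAP_MODE

    def update(self, pitch_deg):
        if self.mode == MAP_MODE and pitch_deg >= self.up:
            self.mode = AR_MODE
        elif self.mode == AR_MODE and pitch_deg <= self.flat:
            self.mode = MAP_MODE
        return self.mode

sw = ModeSwitcher()
readings = [5, 20, 70, 65, 45, 25, 10]   # simulated pitch samples over time
modes = [sw.update(p) for p in readings]
# the 45-degree sample stays in AR mode thanks to the hysteresis band
```

The two separate thresholds are the design point: with a single cutoff, sensor noise around it would toggle the display back and forth on every frame.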
Oct 13, 2006
Saturday 21 October 2006, Erasmus Medical Center, Rotterdam. 11.00-12.30: Bioinformatics dept., Faculty building, 15th floor; 12.30-18.00: Sophia Children's Hospital, Cinema, 3rd floor.
Test_Lab: Immersive Mixed Reality Environments is the product of a unique collaboration between the Erasmus Medical Centre and V2_, Institute for the Unstable Media, with the aim of opening the dialogue between scientists and artists who apply Virtual Reality in their research and art practice. The event consists of demonstrations by Virtual Reality artists and scientists providing hands-on experiences with Immersive Mixed Reality Environments, and presentations by renowned international speakers presenting the latest in Virtual Reality in science and art. See below for the program details, a description of the projects that will be demonstrated, and the invited speakers that will present their work in the seminar.
Test_Lab is a bi-monthly public event hosted by V2_ that provides an informal setting to demonstrate, test, present, and/or discuss artistic research and development (aRt&D).
The event is free of charge, but registration is required before the 19th of October. For further information and registration please contact Remco Beeskow at firstname.lastname@example.org (tel: +31 (0)10 206 72 72) or Fred Balvert at f.balvert[at]erasmusmc.nl (tel: +31(0)6 41431721). Also visit www.v2.nl and www.erasmusmc.nl
Oct 01, 2006
Re-blogged from information aesthetics
SCACS is a "Social Context-Aware Communication System" that collects information on social networks (e.g. academic co-authorship networks) and visualizes it on wearable interfaces to facilitate face-to-face communication among people in physical environments. RFID sensors sense the identity of specific people (e.g. authors) nearby, and a wearable computer transforms the complex social network graphs into treemaps, which are then shown as augmented reality on a wearable interface (or head-mounted display).
link: aist-nara.ac.jp (pdf)
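Turning a node's weighted neighbourhood into a treemap, as SCACS does, can be done with the classic slice-and-dice layout. A rough single-level sketch (the co-author names and counts are invented for illustration; the paper linked above describes the actual system):

```python
def slice_and_dice(items, x, y, w, h, vertical=True):
    """Classic slice-and-dice treemap: split the rectangle (x, y, w, h)
    into strips whose areas are proportional to each item's weight.
    A full treemap would recurse into sub-groups, alternating the
    split direction at each level; this sketch lays out one flat level."""
    total = sum(weight for _, weight in items)
    rects, offset = [], 0.0
    for name, weight in items:
        frac = weight / total
        if vertical:  # split along the x axis into vertical strips
            rects.append((name, (x + offset, y, w * frac, h)))
            offset += w * frac
        else:         # split along the y axis into horizontal strips
            rects.append((name, (x, y + offset, w, h * frac)))
            offset += h * frac
    return rects

# Hypothetical co-author counts for the person standing nearby
coauthors = [("Alice", 6), ("Bob", 3), ("Carol", 1)]
layout = slice_and_dice(coauthors, 0, 0, 100, 60)
# Alice gets 60% of the width, Bob 30%, Carol 10%
```

The appeal for a head-mounted display is that a treemap packs the whole weighted neighbourhood into one space-filling rectangle, which fits a small screen far better than a sprawling node-link graph.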
Jul 05, 2006
The MagicBook explores seamless transition between reality and virtual reality. When users look at the pages of a real book through a hand-held display, they are able to see virtual content superimposed over the real pages; that is augmented reality. When they see an augmented reality scene they like, users can fly into the scene and experience it as an immersive virtual environment. Currently the user can transition smoothly between these two fixed viewing modes: the augmented reality view and the virtual reality view.