Apr 14, 2009
Oct 27, 2007
Re-blogged from Networked Performance
levelHead is an interactive game that uses a cube, a webcam, and pattern recognition. When the cube is rotated or tilted in front of the camera, the user can see ‘inside’ the cube and guide a small avatar through six different rooms.
Pattern recognition has already been used in several other projects, but this is a new way of using it, and a new way of thinking about the technology. The idea behind the game itself is simple: when the cube is tilted, the avatar moves in the corresponding direction. The goal of the game is to guide him through a maze of rooms connected by doors and lead him to the outside world.
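The tilt-to-movement mechanic could be sketched roughly as follows. This is a hypothetical illustration, not levelHead's actual code (which had not yet been released): it assumes the pattern-recognition stage has already estimated the cube's pitch and roll from the printed markers, and derives the avatar's walk vector from the "downhill" direction.

```python
import math

def tilt_to_direction(pitch_deg, roll_deg, dead_zone=5.0):
    """Map the cube's tilt to an avatar walk vector.

    pitch_deg (forward/backward) and roll_deg (left/right) are assumed
    to come from a marker-pose estimator; the 5-degree dead zone is an
    arbitrary choice so a hand-held cube can rest "level".
    Returns a unit (dx, dy) vector, or (0.0, 0.0) inside the dead zone.
    """
    # Gravity pulls the avatar downhill: tilting right moves +x,
    # tilting forward moves +y.
    dx = math.sin(math.radians(roll_deg))
    dy = math.sin(math.radians(pitch_deg))
    magnitude = math.hypot(dx, dy)
    if magnitude < math.sin(math.radians(dead_zone)):
        return (0.0, 0.0)  # cube is (almost) level: avatar stands still
    return (dx / magnitude, dy / magnitude)
```

Tilting the cube 30 degrees forward, for example, yields a pure +y walk direction.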
According to its creator, Julian Oliver, the game is currently in development but will soon be released as open source.
Check out the explanatory video.
Jul 11, 2007
Re-blogged from Networked Performance
Linden Lab, the company behind Second Life, hopes to introduce hand-held and wearable systems that act as gateways between the real and virtual worlds. Linden Lab and other virtual-world companies are also developing versions that run on existing mobile phones.
Researchers at a recent virtual worlds conference at MIT said that special eyewear, display "badges," and speakers worn about the neck will allow us to live more fully through our avatars - those idealized versions of ourselves that typically boast better proportions than the saggy originals.
Read full article
Mar 16, 2007
Mediamatic organizes a new workshop--Hybrid World Lab--in which participants develop prototypes for hybrid-world media applications. Where the virtual world and the physical world used to be quite separate realms, they are quickly becoming two faces of the same hybrid coin. This workshop investigates the increasingly intimate fusion of digital and physical space from the perspective of a media maker.
The workshop is an intensive process in which participants explore the possibilities of the physical world as an interface to online media: location-based media, everyday objects as media interfaces, urban screens, and cultural applications of RFID technology. Every morning, lectures bring in new perspectives, project presentations, and introductions to the hands-on workshop tools. Every afternoon, the participants work on their own workshop projects. In five workshop days, every participant develops a prototype of a hybrid-world media project, assisted by outstanding international trainers, lecturers, and technical assistants. The workshop closes with a public presentation in which the issues are discussed and the results are shown.
Topics: Some of the topics that will be investigated in this workshop are:
- Cultural application and impact of RFID technology, and the internet of things
- Using RFID in combination with other kinds of sensors
- Ubiquitous computing (ubicomp) and ambient intelligence: services and applications that use chips embedded in household appliances and in public space
- Locative media tools, car navigation systems, GPS tools, location-sensitive mobile phones
- The web as interface to the physical world: geotagging and mashups with Google Maps & Google Earth
- Games in hybrid space
Mar 15, 2007
From Technology Review
Nokia wants to superimpose digital information on the real world using a smart cell phone.
A prototype uses a GPS sensor, a compass, and accelerometers. Using data from these sensors, the phone can calculate the location of just about any object its camera is aimed at:
Last October, a team led by Markus Kähäri unveiled a prototype of the system at the International Symposium on Mixed and Augmented Reality. The team added a GPS sensor, a compass, and accelerometers to a Nokia smart phone. Using data from these sensors, the phone can calculate the location of just about any object its camera is aimed at. Each time the phone changes location, it retrieves the names and geographical coördinates of nearby landmarks from an external database. The user can then download additional information about a chosen location from the Web--say, the names of businesses in the Empire State Building, the cost of visiting the building's observatories, or hours and menus for its five eateries.
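The sensor fusion described above comes down to simple spherical geometry: the GPS fix gives the phone's position, the compass gives the camera's heading, and each nearby landmark is tested against the camera's horizontal field of view. A minimal sketch (the function names and the 50-degree field of view are assumptions, not Nokia's implementation):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, degrees in [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def camera_is_aimed_at(heading_deg, target_bearing_deg, fov_deg=50.0):
    """True if the landmark's bearing falls inside the camera's field of view."""
    # Wrap the angular difference into [-180, 180] before comparing.
    diff = abs((target_bearing_deg - heading_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2
```

With this, the phone would compute `bearing_deg` from its GPS fix to each landmark fetched from the database, then label only those for which `camera_is_aimed_at` is true.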
Oct 13, 2006
Saturday 21 October 2006, Erasmus Medical Center, Rotterdam: 11.00 - 12.30: Bioinformatics dept., Faculty building, 15th floor 12.30 - 18.00: Sophia Children's Hospital, Cinema 3rd floor.
Test_Lab: Immersive Mixed Reality Environments is the product of a unique collaboration between the Erasmus Medical Centre and V2_, Institute for the Unstable Media, with the aim of opening a dialogue between scientists and artists who apply Virtual Reality in their research and art practice. The event consists of demonstrations by Virtual Reality artists and scientists providing hands-on experience with immersive mixed reality environments, and presentations by renowned international speakers presenting the latest in Virtual Reality in science and art. See below for the program details, a description of the projects that will be demonstrated, and the invited speakers who will present their work in the seminar.
Test_Lab is a bi-monthly public event hosted by V2_ that provides an informal setting to demonstrate, test, present, and/or discuss artistic research and development (aRt&D).
The event is free of charge, but registration is required before the 19th of October. For further information and registration please contact Remco Beeskow at email@example.com (tel: +31 (0)10 206 72 72) or Fred Balvert at f.balvert[at]erasmusmc.nl (tel: +31(0)6 41431721). Also visit www.v2.nl and www.erasmusmc.nl
Oct 01, 2006
Re-blogged from Information Aesthetics
SCACS is a "Social Context-Aware Communication System" that collects information on social networks (e.g. academic co-authorship networks) & visualizes it on wearable interfaces to facilitate face-to-face communication among people in physical environments. RFID sensors detect the identity of specific people (i.e. authors) nearby, & a wearable computer transforms the complex social network graphs into treemaps, which are then shown as augmented reality on a wearable interface (or head-mounted display).
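The graph-to-treemap step can be illustrated with the classic slice-and-dice layout: a weighted hierarchy becomes nested rectangles whose areas are proportional to the weights, alternating horizontal and vertical cuts at each level. This is an illustrative sketch of the general technique, not the SCACS algorithm itself:

```python
def leaf_weight(node):
    """Total weight of a node: a leaf is its own weight, a subtree sums its children."""
    return sum(leaf_weight(v) for v in node.values()) if isinstance(node, dict) else node

def treemap(node, x, y, w, h, horizontal=True):
    """Lay out a nested {name: weight-or-subtree} hierarchy as slice-and-dice
    treemap rectangles (x, y, width, height), alternating cut direction per level."""
    if not isinstance(node, dict):          # leaf: fills the rectangle it was given
        return [(x, y, w, h)]
    total = sum(leaf_weight(v) for v in node.values())
    rects, offset = [], 0.0
    for child in node.values():
        frac = leaf_weight(child) / total
        if horizontal:                      # cut left-to-right
            rects += treemap(child, x + offset, y, w * frac, h, False)
            offset += w * frac
        else:                               # cut top-to-bottom
            rects += treemap(child, x, y + offset, w, h * frac, True)
            offset += h * frac
    return rects
```

For a co-authorship graph, the hierarchy might be author → co-authors, with edge weights as leaf values; the resulting rectangles tile the small wearable display completely, which is what makes treemaps attractive on limited screen space.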
link: aist-nara.ac.jp (pdf)
Sep 06, 2006
Eli Peli, an ophthalmologist and bioengineer at Harvard Medical School in Boston, has designed an augmented reality device to help patients with tunnel vision, a condition which narrows a person’s field of view.
The system, consisting of glasses fitted with a small camera and a transparent display on one lens, works by superimposing computer-generated images over real scenes.
According to preliminary test results, which will be reported in the September issue of Investigative Ophthalmology & Visual Science, patients who tried the system were able to search for objects far more quickly.
Read the original article
Jul 20, 2006
From the project's website
CatchBob is an experimental platform for eliciting the collaborative behavior of people working together on a mobile activity. Running on mobile devices (iPAQ, Tablet PC), it is a collaborative hunt in which groups of three people have to find and circle a virtual object on our campus.
Videos of CatchBob!:
- The long video (3:30, .mov, 15.8Mb) can be downloaded here.
- The short version (1:20, .mov, 8.3Mb) can be downloaded here.
Jul 05, 2006
The MagicBook explores seamless transition between reality and virtual reality. When users look at the pages of a real book through a hand-held display, they are able to see virtual content superimposed on the real pages: this is augmented reality. When they see an augmented reality scene they like, users can fly into the scene and experience it as an immersive virtual environment. Currently the user can transition smoothly between these two fixed viewing modes: the augmented-reality view and the virtual-reality view.
May 01, 2006
Free Network Visible Network is a project that combines different tools and processes to visualize, floating in space, the information exchanged between users of a network. People can experience in a new, exciting way how colorful virtual objects, representing the digital data, fly around them. These virtual objects change their shape, size, and color according to the characteristics of the information circulating in the network.
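Such a data-to-visuals mapping could look something like the sketch below. This is purely hypothetical (the project's actual visual encoding is not documented here): each captured packet is reduced to a small set of glyph attributes that a 3D renderer could then draw.

```python
def packet_to_glyph(protocol, size_bytes, ttl):
    """Map one captured packet to a virtual object's visual attributes.

    Hypothetical mapping for illustration: protocol picks the shape,
    payload size drives the glyph's size, and TTL is spread across hues.
    """
    shapes = {"tcp": "cube", "udp": "sphere", "icmp": "cone"}
    return {
        "shape": shapes.get(protocol, "tetrahedron"),
        "size": min(size_bytes / 1500.0, 1.0),  # normalize by a typical MTU
        "color_hue": (ttl * 137) % 360,         # 137 spreads hues across TTLs
    }
```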
Mar 07, 2006
"Viennese computer scientist Daniel Wagner has figured out a way to show a virtual character on an i-mate SP5 cellphone, and when you move around with the cellphone, it appears that you're floating around this virtual character in 3D. Other people with cellphones can also see this character from their points of view."
Feb 14, 2006
5 June 2006, Bucharest, Romania
MIXER is an international workshop dedicated to the design and engineering of Mixed Reality Systems. Following on from the first workshop, MIXER'2004, jointly organized with ACM-IUI'2004 and CADUI'2004, the primary goal of MIXER'2006 is to bring together researchers in the area of Mixed Reality (MR) systems (including augmented reality, augmented virtuality, augmented video, and tangible systems) to identify and articulate key research challenges for the design and engineering of MR systems. Also, in order to develop reliable theoretical foundations for MR systems, there is a need to gather the results of MR research and development to create a body of reusable knowledge for the design and construction of future systems. This body of knowledge may include:
- MR design and engineering issues;
- Research challenges for design methods and tools;
- A corpus of applicable usability knowledge (e.g., guidelines, design heuristics).
MIXER'2006 solicits contributions that discuss theoretical, methodological, technical, or application-oriented considerations relevant to MR system design, MR system engineering and implementation, and MR system evaluation, especially issues arising from new devices, techniques, or environments.
Nov 09, 2005
Authors: Nguyen TH, Qui TC, Xu K, Cheok AD, Teo SL, Zhou Z, Mallawaarachchi A, Lee SP, Liu W, Teo HS, Thang le N, Li Y, Kato H
This paper presents a real-time system for capturing humans in 3D and placing them into a mixed reality environment. The subject is captured by nine cameras surrounding her. Looking through a head-mounted display with a front-facing camera pointing at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques to improve quality and speed up the whole system, which runs at around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaboration system, we also describe an application of the system in art and entertainment, named Magic Land, a mixed reality environment where captured human avatars and 3D computer-generated virtual animations can form an interactive story and play with each other. The system demonstrates many technologies in human-computer interaction: mixed reality, tangible interaction, and 3D communication. The results of the user study not only emphasize the benefits but also address some issues of these technologies.
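Shape-from-silhouette computes the visual hull: the intersection of the silhouette cones cast back from each camera. A toy 2D version with two orthographic "cameras" (nothing like the authors' nine-camera real-time implementation, but it shows the carving principle) might look like this:

```python
def visual_hull(silhouettes, grid_size):
    """Carve a grid_size x grid_size cell grid against axis-aligned silhouettes.

    `silhouettes` maps a viewing axis ('x' or 'y') to the set of covered
    indices along the other axis, i.e. the 1D silhouette that camera sees.
    A cell survives only if EVERY camera sees its projection inside the
    silhouette -- the defining property of the visual hull.
    """
    hull = set()
    for i in range(grid_size):          # row (y index)
        for j in range(grid_size):      # column (x index)
            # Camera looking along x sees column j; camera along y sees row i.
            if j in silhouettes["x"] and i in silhouettes["y"]:
                hull.add((i, j))
    return hull
```

With only two views the hull is a coarse bounding box of the true shape; each additional camera (the paper uses nine) carves the hull closer to the subject.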
Oct 26, 2005
Gulliver's Box, developed by Mixed Reality Lab, the Human Interface Lab of Osaka, the Ars Electronica Futurelab, and Zaxel, is a mixed reality theatrical application that allows users to modify characters and give them specific features. The environment can be revised and customized according to the users' desires each time. Through the individual interfaces, players are introduced into the mixed reality environment at different levels of interaction.
Gulliver's Box has won a mention at the World Summit Award
Oct 25, 2005
Microsoft researcher Andrew D. Wilson has developed a portable augmented reality system that uses a projector and computer vision technology to display interactive images on any surface, such as floors, whiteboards, and walls. The system, called PlayAnywhere, is a single portable unit, needs no mounted cameras, and requires no calibration. Besides edutainment applications, this portable AR system could have interesting applications in the neurorehabilitation of brain-injured patients. I used a similar approach in the I-Learning project, in which we developed an augmented-reality display to guide physical and mental practice exercises for patients with upper-limb hemiplegia following stroke.
PlayAnywhere: A Compact Interactive Tabletop Projection-Vision System, Symposium on User Interface Software and Technology (UIST 2005), Seattle, October 23-26, 2005 PDF