I would like to thank everyone involved but particularly the students and speakers who made this a very successful and enjoyable event. See you all next year in Paris!
For the first time, a person lying in an fMRI machine has controlled a robot hundreds of kilometers away using thought alone.
"The ultimate goal is to create a surrogate, like in Avatar, although that’s a long way off yet,” says Abderrahmane Kheddar, director of the joint robotics laboratory at the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan.
Teleoperated robots, those that can be remotely controlled by a human, have been around for decades. Kheddar and his colleagues are going a step further. “True embodiment goes far beyond classical telepresence, by making you feel that the thing you are embodying is part of you,” says Kheddar. “This is the feeling we want to reach.”
To attempt this feat, researchers with the international Virtual Embodiment and Robotic Re-embodiment project used fMRI to scan the brain of university student Tirosh Shapira as he imagined moving different parts of his body. He attempted to direct a virtual avatar by thinking of moving his left or right hand or his legs.
The scanner works by measuring changes in blood flow to the brain’s primary motor cortex; using these signals, the team created an algorithm that could distinguish each imagined movement. The commands were then sent via an internet connection to a small robot at the Béziers Technology Institute in France.
The set-up allowed Shapira to control the robot in near real time with his thoughts, while a camera on the robot’s head allowed him to see from the robot’s perspective. When he thought of moving his left or right hand, the robot moved 30 degrees to the left or right. Imagining moving his legs made the robot walk forward.
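To make that control loop concrete, here is a minimal Python sketch of how decoded motor-imagery classes might be mapped to robot commands and relayed over the network. Everything in it is an assumption for illustration: the class names, the command format, and the host and port are hypothetical, not the project's actual protocol.

```python
import socket

# Hypothetical mapping from decoded motor-imagery classes to robot commands,
# mirroring the behaviour described above: hand imagery turns the robot
# 30 degrees, leg imagery walks it forward.
COMMANDS = {
    "left_hand":  {"action": "turn", "degrees": -30},
    "right_hand": {"action": "turn", "degrees": 30},
    "legs":       {"action": "walk_forward"},
}

def send_command(decoded_class: str, host: str, port: int) -> None:
    """Send the command for a decoded thought to the remote robot."""
    command = COMMANDS.get(decoded_class)
    if command is None:
        return  # ignore classes the decoder could not identify
    with socket.create_connection((host, port)) as conn:
        conn.sendall(repr(command).encode("utf-8"))

# Example: the fMRI decoder has just classified a scan as "left_hand".
# send_command("left_hand", "robot.example.org", 9000)  # hypothetical address
```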
Read further at: http://www.newscientist.com/article/mg21528725.900-robot-avatar-body-controlled-by-thought-alone.html
Illustrations from Microsoft's patent show the rough schematics for both a helmet-based display and one embedded in a pair of glasses (credit: Microsoft)
A new Microsoft patent reveals that the company has been working on two styles of headset: an aviation-style helmet aimed at Xbox gamers, and one that resembles a pair of sunglasses for use with smartphones, MP3 players and other future devices.
In the patent, Microsoft states that a compact display system may be coupled into goggles, a helmet, or other eyewear. These configurations enable the wearer to view images from a computer, media player, or other electronic device with privacy and mobility. When adapted to display two different images concurrently — one for each eye — the system may be used for stereoscopic display (e.g., virtual-reality) applications.
The specific objectives of the Summer School are to provide selected, highly motivated participants with hands-on experience of question-driven Human-Computer Confluence projects, applications and experimental paradigms, as well as to gather project leaders, researchers and students to work together on a list of interdisciplinary challenges in the field of HCC. Participants will be assigned to different teams, working creatively and collaboratively on specific topics of interest.
The 1st Summer School will admit up to 40 Ph.D. student attendees interested in the emerging symbiotic relation between humans and computing devices. There is no registration fee for the Summer School, and financial aid towards travel and accommodation will be available for a significant number of students.
About Human Computer Confluence
HCC, Human-Computer Confluence, is an ambitious research program funded by the EU, studying how the emerging symbiotic relation between humans and computing devices can enable radically new forms of sensing, perception, interaction, and understanding.
The initiative aims to investigate and demonstrate new possibilities emerging at the confluence between the human and technological realms. It will examine new modalities for individual and group perception, action and experience in augmented, virtual spaces. Such virtual spaces would span the virtual-reality continuum, also extending to purely synthetic but believable representations of massive, complex and dynamic data. Human-Computer Confluence fosters interdisciplinary research (spanning presence research, neuroscience, machine learning and computer science) towards delivering unified experiences and inventing radically new forms of perception/action.
BirdBrain Technologies, a spin-off of Carnegie Mellon University, has developed a device called Brainlink that allows users to remotely control robots and other gadgets (including TVs, cable boxes, and DVD players) with an Android-based smartphone. This is achieved through a small triangular controller that attaches to the gadget and communicates over Bluetooth, with a range of 30 feet.
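As a rough illustration of how a phone-side app might talk to such a controller, here is a sketch using pyserial over a Bluetooth serial link. The command bytes and port name are invented for the example; the real Brainlink protocol may well differ.

```python
import serial  # pyserial; the controller pairs as a Bluetooth serial port

# Invented one-byte commands for the example; the real protocol may differ.
CMD_ROBOT_FORWARD = b"F"
CMD_TV_POWER = b"P"

def send_gadget_command(port: str, command: bytes) -> None:
    """Write a command byte to the controller over its Bluetooth serial link."""
    with serial.Serial(port, baudrate=115200, timeout=1) as link:
        link.write(command)

# Example (the port name varies by platform and pairing):
# send_gadget_command("/dev/rfcomm0", CMD_TV_POWER)
```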
Researchers at MIT have developed a new gesture-based system that combines a standard webcam, colored lycra gloves, and software that includes a dataset of glove images. This simple and cheap system translates hand gestures into a computer-generated 3D model of the hand in real time. Once the webcam has captured an image of the glove, the software matches it with the corresponding hand position stored in the visual dataset and returns that pose. This approach reduces computation time, as there is no need to calculate the relative positions of the fingers, palm, and back of the hand on the fly.
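The lookup-table idea is simple enough to sketch. Below is a toy Python version under stated assumptions: the dataset arrays are random placeholders standing in for the precomputed glove thumbnails and their associated hand poses, and the matching is plain nearest-neighbor over pixel values.

```python
import numpy as np

# Placeholder dataset: each tiny, downsampled glove image is paired with
# the 3D hand pose that produced it (random stand-ins for illustration).
dataset_images = np.random.rand(1000, 16 * 16 * 3)  # flattened thumbnails
dataset_poses = np.random.rand(1000, 26 * 3)        # joint coordinates

def lookup_pose(frame: np.ndarray) -> np.ndarray:
    """Return the stored pose of the nearest-looking glove image.

    Instead of computing finger positions on the fly, find the most
    similar stored picture and reuse its precomputed pose.
    """
    query = frame.reshape(-1)
    distances = np.linalg.norm(dataset_images - query, axis=1)
    return dataset_poses[np.argmin(distances)]

# Example with a synthetic camera frame:
pose = lookup_pose(np.random.rand(16, 16, 3))
```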
The inexpensive gesture-based recognition system developed at MIT could have applications in games, industry and education. I also envisage a potential application in the field of motor rehabilitation.
Researchers at Carnegie Mellon University and Microsoft’s Redmond research lab have developed a working prototype of a system called Skinput that effectively turns your body surface into both screen and input device.
Skinput makes use of a microchip-sized pico projector embedded in an armband to beam an image onto a user’s forearm or hand. When the user taps a menu item or other control icon on the skin, an acoustic detector also in the armband analyzes the ultralow-frequency sound to determine which region of the display has been activated.
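In spirit, the recognition step is a classification problem: learn which vibration signature corresponds to which tap location. Here is a minimal sketch, with random placeholder features and labels, and a scikit-learn support vector classifier standing in for whatever the authors actually used.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data: acoustic feature vectors extracted from the
# armband's vibration sensors, labelled with the skin region that was tapped.
features = np.random.rand(200, 16)           # stand-in feature vectors
regions = np.random.randint(0, 5, size=200)  # stand-in region labels 0-4

# Learn which vibration signature maps to which tap location (the choice
# of SVC here is an assumption for the sketch).
classifier = SVC().fit(features, regions)

def locate_tap(feature_vector: np.ndarray) -> int:
    """Predict which on-skin control region produced this tap."""
    return int(classifier.predict(feature_vector.reshape(1, -1))[0])

print(locate_tap(np.random.rand(16)))
```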
The technology behind Skinput is described in a paper the group will present in April at the Computer-Human Interaction conference in Atlanta.
Researchers at the Massachusetts Institute of Technology have created a working prototype of a bidirectional LCD (captures and displays images) that allows a viewer to control on-screen objects without the need for any peripheral controllers or even touching the screen. In near Minority Report fashion, interaction is possible with just a wave of the hand.
The BiDi is inspired by emerging LCDs that use embedded optical sensors to detect multiple points of contact and exploits the spatial light modulation capability of LCDs to allow lensless imaging without interfering with display functionality. According to MIT researchers, this technology can lead to a wide range of applications, such as in-air gesture control of everything from CE devices like mobile phones to flat-panel TVs.
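As a toy illustration of in-air gesture control, the sketch below treats the embedded sensors' output as a grayscale image and estimates where a hovering hand is by averaging the bright pixels. This is an invented simplification, not the BiDi's actual lensless-imaging pipeline.

```python
import numpy as np

def hand_position(sensor_image: np.ndarray, threshold: float = 0.5):
    """Estimate the (row, col) centre of a hand hovering over the screen."""
    mask = sensor_image > threshold
    if not mask.any():
        return None  # no hand detected
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Example with a synthetic 8x8 sensor frame:
frame = np.zeros((8, 8))
frame[2:4, 5:7] = 1.0        # a hand-like bright patch
print(hand_position(frame))  # -> (2.5, 5.5)
```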
In this very interesting keynote given at the recent Game Developers Conference, Jane McGonigal discusses the role of Positive Psychology in gaming. Another significant sign of how the world of ICT is embracing the perspective of Positive Technology...
The Reactable is a revolutionary new electronic musical instrument designed to create and perform the music of today and tomorrow. It combines state of the art technologies with a simple and intuitive design, which enables musicians to experiment with sound, change its structure, control its parameters and be creative in a direct and refreshing way, unlike anything you have ever known before.
The Reactable uses a so-called tangible interface, in which the musician controls the system by manipulating physical objects, or pucks. The instrument is based on a translucent and luminous round table; by placing pucks on the Reactable surface, turning them and connecting them to each other, performers can combine different elements such as synthesizers, effects, sample loops or control elements to create a unique and flexible composition.
As soon as any puck is placed on the surface, it is illuminated and starts to interact with neighboring pucks according to their positions and proximity. These interactions are visible on the table surface, which acts as a screen, giving instant feedback about what is currently going on in the Reactable and turning music into something visible and tangible.
Additionally, performers can change the behavior of the objects by touching and interacting with the table surface, and because the Reactable technology is multi-touch, there is no limit to the number of fingers that can be used simultaneously. In fact, the Reactable was specially designed so that it could be used by several performers at the same time, opening up a whole new universe of pedagogical, entertaining and creative possibilities with its collaborative and multi-user capabilities.
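The proximity-based interaction is easy to picture in code. Here is a minimal sketch with an invented puck model: each tangible object has a type and a position on the table, and two pucks are linked whenever they are close enough to affect one another.

```python
import math

# Invented puck model: a type and a normalized position on the table.
PUCKS = [
    {"type": "synthesizer", "pos": (0.30, 0.40)},
    {"type": "filter",      "pos": (0.35, 0.45)},
    {"type": "sample_loop", "pos": (0.80, 0.10)},
]

def connections(pucks, max_distance=0.15):
    """Pair up pucks that are near enough to interact."""
    links = []
    for i, a in enumerate(pucks):
        for b in pucks[i + 1:]:
            if math.dist(a["pos"], b["pos"]) <= max_distance:
                links.append((a["type"], b["type"]))
    return links

print(connections(PUCKS))  # -> [('synthesizer', 'filter')]
```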
I am probably not the first to post about Microsoft's NATAL project, but who cares?
The fact is, I lack the words to express how deeply impressed I am by this new gaming technology.
I have no idea if or when this product will come to the shops, but it's hard to believe that Microsoft will have any real competitors left in the game industry after its launch.
Announced during Microsoft's annual E3 press conference, Project Natal is the culmination of several years of R&D by an Israeli start-up called 3DV Systems, which Microsoft recently acquired. Microsoft Xbox Senior Vice President Don Mattrick stated that Project Natal would be compatible with every Xbox 360, but the cost remains top secret.
The technology (see video below) allows users to control games, movies, and anything else on their Xbox system with their body alone, without touching any hardware.
If it's a real product and not just a marketing invention, it could also have important applications in the field of cybertherapy, in particular for neuro-motor rehabilitation. The advantages of this technology are quite clear: there is nothing for the patient to wear, and it is possible to use motivational gaming scenarios of all kinds.
I was really excited to be there, because I consider Frontiers the most interesting interaction design event in Italy.
Frontiers is organized and produced by Leandro Agrò and Matteo Penzo, who are also the founders of the Idearium community, the largest e-community on Interaction Design in Italy.
(And, last but not least: Frontiers is completely free of charge; only registration is required. This is great, since it makes the event accessible to young students.)
The company Violet (best known for the Nabaztag) has invented the Mir:ror, an RFID reader that can be connected to a PC. RFID tags can be attached to any object and scripted to trigger applications and multimedia content automatically, or to communicate over the Internet.
This is a usage scenario described on the product's website:
"8:40 am – you’re getting ready to leave home. On your desk, next to your computer, a halo of light is quietly pulsating. You swiftly flash your car keys at this mysterious device. A voice speaks out: "today, rain 14°C". The voice continues: "you will get there in 15 minutes". Your computer screen displays an image from the webcam located along the route you’re planning to travel, while the voice reads out your horoscope for the day. At the same moment, your friends can see your social network profile update to "It’s 8:40, I’m leaving the house". At the office, your favourite colleague receives an email to say that you won’t be long. And finally, just as you walk through the door, your computer locks.
You personally "scripted" this morning’s scenario: you decided to give your car keys all these powers, because the time you pick them up signals the fact you’re soon going to leave the house.
What if you could obtain information, access services, communicate with the world, play or have fun just by showing things to a mirror, a Mir:ror which, as if by magic, could make all your everyday objects come alive, and connect them to the Internet’s endless wealth of possibilities?
Mir:ror is as simple to use as looking in the mirror - it gives access to information or triggers actions with disarming ease: simply place an object near to its surface. Mir:ror is a power conferred upon each of us to easily program the most ordinary of objects. The revolution of the Internet of Things suddenly becomes a simple, obvious, daily reality that’s within anyone’s reach."
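The "scripting" idea behind that morning scenario amounts to binding a tag ID to a list of actions. A minimal sketch follows; the tag IDs and action names are invented for illustration, with print statements standing in for the real handlers.

```python
# Invented script table: each RFID tag ID is bound to the actions to run
# when the object is flashed at the reader.
SCRIPTS = {
    "car_keys": [
        "speak_weather",
        "speak_travel_time",
        "show_route_webcam",
        "update_social_status",
        "email_colleague",
        "lock_computer",
    ],
}

def on_tag_scanned(tag_id: str) -> None:
    """Run every action the user attached to this object's tag."""
    for action in SCRIPTS.get(tag_id, []):
        print(f"running action: {action}")  # stand-in for the real handlers

on_tag_scanned("car_keys")
```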
Scientists from the Universities of York and Warwick have developed the first virtual reality system that allows users to see, hear, smell, taste and even touch. The prototype will be presented at Pioneers 09, an EPSRC showcase event to be held at London's Olympia Conference Centre on March 4.
If the prototype can really do what it promises, it can have widespread applications in education, business, medical visualization and cybertherapy.
Credit: Image courtesy of Engineering and Physical Sciences Research Council
This gesture-controlled display by G-Speak is the closest thing to Minority Report I have seen so far. The system consists of a DLP projector and is equipped with special gloves that incorporate reflective markers.
Artist Pyocotan has developed “Noriko-san,” a sleep mask with an electronic scrolling display that communicates the wearer’s destination to fellow passengers.
A new product by NTT, called “Firmo,” allows users to communicate with electronic devices by touching them. A card-sized transmitter carried in the user’s pocket transmits data across the surface of the human body. When the user touches a device, the electric field is converted back into a data signal that can be read by the device.
For now, a set of 5 card-sized transmitters and 1 receiver goes for around 800,000 yen ($8,000), but NTT expects the price to come down when they begin mass production.
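On the receiving side, the logic is presumably as simple as demodulating the field into the transmitter card's ID and then deciding what the touched device should do. A hypothetical sketch, with invented card IDs and an invented unlock policy:

```python
# Invented access policy: the receiver unlocks only for known card IDs.
AUTHORIZED_IDS = {"card-0042", "card-0107"}

def on_touch(decoded_id: str) -> bool:
    """Decide whether to unlock for the card ID decoded from the touch."""
    granted = decoded_id in AUTHORIZED_IDS
    print("access granted" if granted else "access denied")
    return granted

on_touch("card-0042")
```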
A neckband that translates thought into speech by picking up nerve signals has been used to demonstrate a "voiceless" phone call for the first time.
With careful training a person can send nerve signals to their vocal cords without making a sound. These signals are picked up by the neckband and relayed wirelessly to a computer that converts them into words spoken by a computerised voice.
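Conceptually, the decoding step matches an incoming nerve-signal feature vector against templates learned for each trained word, and hands the winner to a text-to-speech engine. A toy sketch with random placeholder templates (the real system's vocabulary and features are unknown to me):

```python
import numpy as np

# Placeholder templates: one averaged nerve-signal feature vector per
# trained word (random stand-ins; real ones would come from training).
TEMPLATES = {
    "hello": np.random.rand(32),
    "yes": np.random.rand(32),
    "no": np.random.rand(32),
}

def decode_word(nerve_features: np.ndarray) -> str:
    """Pick the trained word whose template best matches the signal."""
    return min(TEMPLATES, key=lambda w: np.linalg.norm(TEMPLATES[w] - nerve_features))

# The decoded word would then be spoken into the call by the computerised voice.
print(decode_word(np.random.rand(32)))
```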
Researchers at the University of Tokyo have created a smart video goggle system that records everything the wearer looks at, recognizes and assigns names to objects that appear in the video, and creates an easily searchable database of the recorded footage.
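The searchable database part is the easiest to picture: every recognised object becomes a timestamped record that can later be queried by name. A minimal sketch (the schema is my assumption, not the researchers' actual design):

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    timestamp: float  # seconds into the recording
    label: str        # name assigned by the object recognizer

index: list[Sighting] = []

def record(timestamp: float, label: str) -> None:
    index.append(Sighting(timestamp, label))

def search(label: str) -> list[float]:
    """Return every moment the wearer looked at the named object."""
    return [s.timestamp for s in index if s.label == label]

record(12.5, "coffee cup")
record(87.0, "keys")
record(140.2, "coffee cup")
print(search("coffee cup"))  # -> [12.5, 140.2]
```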