
Sep 18, 2006

Learning to perform a new movement with robotic assistance

Learning to perform a new movement with robotic assistance: comparison of haptic guidance and visual demonstration

By J Liu, SC Cramer and DJ Reinkensmeyer

Background: Mechanical guidance with a robotic device is a candidate technique for teaching people desired movement patterns during motor rehabilitation, surgery, and sports training, but it is unclear how effective this approach is compared to visual demonstration alone. Further, little is known about the motor learning and retention involved in either robot-mediated mechanical guidance or visual demonstration alone.

Methods: Healthy subjects (n = 20) attempted to reproduce a novel three-dimensional path after practicing it with mechanical guidance from a robot. Subjects viewed their arm as the robot guided it, so this "haptic guidance" training condition provided both somatosensory and visual input. Learning was compared to reproducing the movement following only visual observation of the robot moving along the path, with the hand in the lap (the "visual demonstration" training condition). Retention was assessed periodically by instructing the subjects to reproduce the path without robotic demonstration.

Results: Subjects improved in their ability to reproduce the path following practice in either the haptic guidance or the visual demonstration condition, as evidenced by a 30–40% decrease in spatial error across 126 movement attempts in each condition. Performance gains were not significantly different between the two techniques, although there was a nearly significant trend for the visual demonstration condition to outperform the haptic guidance condition (p = 0.09). The 95% confidence interval of the mean difference between the techniques was at most 25% of the absolute error in the last cycle. When asked to reproduce the path repeatedly following either training condition, the subjects' performance degraded significantly over the course of a few trials. The tracing errors were not random, but were consistent with a systematic evolution toward another path, as if being drawn to an "attractor path".

Conclusion: These results indicate that both forms of robotic demonstration can improve short-term performance of a novel desired path. The availability of both haptic and visual input during the haptic guidance condition did not significantly improve performance compared to visual input alone in the visual demonstration condition. Further, the motor system is inclined to repeat its previous mistakes following just a few movements without robotic demonstration, but these systematic errors can be reduced with periodic training.
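
The study's headline measure is spatial tracing error across repeated attempts. As a rough illustration only (not the authors' actual analysis code), the sketch below scores a reproduced 3D path against a target path by averaging each traced sample's distance to the nearest target sample; the helical target and noise level are invented for the example.

```python
# Illustrative sketch of a spatial tracing-error score; the metric details in
# Liu, Cramer & Reinkensmeyer's paper may differ.
import numpy as np

def tracing_error(traced: np.ndarray, target: np.ndarray) -> float:
    """Mean distance from each traced sample to its nearest target sample.

    Both arguments are (N, 3) arrays of 3D points along the paths.
    """
    diffs = traced[:, None, :] - target[None, :, :]   # (N_traced, N_target, 3)
    dists = np.linalg.norm(diffs, axis=2)             # all pairwise distances
    return float(dists.min(axis=1).mean())

# Example: a noisy reproduction of a helical target path.
t = np.linspace(0, 2 * np.pi, 200)
target = np.stack([np.cos(t), np.sin(t), t / np.pi], axis=1)
traced = target + np.random.normal(scale=0.05, size=target.shape)
print(f"mean spatial error: {tracing_error(traced, target):.3f}")
```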

Robots for ageing society

Via Pink Tentacle
 

Maid robot

The CIRT consortium, composed of Tokyo University and a group of seven companies (Toyota, Olympus, Sega, Toppan Printing, Fujitsu, Matsushita, and Mitsubishi), has started a project to develop robotic assistants for Japan's aging population.

The robots envisioned by the project should support the elderly with housework and serve as personal transportation capable of replacing the automobile.

The Ministry of Education, Culture, Sports, Science and Technology (MEXT) will be the major sponsor of the research, whose total cost is expected to be about 1 billion yen (US$9 million) per year.

Aug 02, 2006

The Huggable

Via Siggraph2006 Emerging Technology website

 

[Image: the Huggable robotic teddy bear]

 

The Huggable is a robotic pet developed by MIT researchers for therapy applications in children's hospitals and nursing homes, where pets are not always available. The robotic teddy bear has full-body sensate skin and smooth, quiet voice-coil actuators, which allow it to relate to people through touch. Further features include "temperature, electric field, and force sensors which it uses to sense the interactions that people have with it. This information is then processed for its affective content, such as, for example, whether the Huggable is being petted, tickled, or patted; the bear then responds appropriately".

The Huggable was unveiled at the Siggraph2006 conference in Boston. From the conference website:

Enhanced Life
Over the past few years, the Robotic Life Group at the MIT Media Lab has been developing "sensitive skin" and novel actuator technologies in addition to our artificial-intelligence research. The Huggable combines these technologies in a portable robotic platform that is specifically designed to leave the lab and move to healthcare applications.

Goals
The ultimate goal of this project is to evaluate the Huggable's usefulness as a therapy for those who have limited or no access to companion-animal therapy. In collaboration with nurses, doctors, and staff, the technology will soon be applied in pilot studies at hospitals and nursing homes. By combining the Huggable's data-collection capabilities with its sensing and behavior, it may be possible to detect early changes in a person's behavior or the onset of depression. The Huggable may also improve day-to-day life for those who may spend many hours in a nursing home alone staring out a window, and, like companion-animal therapy, it could increase their interaction with other people in the facility.

Innovations
The core technical innovation is the "sensitive skin" technology, which consists of temperature, electric-field, and force sensors all over the surface of the robot. Unlike other robotic applications where the sense of touch is concerned with manipulation or obstacle avoidance, the sense of touch in the Huggable is used to determine the affective content of the tactile interaction. The Huggable's algorithms can distinguish petting, tickling, scratching, slapping, and poking, among other types of tactile interactions. By combining the sense of touch with other sensors, the Huggable detects where a person is in relation to itself and responds with relational touch behaviors such as nuzzling.
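
The write-up does not specify how these algorithms work internally; the toy sketch below only illustrates the general idea of mapping skin-sensor features to affective touch classes. The feature names, thresholds, and rules are invented for illustration.

```python
# Toy affective-touch classifier; feature names, classes, and thresholds are
# hypothetical, not the Huggable's actual algorithm.
from dataclasses import dataclass

@dataclass
class TouchFeatures:
    contact_area: float   # fraction of skin sensors in contact, 0..1
    mean_force: float     # normalized pressure, 0..1
    frequency_hz: float   # rate of contact on/off events

def classify_touch(f: TouchFeatures) -> str:
    if f.mean_force > 0.8 and f.frequency_hz < 1.0:
        return "poking" if f.contact_area < 0.1 else "slapping"
    if f.frequency_hz > 4.0:
        return "tickling" if f.contact_area < 0.2 else "scratching"
    if f.contact_area > 0.3 and f.mean_force < 0.5:
        return "petting"
    return "patting"

print(classify_touch(TouchFeatures(contact_area=0.4, mean_force=0.3,
                                   frequency_hz=0.5)))  # -> petting
```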

Most robotic companions use geared DC motors, which are noisy and easily damaged. The Huggable uses custom voice-coil actuators, which provide soft, quiet, and smooth motion. Most importantly, if the Huggable encounters a person when it tries to move, there is no risk of injury to the person.

Another core technical innovation is the Huggable's combination of 802.11g networking with a robotic companion. This allows the Huggable to be much more than a fun, interactive robot: it can send live video and data about the person's interactions to the nursing staff. In this mode, the Huggable functions as a team member, working with the nursing home or hospital staff and the patient or resident to promote the Huggable owner's overall health.
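
The protocol the Huggable actually uses over its 802.11g link is not described; as a sketch of what such a reporting channel could look like, the snippet below sends interaction summaries as JSON datagrams to a nursing-station address. The address, port, and message schema are assumptions.

```python
# Hypothetical reporting channel: interaction summaries sent as JSON over UDP.
# The station address, port, and schema are invented for illustration.
import json
import socket
import time

STATION_ADDR = ("192.168.1.50", 9999)  # assumed nursing-station endpoint

def report_interaction(kind: str, duration_s: float) -> None:
    msg = json.dumps({
        "robot_id": "huggable-01",
        "timestamp": time.time(),
        "interaction": kind,       # e.g. "petting", "tickling"
        "duration_s": duration_s,
    }).encode()
    # UDP keeps the robot responsive even if the station is briefly offline.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, STATION_ADDR)

report_interaction("petting", 12.5)
```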

Vision
As poorly staffed nursing homes and hospitals become larger and more overcrowded, new methods must be invented to improve the daily lives of patients and residents. The Huggable is one of these technological innovations. Its ability to gather information and share it with the nursing staff can help detect problems and report emergencies. The information can also be stored for later analysis by, for example, researchers who are studying pet therapy.


Jun 13, 2006

Robot with the human touch feels just like us


From: Times (UK)

A touch sensor developed to match the sensitivity of the human finger is set to herald the age of the robotic doctor.

Until now, robots have been severely handicapped by their inability to feel objects with anything like the accuracy of their human creators. The very best are unable to beat the dexterity of the average six-year-old at tying a shoelace or building a house of cards.

But all that could change with the development by nanotechnologists of a device that can “feel” the shape of a coin down to the detail of the letters stamped on it. The ability to feel with at least the same degree of sensitivity as a human finger is crucial to the development of robots that can take on complicated tasks such as open heart surgery.


 

Read the full article


Jun 05, 2006

HUMANOIDS 2006


HUMANOIDS 2006 - Humanoid Companions

2006 IEEE-RAS International Conference on Humanoid Robots, December 4-6, 2006, University of Genova, Genova, Italy.

From the conference website

The 2006 IEEE-RAS International Conference on Humanoid Robots will be held on December 4 to 6, 2006 in Genova, Italy. The conference series started in Boston in the year 2000, traveled through Tokyo (2001), Karlsruhe/Munich (2003), Santa Monica (2004), and Tsukuba (2005) and will dock in Genoa in 2006.

The conference theme, Humanoid Companions, specifically addresses aspects of human-humanoid mutual understanding and co-development.

Papers as well as suggestions for tutorials and workshops from academic and industrial communities and government agencies are solicited in all areas of humanoid robots. Topics of interest include, but are not limited to:


* Design and control of full-body humanoids
* Anthropomorphism in robotics (theories, materials, structure, behaviors)
* Interaction between life-science and robotics
* Human - humanoid interaction, collaboration and cohabitation
* Advanced components for humanoids (materials, actuators, portable energy storage, etc)
* New materials for safe interaction and physical growth
* Tools, components and platforms for collaborative research
* Perceptual and motor learning
* Humanoid platforms for robot applications (civil, industrial, clinical)
* Cognition, learning and development in humanoid systems
* Software and hardware architectures for humanoid implementation

Important Dates
* June 1st, 2006 - Proposals for Tutorials/Workshops
* June 15th, 2006 - Submission of full-length papers
* Sept. 1st, 2006 - Notification of Paper Acceptance
* October 15th, 2006 - Submission of final camera-ready papers
* November 1st, 2006 - Deadline for advance registration

Paper Submission
Submitted papers MUST BE in Portable Document Format (PDF). NO OTHER FORMATS WILL BE ACCEPTED. Papers must be written in English. Six (6) camera-ready pages, including figures and references, are allowed for each paper. Up to two (2) additional pages are allowed for a charge of 80 euros for each additional page.
Papers over 8 pages will NOT be reviewed/accepted.

Detailed instructions for paper submission and format can be found on the conference website.

Exhibitions
There will be an exhibition site at the conference, and promoters are encouraged to display state-of-the-art products and services in all areas of robotics and automation. Reservations for space and further information may be obtained from the Exhibits Chair and on the conference website.

Video Submissions
Video submissions should present a documentary-like report on a piece of valuable work relevant to the humanoids community as a whole.
Video submissions should be in .avi or MPEG-4 format and should not exceed 5 MB.

INQUIRIES:
Please contact the General Co-Chairs and the Program Co-Chairs at humanoids06@listes.epfl.ch

ORGANIZATION:

General Co-Chairs:
Giulio Sandini, (U. Genoa, Italy)
Aude Billard, (EPFL, Switzerland)

Program Co-Chairs:
Jun-Ho Oh (KAIST, Korea)
Giorgio Metta (University of Genoa, Italy)
Stefan Schaal (University of Southern California, USA)
Atsuo Takanishi (Waseda University, Japan)

Tutorials/Workshops Co-Chairs:
Rüdiger Dillmann (University of Karlsruhe, Germany)
Alois Knoll (TUM, Germany)

Exhibition Co-Chairs:
Cecilia Laschi (Scuola Superiore S. Anna, Pisa, Italy)
Matteo Brunnettini (U. Genoa, Italy)

Honorary Chairs:
George Bekey (USC, USA)
Hirochika Inoue (JSPS, Japan)
Friedrich Pfeiffer (TU Munich, Germany)

Local Arrangements Co-Chairs:
Giorgio Cannata, (U. Genoa, Italy)
Rezia Molfino, (U. Genoa and SIRI, Italy)


Jan 20, 2006

Humanoids 2006 conference

2006 IEEE-RAS International Conference on Humanoid Robots, December 4-6, 2006, University of Genova, Genova, Italy.

The 2006 IEEE-RAS International Conference on Humanoid Robots will be held on December 4 to 6, 2006 in Genova, Italy. The conference series started in Boston in the year 2000, traveled through Tokyo (2001), Karlsruhe/Munich (2003), Santa Monica (2004), and Tsukuba (2005) and will dock in Genoa in 2006.

The conference theme, Humanoid Companions, specifically addresses aspects of human-humanoid mutual understanding and co-development.

To facilitate the exchange of ideas in the diverse fields of humanoid technologies, the conference structure will remain single-track, with ample space allocated (both at the conference and in the proceedings) for poster presentations. The first day of the conference will be devoted to tutorials and workshops.

Papers as well as suggestions for tutorials and workshops from academic and industrial communities and government agencies are solicited in all areas of humanoid robots. Topics of interest include, but are not limited to:
* Design and control of full-body humanoids
* Anthropomorphism in robotics (theories, materials, structure, behaviors)
* Interaction between life-science and robotics
* Human – humanoid interaction, collaboration and cohabitation
* Advanced components for humanoids (materials, actuators, portable energy storage, etc)
* New materials for safe interaction and physical growth
* Tools, components and platforms for collaborative research
* Perceptual and motor learning
* Humanoid platforms for robot applications (civil, industrial, clinical)
* Cognition, learning and development in humanoid systems
* Software and hardware architectures for humanoid implementation

Important Dates
* June 1st, 2006 - Proposals for Tutorials/Workshops
* June 15th, 2006 - Submission of full-length papers
* Sept. 1st, 2006 - Notification of Paper Acceptance
* October 15th, 2006 - Submission of final camera-ready papers
* November 1st, 2006 - Deadline for advance registration

Paper Submission
All papers must be submitted electronically in PDF format by June 1st, 2006. The maximum number of pages is limited to six, including figures. A maximum of two additional pages will be permitted at an extra page charge of €80 per page. Detailed instructions for paper submission and format can be found on the conference website.

http://www.humanoids2006.org/

Jan 03, 2006

Can robot demonstrate self-awareness?

Junichi Takeno and a team of researchers at Meiji University in Japan are developing a robot that can recognize the difference between a mirror image of itself and another robot that looks just like it.
This so-called mirror-image cognition is based on artificial nerve cell groups built into the robot's computer brain that give it the ability to recognize itself and acknowledge others.

According to Junichi Takeno and his co-workers, the ground-breaking technology could eventually lead to robots able to express emotions.

Read full story

Dec 22, 2005

Practicing medicine on robo-patients

Via RPTT
These days, it seems that robots are very busy delivering drugs in hospitals or replacing nurses, doctors, and even surgeons. But robots can also replace human patients. For example, at McMaster University in Canada, medical students are using robo-patients to practice their clinical skills before they reach human patients. The simulator lab training center features a $100,000 computerized, human-like robot that mimics bodily functions such as breathing and heartbeat. There are also plans at McMaster and other universities to extend this kind of training program to all kinds of medical disciplines with a whole family of robo-patients. Read more to discover other human patient simulators.

Dec 06, 2005

Robo-patients Allow Medical Students To Practise Until Perfect

via Science Daily

Robotic, simulated patients are allowing students in the Michael G. DeGroote School of Medicine to practise clinical skills before they reach human patients. A simulator lab training centre set up by the anesthesia department allows students to experience the challenges of working in a hospital operating room in a setting that looks and functions as close as possible to the real thing...

Read full article

Uncanny valley

From Wikipedia

The Uncanny Valley is a principle of robotics concerning the emotional response of humans to robots and other non-human entities. It was theorized by Japanese roboticist Masahiro Mori in 1970. The principle states that as a robot is made more humanlike in its appearance and motion, the emotional response from a human being to the robot becomes increasingly positive and empathic, until a point is reached at which the response suddenly becomes strongly repulsive; as the appearance and motion become indistinguishable from those of a human being, the emotional response becomes positive once more and approaches human-human empathy levels.

[Figure: Emotional response of human subjects plotted against anthropomorphism of a robot, following Mori's results. The Uncanny Valley is the region of negative emotional response for robots that seem "almost human".]

This dip of repulsive response, aroused by robots whose appearance and motion lie between "barely human" and "fully human", is called the Uncanny Valley. The name harks back to the notion that a robot which is "almost human" will seem overly "strange" to a human being and thus fail to evoke the empathic response required for productive human-robot interaction.

The phenomenon can be explained by the notion that if an entity is sufficiently non-humanlike, then the humanlike characteristics will tend to stand out and be noticed easily, generating empathy. On the other hand, if the entity is "almost human", then the non-human characteristics will be the ones that stand out, leading to a feeling of "strangeness" in the human viewer.
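
Mori's curve is qualitative, so any function with the described shape (affinity rising with human likeness, dipping sharply near "almost human", then recovering) can be used to visualize it. The snippet below plots one such invented curve; the functional form and numbers are not Mori's data.

```python
# Purely illustrative uncanny-valley-shaped curve, not Mori's measurements.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 500)    # 0 = machine, 1 = fully human
# Affinity grows with likeness, with a narrow Gaussian dip near "almost human".
affinity = likeness - 1.6 * np.exp(-((likeness - 0.85) ** 2) / 0.004)

plt.plot(likeness, affinity)
plt.axhline(0.0, color="gray", linewidth=0.5)
plt.xlabel("human likeness")
plt.ylabel("emotional response (affinity)")
plt.title("Illustrative uncanny-valley curve")
plt.show()
```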

Another possibility is that infected individuals and corpses exhibit many visual anomalies similar to the ones we see with humanoid robots and so we react with the same alarm and revulsion. The reaction may in fact become worse with robots since there is no overt reason for it to occur; when we see a corpse we understand where our feelings come from. Behavioural anomalies too are indicative of illness, neurological conditions or mental dysfunction, and again evoke acutely negative emotions.

Some roboticists have heavily criticized the theory, arguing that Mori had no basis for the right-hand part of his chart, since human-like robots are only now becoming technically possible (and still only partially). David Hanson, a roboticist who developed a realistic robotic copy of his girlfriend's head, said the idea of the Uncanny Valley was "really pseudoscientific, but people treat it like it is science." Sara Kiesler, a human-robot interaction researcher at Carnegie Mellon University, questioned the Uncanny Valley's scientific status, noting that "we have evidence that it's true, and evidence that it's not."

Oct 25, 2005

Aibo to fight obesity

Via Medical Informatics Insider

According to New Scientist magazine, researchers at the Massachusetts Institute of Technology plan to reprogram Sony's robot Aibo into a coach for dieters. The canine robot would be able to monitor daily food intake and exercise levels and encourage overweight individuals to stick to their diets. The dog would be connected by radio to the bathroom scales, a pedometer, and a personal organizer in which the owner would note their daily food intake.
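
The article only outlines the idea; a minimal sketch of the coaching logic it describes might combine the radio-linked inputs (scales, pedometer, food diary) into daily feedback. All numbers, names, and messages below are assumptions for illustration.

```python
# Hypothetical diet-coach logic combining the inputs named in the article.
from dataclasses import dataclass

@dataclass
class DailyLog:
    weight_kg: float    # from the bathroom scales
    steps: int          # from the pedometer
    intake_kcal: int    # from the personal organizer's food diary

def coach_message(log: DailyLog, intake_goal_kcal: int = 2000,
                  step_goal: int = 8000) -> str:
    over = log.intake_kcal - intake_goal_kcal
    short = step_goal - log.steps
    if over > 0 and short > 0:
        return f"{over} kcal over goal and {short} steps short; how about a walk?"
    if over > 0:
        return f"Good activity, but {over} kcal over goal today."
    if short > 0:
        return f"Intake on target; {short} steps to go."
    return "On target for both diet and exercise!"

print(coach_message(DailyLog(weight_kg=82.0, steps=5200, intake_kcal=2300)))
```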

More to explore

MIT Media Lab
AIBO, Sony
UbiComp

 

 

Jul 18, 2005

Philips' iCat for studying human-robot interaction topics

Via Medgadget

Philips' new invention, the iCat, promises to advance the field of human-robot interaction. The robot is about forty cm tall and is equipped with a number of servos that control different components of the face, including the mouth and head position. By moving these parts of the face, the robot can produce many different facial expressions. Researchers can observe users' reactions to these expressions and investigate, for example, the perceived personality of the iCat during a game or task setting.
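
Philips' control interface is not described in this post; the sketch below only illustrates the general approach of storing each facial expression as a set of servo targets. Joint names, ranges, and poses are invented for illustration, not the iCat's actual API.

```python
# Illustrative expression table for a servo-driven face; names and values are
# hypothetical, not the iCat's actual control interface.
EXPRESSIONS = {
    # servo positions normalized to -1.0 .. 1.0
    "happy":     {"left_brow": 0.3,  "right_brow": 0.3,  "mouth": 0.8,  "head_tilt": 0.1},
    "surprised": {"left_brow": 1.0,  "right_brow": 1.0,  "mouth": 0.5,  "head_tilt": 0.0},
    "sad":       {"left_brow": -0.6, "right_brow": -0.6, "mouth": -0.7, "head_tilt": -0.2},
}

def set_expression(name: str) -> None:
    for servo, position in EXPRESSIONS[name].items():
        # Placeholder for a real servo command; print instead of moving hardware.
        print(f"servo {servo} -> {position:+.2f}")

set_expression("happy")
```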


Philips researchers predict that the iCat could have applications in psychology research (social cognition) and in medicine, for example to help autistic children or stroke survivors.

More to explore

- the Philips Research Technologies web page



Apr 13, 2005

Rescue Robots

Disaster rescue is one of the most serious social issues, involving very large numbers of heterogeneous agents in a hostile environment.

Rescue Robotics is a newly emerging field dealing with systems that support first-response units in disaster missions. Mobile robots in particular can be highly valuable tools in urban rescue missions after catastrophes such as earthquakes, bomb or gas explosions, or daily incidents like fires and road accidents involving hazardous materials. The robots can be used to inspect collapsed structures, assess the situation, and search for and locate victims.

There are many engineering and scientific challenges in this domain. Rescue robots not only have to be designed for the harsh environmental conditions of disasters, but they also need advanced capabilities like intelligent behaviors to free them from constant supervision by operators.

The main goal of the RoboCupRescue project is to promote research and development in this socially significant domain at various levels: multi-agent teamwork coordination, physical robotic agents for search and rescue, information infrastructures, personal digital assistants, a standard simulator and decision-support systems, and evaluation benchmarks for rescue strategies and robotic systems, all to be integrated into comprehensive systems in the future.

More to explore

The website of RescueRobots Freiburg, one of the leading labs in the rescue-robotics field

Jan 01, 2005

Robocup Rescue

About Simulation League

The RoboCupRescue Simulation League competition is an international venue for evaluating research from the RoboCupRescue Simulation Project.

The main purpose of the RoboCupRescue Simulation Project is to provide emergency decision support by integrating disaster information, prediction, planning, and human interfaces. A generic urban disaster simulation environment was constructed on a computer network. Heterogeneous intelligent agents such as fire fighters, commanders, victims, and volunteers conduct search and rescue activities in this virtual disaster world. Real-world interfaces, such as helicopter imagery, synchronize the virtual world with reality through sensed data. Mission-critical human interfaces, such as PDAs, help disaster managers, disaster-relief brigades, residents, and volunteers decide their actions so as to minimize the disaster damage.

This problem introduces researchers advanced and interdisciplinary research themes. As AI/Robotics research, for example, behavior strategy (e.g. multi-agent planning, realtime/anytime planning, heterogeneity of agents, robust planning, mixed-initiative planning) is a challenging problem. For disaster researchers, RoboCupRescue works as a standard basis in order to develop practical comprehensive simulators adding necessary disaster modules.