
Dec 18, 2006

Researchers demonstrate EEG control of humanoid robot

Via ScienceDaily

University of Washington researchers have developed a brain-computer interface that allows humans to control the actions of a humanoid robot through commands generated by analysis of EEG signals.


Link 
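The article does not describe the signal-processing pipeline in detail. As a purely illustrative sketch, commands in such systems are often derived by extracting band-power features from short EEG epochs and feeding them to a classifier; the channel count, frequency bands, and command set below are assumptions, not the UW team's actual design.

```python
# Illustrative sketch only: band-power features from EEG epochs fed to a
# linear classifier that emits discrete robot commands. Not the actual
# University of Washington pipeline; all parameters are assumptions.
import numpy as np

FS = 256            # assumed sampling rate (Hz)
COMMANDS = ["walk_forward", "pick_up_object", "idle"]  # hypothetical command set

def band_power(epoch, fs, low, high):
    """Mean spectral power of one EEG channel in a frequency band."""
    spectrum = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].mean()

def features(epoch_multichannel):
    """Stack mu (8-12 Hz) and beta (13-30 Hz) power for every channel."""
    feats = []
    for channel in epoch_multichannel:          # shape: (n_channels, n_samples)
        feats.append(band_power(channel, FS, 8, 12))
        feats.append(band_power(channel, FS, 13, 30))
    return np.array(feats)

def decode_command(epoch, weights, bias):
    """Pick the command whose linear score is highest."""
    scores = weights @ features(epoch) + bias   # weights: (n_commands, n_features)
    return COMMANDS[int(np.argmax(scores))]

# Example with random data standing in for a trained classifier and real EEG.
rng = np.random.default_rng(0)
epoch = rng.standard_normal((8, FS))            # 8 channels, 1 second of "EEG"
weights = rng.standard_normal((len(COMMANDS), 16))
print(decode_command(epoch, weights, bias=np.zeros(len(COMMANDS))))
```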

Nov 28, 2006

A Smarter Computer to Pick Stocks

Via KurzweilAI.net

Wall Street is adopting nonlinear decision-making processes akin to how a brain operates, including neural networks, genetic algorithms, and other advanced computer-science techniques.

Link
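For readers unfamiliar with the techniques mentioned, here is a toy sketch of one of them: a genetic algorithm that evolves the weights of a simple stock-scoring rule against synthetic historical data. It illustrates the idea only and is not any firm's actual model.

```python
# Toy sketch of one technique the article mentions: a genetic algorithm that
# evolves the weights of a simple stock-scoring rule. Data and fitness are
# synthetic stand-ins, not a real trading model.
import random

random.seed(1)
N_FACTORS = 4        # e.g. momentum, value, volatility, volume (assumed factors)

# Synthetic "historical" data: per-stock factor values and subsequent return.
stocks = [([random.gauss(0, 1) for _ in range(N_FACTORS)], random.gauss(0, 0.05))
          for _ in range(200)]

def fitness(weights):
    """Average return of the 20 stocks ranked highest by the weighted score."""
    scored = sorted(stocks, key=lambda s: sum(w * f for w, f in zip(weights, s[0])),
                    reverse=True)
    return sum(ret for _, ret in scored[:20]) / 20

def evolve(pop_size=30, generations=40, mutation=0.1):
    """Selection, crossover (averaging), and Gaussian mutation over weight vectors."""
    population = [[random.uniform(-1, 1) for _ in range(N_FACTORS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append([(x + y) / 2 + random.gauss(0, mutation)
                             for x, y in zip(a, b)])
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best weights:", [round(w, 2) for w in best], "fitness:", round(fitness(best), 4))
```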

Nov 22, 2006

Black Starfish

Via the Neurophilosopher

Josh Bongard, Victor Zykov and Hod Lipson of Cornell University’s Computational Synthesis Laboratory have designed and built the Black Starfish, a four-legged robot which “automatically synthesizes a predictive model of its own topology (where and how its body parts are connected) through limited yet self-directed interaction with its environment, and then uses this model to synthesize successful new locomotive behavior before and after damage.”
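The paper describes a continuous cycle of self-modeling: act, observe, fit candidate models of the body, pick the next action that best distinguishes the competing models, and finally use the best model to plan locomotion. The sketch below captures that loop at a very high level with stand-in functions; it is not the authors' actual implementation.

```python
# High-level sketch of the estimate-explore-exploit loop described by
# Bongard, Zykov & Lipson, written as illustrative Python. The model
# representation, "robot", and optimiser here are invented stand-ins.
import random

random.seed(0)

def run_on_robot(action):
    """Stand-in for executing a motor command and reading back sensors."""
    return [random.gauss(0, 1) for _ in range(4)]

def predict(model, action):
    """Stand-in for simulating the action on a candidate self-model."""
    return [m * a for m, a in zip(model, action)]

def error(model, experiments):
    """How badly a candidate self-model explains the sensor data so far."""
    return sum(sum((p - o) ** 2 for p, o in zip(predict(model, act), obs))
               for act, obs in experiments)

# 1. Start with random candidate self-models and an empty experiment log.
models = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(10)]
experiments = []

for _ in range(8):
    # 2. Pick the action the current models disagree about most (exploration).
    candidate_actions = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]

    def disagreement(action):
        preds = [predict(m, action) for m in models]
        mean = [sum(col) / len(col) for col in zip(*preds)]
        return sum(sum((p - mu) ** 2 for p, mu in zip(pred, mean)) for pred in preds)

    action = max(candidate_actions, key=disagreement)

    # 3. Run it on the physical robot and log the result.
    experiments.append((action, run_on_robot(action)))

    # 4. Re-fit the candidate models to all experiments (here: keep best, mutate).
    models.sort(key=lambda m: error(m, experiments))
    models = models[:5] + [[x + random.gauss(0, 0.1) for x in m] for m in models[:5]]

# 5. Use the best self-model to synthesise a gait (optimisation step omitted here).
best_model = min(models, key=lambda m: error(m, experiments))
print("best candidate self-model:", [round(x, 2) for x in best_model])
```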

Nov 11, 2006

Computer- and robot-aided head surgery

Computer- and robot-aided head surgery.

Acta Neurochir Suppl. 2006;98:51-61

Authors: Wörn H

In this paper, new methods and devices for computer- and robot-based head surgery are presented. A computer-based planning system for CMF surgery allows the surgeon to plan complex trajectories on the head of the patient for operations in which bone segments are cut out and shifted. Different registration methods have been developed and tested. A surgical robot system for bone cutting on the head has been developed and evaluated on patients in the operating theatre. In the future, robot-guided laser cutting of bone is expected to become a powerful new method for robot-based surgery. A 3D augmented-reality system will assist the surgeon by overlaying virtual anatomical structures onto the surgical site.
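The abstract mentions registration between the preoperative plan and the patient. One common point-based approach (not necessarily the method evaluated in the paper) is a rigid rotation-plus-translation fit between corresponding fiducial markers, sketched here with the Kabsch/SVD method.

```python
# Generic illustration of point-based rigid registration (Kabsch/SVD), not the
# specific registration method developed in the paper.
import numpy as np

def register_rigid(plan_points, patient_points):
    """Least-squares rotation R and translation t mapping plan -> patient frame."""
    p_mean, q_mean = plan_points.mean(axis=0), patient_points.mean(axis=0)
    p, q = plan_points - p_mean, patient_points - q_mean
    u, _, vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    r = vt.T @ np.diag([1, 1, d]) @ u.T
    t = q_mean - r @ p_mean
    return r, t

# Synthetic check: recover a known rotation/translation from fiducial positions.
rng = np.random.default_rng(0)
plan = rng.random((4, 3)) * 100                     # fiducials in plan coordinates (mm)
angle = np.radians(20)
true_r = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
patient = plan @ true_r.T + np.array([5.0, -3.0, 10.0])
r, t = register_rigid(plan, patient)
print("fiducial registration error (mm):",
      round(float(np.linalg.norm(plan @ r.T + t - patient, axis=1).mean()), 6))
```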

Nov 06, 2006

Iris recognition technology for mobile phones

Re-blogged from Pink Tentacle

Iris recognition technology for cellphones --

Oki Electric announced the development of iris recognition technology for camera-equipped mobile phones. Unlike Oki’s previous iris recognition technology that relies on infrared cameras for the iris scan, the new technology uses ordinary cellphone cameras. With plans to make the technology commercially available in March 2007, Oki hopes to boost the security of cellphone payment systems. According to Oki, any camera-equipped cellphone or PDA can perform iris recognition once the special software is installed. Identification accuracy is said to be high, with only 1 in 100,000 scans resulting in error, and the system can tell the difference between flesh-and-blood eyes and photographs.
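Oki has not published the details of its algorithm. For a rough sense of how iris matching typically works, here is a minimal sketch in the spirit of Daugman-style iris codes: a normalised iris image is reduced to a binary code, and two codes are compared by Hamming distance. The thresholding step and decision threshold below are simplifying assumptions.

```python
# Illustration of generic iris-code matching, not Oki's proprietary algorithm.
import numpy as np

def iris_code(unwrapped_iris):
    """Binarise a normalised (unwrapped) iris image into a bit code.
    A real system would use Gabor-filter phase; mean-thresholding stands in here."""
    return (unwrapped_iris > unwrapped_iris.mean()).astype(np.uint8).ravel()

def hamming_distance(code_a, code_b):
    """Fraction of bits that differ between two iris codes."""
    return np.count_nonzero(code_a != code_b) / code_a.size

# Enrolment and verification with synthetic data.
rng = np.random.default_rng(42)
enrolled = rng.random((16, 128))                              # stored template image
probe_same = enrolled + rng.normal(0, 0.02, enrolled.shape)   # same eye, new photo
probe_other = rng.random((16, 128))                           # different eye

THRESHOLD = 0.32   # assumed decision threshold, tuned for the desired error rate
for name, probe in [("same eye", probe_same), ("other eye", probe_other)]:
    d = hamming_distance(iris_code(enrolled), iris_code(probe))
    print(f"{name}: distance={d:.3f} -> {'accept' if d < THRESHOLD else 'reject'}")
```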

Nov 01, 2006

HAL

Via Engadget

HAL (short for Hybrid Assistive Limb) is a robotic suit designed "to expand and improve physical capabilities of human being".

The system, a brainchild of Yoshiyuki Sankai, an engineering professor at the University of Tsukuba, is getting ready for mass production, Engadget reports. The robotic suit could be used in applications such as "walking assistance and rehabilitation, nursing, factory work and disaster relief."

HAL was originally developed to help elderly or disabled people walk on their own legs, and HAL-3 achieved that primary goal in 2000.

In 2005, the latest model, HAL-5, was given upper-body limbs as well as lighter and more compact power units, a longer-life battery, a much smaller control unit, and redesigned outer shells.

HAL is a robot suit that can expand and improve the physical capabilities of a human being. Wearing HAL-5, a user can hold a load of up to 40 kg with the arms and increase the maximum leg-press weight from 100 kg to 180 kg.

 

Read more at Engadget

Oct 30, 2006

Vision-body link tested in robot experiments

Re-blogged from KurzweilAI.net

"Embodied cognition" experiments involving real and simulated robots suggest that the relationship between physical movement and sensory input could be crucial to developing more intelligent machines...

Read the full article

Oct 27, 2006

PaPeRo Robot Childcare In Japan

Via Technovelgy 

NEC and NTT have jointly produced PaPeRo (short for Partner-type Personal Robot), the latest in a series of domestic robots. PaPeRo uses a camera in each eye to navigate and has image-recognition capabilities to track and identify individual children. It is also equipped with a mobile-phone link that allows parents to control it remotely, as well as to talk to children directly or send them text messages.

 


Oct 11, 2006

Robotic Whiskers Can Sense Three-Dimensional Environment

Re-blogged from Robots.net

[Image: rat whiskers © 2006 Northwestern University]

Two Northwestern University engineers have developed an array of robotic whiskers, each sensing in two dimensions, that mimics the capabilities of mammalian whiskers. The bending moment, or torque, at each whisker's base is used to calculate the three-dimensional features of solid objects.

Read the full story here
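A much-simplified version of the underlying idea: if a whisker touches an object with a point force roughly perpendicular to its shaft at radial distance d, the bending moment at the base is M = F·d, so measuring moment and force gives d; sweeping an array of whiskers then yields a cloud of 3-D contact points. The sketch below illustrates that toy model, not the authors' actual algorithm.

```python
# Toy illustration of whisker-based sensing: a point contact at radial distance d
# with force F perpendicular to the shaft produces base moment M = F * d, so
# d = M / F. Combining contacts from a swept whisker array yields a 3-D point cloud.
import math

def contact_distance(base_moment, base_force):
    """Radial distance along the whisker to the contact point (rigid-shaft model)."""
    return base_moment / base_force

def contact_point(azimuth_deg, elevation_deg, distance):
    """Convert a whisker's orientation plus contact distance into 3-D coordinates."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (distance * math.cos(el) * math.cos(az),
            distance * math.cos(el) * math.sin(az),
            distance * math.sin(el))

# One sweep of a small whisker array (synthetic readings: moment in N*mm, force in N).
readings = [
    {"azimuth": 0,  "elevation": 10, "moment": 4.5, "force": 0.30},
    {"azimuth": 15, "elevation": 10, "moment": 6.0, "force": 0.25},
    {"azimuth": 30, "elevation": 10, "moment": 5.2, "force": 0.40},
]
cloud = [contact_point(r["azimuth"], r["elevation"],
                       contact_distance(r["moment"], r["force"])) for r in readings]
for point in cloud:
    print(tuple(round(c, 1) for c in point))
```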


Oct 06, 2006

fMRI-compatible rehabilitation hand device

fMRI-compatible rehabilitation hand device

Journal of Neuroengineering and Rehabilitation (October 2006)  

Authors: Azadeh Khanicheh, Andrew Muto, Christina Triantafyllou, Brian Weinberg, Loukas Astrakas, Aria Tzika and Constantinos Mavroidis

Background: Functional magnetic resonance imaging (fMRI) has been widely used in studying human brain functions and neurorehabilitation. In order to develop complex and well-controlled fMRI paradigms, interfaces that can precisely control and measure output force and kinematics of the movements in human subjects are needed. Optimized state-of-the-art fMRI methods, combined with magnetic resonance (MR) compatible robotic devices for rehabilitation, can assist therapists to quantify, monitor, and improve physical rehabilitation. To achieve this goal, robotic or mechatronic devices with actuators and sensors need to be introduced into an MR environment. Common standard mechanical parts cannot be used in an MR environment, and MR compatibility has been a tough hurdle for device developers.

Methods: This paper presents the design, fabrication and preliminary testing of a novel, one-degree-of-freedom, MR-compatible, computer-controlled, variable-resistance hand device that may be used in brain MR imaging during hand-grip rehabilitation. We named the device MR_CHIROD (Magnetic Resonance Compatible Smart Hand Interfaced Rehabilitation Device). A novel feature of the device is the use of electro-rheological fluids (ERFs) to achieve tunable and controllable resistive force generation. ERFs are fluids that experience dramatic changes in rheological properties, such as viscosity or yield stress, in the presence of an electric field. The device consists of four major subsystems: a) an ERF-based resistive element; b) a gearbox; c) two handles; and d) two sensors, one optical encoder and one force sensor, to measure the patient-induced motion and force. The smart hand device is designed to resist up to 50% of the maximum gripping force of a human hand and to be controlled in real time.

Results: Laboratory tests of the device indicate that it was able to meet its design objective of resisting up to approximately 50% of the maximum handgrip force. Detailed compatibility tests demonstrated that the MR environment has no effect on the ERF properties or the performance of the sensors, and that introducing the MR_CHIROD into the MR scanner causes no significant degradation of MR images.

Conclusions: The MR-compatible hand device was built to aid in the study of brain function during generation of controllable and tunable force during handgrip exercising. The device was shown to be MR compatible. To the best of our knowledge, this is the first system that utilizes ERFs in an MR environment.
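The abstract does not give the control law, but a variable-resistance ERF element is naturally run in closed loop: read the force sensor, compare against the prescribed resistance level, and adjust the field applied to the fluid. The sketch below shows that idea with an invented plant model and gain; it is not the MR_CHIROD controller.

```python
# Illustrative closed-loop sketch for an ERF-based resistive element. The plant
# model and gain are invented stand-ins, not the device's real dynamics.
def simulate_erf_grip(target_force_n, cycles=100, dt=0.01):
    voltage = 0.0                # field-generating voltage (arbitrary units)
    measured_force = 0.0
    k_plant = 0.8                # assumed newtons of resistance per volt (toy plant)
    gain = 20.0                  # assumed integral gain
    for _ in range(cycles):
        measured_force = k_plant * voltage               # toy plant: force tracks field
        error = target_force_n - measured_force          # compare to prescribed level
        voltage = max(0.0, voltage + gain * error * dt)  # raise or lower the field
    return measured_force

# Resist at roughly 50% of an assumed 300 N maximum grip, echoing the design target.
print(f"steady-state resistive force: {simulate_erf_grip(150.0):.1f} N")
```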

Oct 02, 2006

Japan to invest US$17.4 million in robotics research

Via Pink Tentacle

[Image: Asimo]

Japan’s Ministry of Economy, Trade and Industry (METI) will invest over 2 billion yen (US$17.4 million) to support the development of intelligent robots that rely on their own decision-making skills in the workplace.

The objective of METI’s robot budget is to support the development of key artificial intelligence technology for robots over the next 5 years, with the goal of introducing intelligent robots to the market by 2015.


Sep 18, 2006

Learning to perform a new movement with robotic assistance

Learning to perform a new movement with robotic assistance: comparison of haptic guidance and visual demonstration

By J Liu, SC Cramer and DJ Reinkensmeyer

Background: Mechanical guidance with a robotic device is a candidate technique for teaching people desired movement patterns during motor rehabilitation, surgery, and sports training, but it is unclear how effective this approach is as compared to visual demonstration alone. Further, little is known about motor learning and retention involved with either robot-mediated mechanical guidance or visual demonstration alone.

Methods: Healthy subjects (n = 20) attempted to reproduce a novel three-dimensional path after practicing it with mechanical guidance from a robot. Subjects viewed their arm as the robot guided it, so this "haptic guidance" training condition provided both somatosensory and visual input. Learning was compared to reproducing the movement following only visual observation of the robot moving along the path, with the hand in the lap (the "visual demonstration" training condition). Retention was assessed periodically by instructing the subjects to reproduce the path without robotic demonstration.

Results: Subjects improved in their ability to reproduce the path following practice in the haptic guidance or visual demonstration training conditions, as evidenced by a 30–40% decrease in spatial error across 126 movement attempts in each condition. Performance gains were not significantly different between the two techniques, but there was a nearly significant trend for the visual demonstration condition to be better than the haptic guidance condition (p = 0.09). The 95% confidence interval of the mean difference between the techniques was at most 25% of the absolute error in the last cycle. When asked to reproduce the path repeatedly following either training condition, the subjects' performance degraded significantly over the course of a few trials. The tracing errors were not random, but instead were consistent with a systematic evolution toward another path, as if being drawn to an "attractor path".

Conclusion: These results indicate that both forms of robotic demonstration can improve short-term performance of a novel desired path. The availability of both haptic and visual input during the haptic guidance condition did not significantly improve performance compared to visual input alone in the visual demonstration condition. Further, the motor system is inclined to repeat its previous mistakes following just a few movements without robotic demonstration, but these systematic errors can be reduced with periodic training.
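The study's central measure is the spatial error between a reproduced movement and the target path. A minimal version of such a metric, the mean distance from each reproduced point to the nearest point on the target path, is sketched below; the exact error definition used by the authors may differ.

```python
# Minimal spatial-error metric between a reproduced trajectory and a target path.
# This is an illustrative definition, not necessarily the one used in the study.
import math

def point_to_path_error(reproduced, target):
    """Mean distance from each reproduced 3-D point to its nearest target point."""
    return sum(min(math.dist(p, t) for t in target) for p in reproduced) / len(reproduced)

# Synthetic example: a target path and a slightly offset reproduction of it.
target_path = [(x / 10, math.sin(x / 10), 0.0) for x in range(63)]
reproduction = [(x + 0.05, y - 0.03, z + 0.02) for x, y, z in target_path]
print(f"mean spatial error: {point_to_path_error(reproduction, target_path):.3f} m")
```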

Robots for ageing society

Via Pink Tentacle
 

[Image: maid robot]

The CIRT consortium, made up of the University of Tokyo and a group of seven companies (Toyota, Olympus, Sega, Toppan Printing, Fujitsu, Matsushita, and Mitsubishi), has started a project to develop robotic assistants for Japan’s ageing population.

The robots envisioned by the project should support the elderly with housework and serve as personal transportation capable of replacing the automobile.

The Ministry of Education, Culture, Sports, Science and Technology (MEXT) will be the major sponsor of the research, whose total cost is expected to be about 1 billion yen (US$9 million) per year.

Aug 08, 2006

The Talented Mr. Ripley

Via Thinking Meat 

Ripley is a robot designed by Deb Roy of the Cognitive Machines Group at MIT's Media Lab.

The robot was designed to learn about its environment by moving through it and touching the objects in it. The underlying theoretical framework is the "Grounded Situation Model". In this approach, developed by Deb Roy and his colleague Nikolaos Mavridis, "the robot updates beliefs about its physical environment and body, based on a mixture of linguistic, visual and proprioceptive evidence. It can answer basic questions about the present or past and also perform actions through verbal interaction".

This story on NPR reports about the project, and includes an interview with Deb Roy.

From the MIT Media Lab website:

We have constructed a 7 degree-of-freedom robot, Ripley, to investigate connections between natural language semantics, perception, and action. Our goal is to enable Ripley to perform collaborative manipulation tasks mediated by natural spoken dialogue. Key issues include representation and learning of spatial language, object and temporal reference, and physical actions / verbs. Furthermore, a "Grounded Situation Model" representation has been designed for Ripley, as well as associated processes, and a cognitive architecture was implemented through numerous intercommunicating modules. 

[Image: Ripley]


Links to Ripley video clips (from the MIT Media Lab website):

 

Ripley imagines and remembers [high resolution (12M) | low resolution (440K)]

Ripley tracks faces [high resolution (6M) | low resolution (280K)]

Ripley imagines objects [high resolution (13.5M) | low resolution (826k)]

Ripley grasping objects [high resolution (201M) | low resolution (23M)]

Ripley changing perspectives [.mov, 17M]

Training HMM model to pick up [.mov, 844K]

HMM model generating pick up [.mov, 791K]
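As a toy rendering of what a "Grounded Situation Model" might store, the sketch below keeps beliefs about objects that are updated from visual and proprioceptive input and answers a simple grounded question. The real GSM of Roy and Mavridis is far richer; the class names and fields here are assumptions for illustration.

```python
# Toy sketch of a grounded situation model: beliefs about objects, updated from
# different evidence streams. Names and fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ObjectBelief:
    position: tuple                 # (x, y, z) estimate from vision
    color: str = "unknown"
    confidence: float = 0.5

@dataclass
class SituationModel:
    objects: dict = field(default_factory=dict)   # object id -> ObjectBelief
    arm_pose: tuple = (0.0, 0.0, 0.0)             # from proprioception

    def update_from_vision(self, obj_id, position, color, confidence):
        self.objects[obj_id] = ObjectBelief(position, color, confidence)

    def update_from_proprioception(self, pose):
        self.arm_pose = pose

    def answer(self, question):
        """Very small language grounding: answer 'where is the <color> one?'."""
        for obj in self.objects.values():
            if obj.color in question:
                return f"The {obj.color} object is at {obj.position}."
        return "I don't know."

gsm = SituationModel()
gsm.update_from_vision("obj1", (0.3, 0.1, 0.0), "red", 0.9)
gsm.update_from_proprioception((0.2, 0.0, 0.15))
print(gsm.answer("where is the red one?"))
```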

 

Aug 05, 2006

e-CIRCUS

 
[Image: e-CIRCUS]
 

The e-CIRCUS project (Education through Characters with emotional Intelligence and Role-playing Capabilities that Understand Social interaction) aims to develop synthetic characters that interact with pupils in a virtual school, to support social and emotional learning in the real classroom. This will be achieved through virtual role-play with synthetic characters that establish credible and empathic relations with the learners.

The project consortium, which is funded under the EU 6th Framework Program, includes researchers from computer science, education and psychology from the UK, Portugal, Italy and Germany. Teachers and pupils will be included in the development of the software as well as a framework for using it in the classroom context. The e-Circus software will be tested in schools in the UK and Germany in 2007, evaluating not only the acceptance of the application among teachers and pupils but also whether the approach, as an innovative part of the curriculum, actually helps to reduce bullying in schools.

Aug 03, 2006

Virtual bots teach each other

From New Scientist Tech 

[Image: learning bots]

 

"Robots that teach one another new words through interaction with their surroundings have been demonstrated by UK researchers. The robots, created by Angelo Cangelosi and colleagues at Plymouth University, UK, currently exist only as computer simulations. But the researchers say their novel method of communication could someday help real-life robots cooperate when faced with a new challenge. They could also help linguists understand how human languages develop, they say..."

Continue to read the full article on New Scientist 

Watch the video 
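The details of Cangelosi's simulations are in the article, but the core dynamic of agents converging on shared words can be illustrated with a minimal "naming game" between software agents, sketched below. This is a generic illustration, not the Plymouth model.

```python
# Minimal naming-game sketch: agents invent and exchange words for objects and
# tend to converge on a shared vocabulary. Not the Plymouth simulation itself.
import random

random.seed(3)
OBJECTS = ["ball", "cube", "stick"]

def invent_word():
    return "".join(random.choice("aeioubdgkt") for _ in range(4))

class Agent:
    def __init__(self):
        self.lexicon = {obj: [] for obj in OBJECTS}   # object -> candidate words

    def name(self, obj):
        if not self.lexicon[obj]:
            self.lexicon[obj].append(invent_word())
        return self.lexicon[obj][0]

    def hear(self, obj, word):
        # If the word is already known, align on it; otherwise remember it.
        if word in self.lexicon[obj]:
            self.lexicon[obj] = [word]                # drop competing words
        else:
            self.lexicon[obj].append(word)

agents = [Agent() for _ in range(5)]
for _ in range(500):
    speaker, hearer = random.sample(agents, 2)
    obj = random.choice(OBJECTS)
    hearer.hear(obj, speaker.name(obj))

# After many interactions the population tends to share one word per object.
for obj in OBJECTS:
    print(obj, sorted({agent.name(obj) for agent in agents}))
```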

Sixth International Conference on Epigenetic Robotics

Via Human Technology

Sixth International Conference on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems

Dates: 20-22 September 2006
Location: Hôpital de la Salpêtrière, Paris, France

 

 

From the conference website:

In the past 5 years, the Epigenetic Robotics annual workshop has established itself as a unique place where original research combining developmental sciences, neuroscience, biology, and cognitive robotics and artificial intelligence is being presented.

Epigenetic systems, either natural or artificial, share a prolonged developmental process through which varied and complex cognitive and perceptual structures emerge as a result of the interaction of an embodied system with a physical and social environment.

Epigenetic robotics has the two-fold goal of understanding biological systems through the interdisciplinary integration of social and engineering sciences and, simultaneously, of enabling robots and other artificial systems to develop skills for whatever environment they find themselves in, instead of being programmed to solve particular goals for a predefined environment.

Psychological theory and empirical evidence are being used to inform epigenetic robotic models, and these models should be used as theoretical tools to make experimental predictions in developmental psychology.

Aug 02, 2006

The Huggable

Via Siggraph2006 Emerging Technology website

 

[Image: the Huggable teddy bear]

 

The Huggable is a robotic pet developed by MIT researchers for therapy applications in children's hospitals and nursing homes, where pets are not always available. The robotic teddy bear has full-body sensate skin and smooth, quiet voice-coil actuators that allow it to relate to people through touch. Further features include "temperature, electric field, and force sensors which it uses to sense the interactions that people have with it. This information is then processed for its affective content, such as, for example, whether the Huggable is being petted, tickled, or patted; the bear then responds appropriately".

The Huggable has been unveiled at the Siggraph2006 conference in Boston. From the conference website:

Enhanced Life
Over the past few years, the Robotic Life Group at the MIT Media Lab has been developing "sensitive skin" and novel actuator technologies in addition to our artificial-intelligence research. The Huggable combines these technologies in a portable robotic platform that is specifically designed to leave the lab and move to healthcare applications.

Goals
The ultimate goal of this project is to evaluate the Huggable's usefulness as a therapy for those who have limited or no access to companion-animal therapy. In collaboration with nurses, doctors, and staff, the technology will soon be applied in pilot studies at hospitals and nursing homes. By combining Huggable's data-collection capabilities with its sensing and behavior, it may be possible to determine early onset of a person's behavior change or detect the onset of depression. The Huggable may also improve day-to-day life for those who may spend many hours in a nursing home alone staring out a window, and, like companion-animal therapy, it could increase their interaction with other people in the facility.

Innovations
The core technical innovation is the "sensitive skin" technology, which consists of temperature, electric-field, and force sensors all over the surface of the robot. Unlike other robotic applications where the sense of touch is concerned with manipulation or obstacle avoidance, the sense of touch in the Huggable is used to determine the affective content of the tactile interaction. The Huggable's algorithms can distinguish petting, tickling, scratching, slapping, and poking, among other types of tactile interactions. By combining the sense of touch with other sensors, the Huggable detects where a person is in relation to itself and responds with relational touch behaviors such as nuzzling.
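The post does not describe the Huggable's classification algorithms. As an illustration only, affective-touch recognition is often framed as classifying summary statistics of a short window of skin-sensor readings; the heuristic rules, thresholds, and sensor names below are invented stand-ins for a learned classifier.

```python
# Illustrative sketch of affective-touch classification from skin-sensor windows.
# The rules and thresholds are invented; the real robot uses its own algorithms.
import numpy as np

LABELS = ["petting", "tickling", "poking", "slapping"]

def classify_touch(pressure_window, contact_area_window):
    """Heuristic stand-in for a learned touch classifier."""
    peak = pressure_window.max()          # sharpness of the contact
    variability = pressure_window.std()   # how rapidly the contact changes
    area = contact_area_window.mean()     # how much skin is involved
    if peak > 5.0 and area < 0.2:
        return "poking"            # sharp, localised pressure
    if peak > 5.0:
        return "slapping"          # sharp, broad pressure
    if variability > 0.8:
        return "tickling"          # light but rapidly varying contact
    return "petting"               # gentle, sustained, broad contact

rng = np.random.default_rng(7)
pressure = rng.normal(2.0, 0.3, size=100)   # synthetic 1-second sensor window
area = rng.normal(0.6, 0.05, size=100)
print(classify_touch(pressure, area))
```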

Most robotic companions use geared DC motors, which are noisy and easily damaged. The Huggable uses custom voice-coil actuators, which provide soft, quiet, and smooth motion. Most importantly, if the Huggable encounters a person when it tries to move, there is no risk of injury to the person.

Another core technical innovation is the Huggable's combination of 802.11g networking with a robotic companion. This allows the Huggable to be much more than a fun, interactive robot. It can send live video and data about the person's interactions to the nursing staff. In this mode, the Huggable functions as a team member working with the nursing home or hospital staff and the patient or resident to promote the Huggable owner's overall health.

Vision
As poorly staffed nursing homes and hospitals become larger and more overcrowded, new methods must be invented to improve the daily lives of patients or residents. The Huggable is one of these technological innovations. Its ability to gather information and share it with the nursing staff can help detect problems and report emergencies. The information can also be stored for later analysis by, for example, researchers who are studying pet therapy.

 

 

 

Aug 01, 2006

Computer's schizophrenia diagnosis inspired by the brain

Via New Scientist 

University of California, San Francisco researchers may have created a computerized diagnostic tool, based on magnetic resonance imaging, for determining whether someone has schizophrenia. From New Scientist:

"Raymond Deicken at the University of California at San Francisco and colleagues have been studying the amino acid N-acetylaspartate (NAA). They found that levels of NAA in the thalamus region of the brain are lower in people with schizophrenia than in those without.

To find out whether software could diagnose the condition from NAA levels, the team used a technique based on magnetic resonance imaging to measure NAA levels at a number of points within the thalamus of 18 people, half of whom had been diagnosed with schizophrenia. Antony Browne of the University of Surrey, UK, then analysed these measurements using an artificial neural network, a program that processes information by learning to recognise patterns in large amounts of data in a similar way to neurons in the brain.

Browne trained his network on the measurements of 17 of the volunteers to teach it which of the data sets corresponded to schizophrenia and which did not. He then asked the program to diagnose the status of the remaining volunteer, based on their NAA measurements. He ran the experiment 18 times, each time withholding a different person's measurements. The program diagnosed the patients with 100 per cent accuracy."
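The leave-one-out procedure described above is easy to sketch. The toy version below uses a minimal single-layer network (logistic regression) on synthetic NAA-like measurements for 18 subjects; the actual study used a larger artificial neural network and real MR spectroscopy data.

```python
# Sketch of leave-one-out evaluation with a minimal single-layer network on
# synthetic NAA-like data. Not the study's network or data.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_points = 18, 6
labels = np.array([0] * 9 + [1] * 9)                     # 1 = schizophrenia
# Synthetic NAA levels: patients drawn from a slightly lower mean, as reported.
naa = np.where(labels[:, None] == 1,
               rng.normal(8.0, 0.5, (n_subjects, n_points)),
               rng.normal(9.0, 0.5, (n_subjects, n_points)))

def train(x, y, epochs=2000, lr=0.1):
    """Fit a single sigmoid unit (logistic regression) by gradient descent."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(x @ w + b)))               # network output
        grad = p - y                                     # cross-entropy gradient
        w -= lr * x.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

correct = 0
for held_out in range(n_subjects):                       # leave-one-out loop
    train_idx = [i for i in range(n_subjects) if i != held_out]
    w, b = train(naa[train_idx], labels[train_idx])
    prob = 1 / (1 + np.exp(-(naa[held_out] @ w + b)))
    correct += int((prob > 0.5) == labels[held_out])
print(f"leave-one-out accuracy: {correct}/{n_subjects}")
```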

 
