

Jul 14, 2007

RunBot

Via Medgadget

 

Researchers from Germany and the United Kingdom have developed a bipedal walking robot capable of self-stabilizing through a highly developed learning process.

From the study abstract:

In this study we present a planar biped robot, which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk with a high speed (>3.0 leg length/s), self-adapting to minor disturbances, and reacting in a robust way to abruptly induced gait changes. At the same time, it can learn walking on different terrains, requiring only few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself, combined with synaptic learning may be a way forward to better understand and solve coordination problems in other complex motor tasks.
 

The paper: Manoonpong P, Geng T, Kulvicius T, Porr B, Wörgötter F (2007) Adaptive, Fast Walking in a Biped Robot under Neuronal Control and Learning. PLoS Comput Biol 3(7): e134.
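The online learning the abstract describes is, at its core, correlation-based synaptic plasticity: a feed-forward gain grows while a predictive sensor signal still co-occurs with a reflexive error signal, and stops changing once the learned action pre-empts the reflex. A minimal sketch of that idea (the update rule and numbers are illustrative, not RunBot's actual controller):

```python
# Sketch of correlation-based gait adaptation (hypothetical parameters;
# the actual RunBot controller uses nested neuronal control loops).

def adapt_gain(gain, predictive, reflex, rate=0.01):
    """Strengthen a feed-forward gain while the predictive sensor
    signal still co-occurs with a later reflexive error signal."""
    return gain + rate * predictive * reflex

gain = 1.0
# Simulated steps: the reflex error shrinks as the learned
# feed-forward action takes over, so learning converges.
for predictive, reflex in [(1.0, 0.8), (1.0, 0.5), (1.0, 0.2), (1.0, 0.0)]:
    gain = adapt_gain(gain, predictive, reflex)
# Final gain: 1.0 + 0.01 * (0.8 + 0.5 + 0.2 + 0.0) = 1.015
```

In RunBot the same principle operates inside nested neuronal loops, with sensory feedback from the walking pattern itself driving the weight changes.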

Jul 11, 2007

Myomo e100 NeuroRobotic System

Via Medgadget 

US company Myomo has announced that its e100 NeuroRobotic System, a technology designed to help rehabilitate patients by "engaging and reinforcing both neurological and motor pathways," has received FDA clearance to market.

How it Works

  • Patient's brain is the controller: When a patient attempts movement during therapy, their muscles contract and electrical muscle activity signals fire

  • Non-invasive sensing: An EMG sensor sits on the skin's surface to detect and continuously monitor a person's residual electrical muscle activity

  • Proprietary system software: Advanced signal processing software filters and processes the user's EMG signal, and then forwards the data to a robotic device

  • Proportional assistance: Portable, wearable robotics use the person's EMG signal to assist with desired movement; power assistance is customized to patient ability with Myomo's real-time adjustable control unit.
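The four bullets describe a classic EMG-proportional control pipeline: rectify and smooth the raw muscle signal into an activation envelope, then map it proportionally to motor assistance. A rough sketch, with every gain and threshold invented for illustration (these are not Myomo's parameters):

```python
# Illustrative EMG-proportional assistance pipeline; all numbers
# below are hypothetical, not Myomo's actual parameters.

def emg_envelope(samples, alpha=0.1):
    """Rectify the raw EMG and smooth it with a first-order
    low-pass filter to estimate muscle activation."""
    env, out = 0.0, []
    for s in samples:
        env = (1 - alpha) * env + alpha * abs(s)
        out.append(env)
    return out

def assist_torque(envelope, gain=2.0, threshold=0.05, max_torque=1.0):
    """Map the activation envelope proportionally to motor torque,
    clipped to a safe maximum; 'gain' is the per-patient adjustment."""
    return [min(max(gain * (e - threshold), 0.0), max_torque)
            for e in envelope]

raw = [0.0, 0.2, -0.3, 0.4, -0.5, 0.5]   # toy raw EMG samples
torque = assist_torque(emg_envelope(raw))
```

Raising `gain` or lowering `threshold` would correspond to tuning the assistance to a weaker patient via the adjustable control unit.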

Product page: Myomo e100 NeuroRobotic System ...

 

Using a Robot to Teach Human Social Skills

Via KurzweilAI

A humanoid robot designed to teach autistic children social skills has begun testing in British schools. Known as KASPAR (Kinesics and Synchronisation in Personal Assistant Robotics), the $4.33 million bot smiles and simulates surprise and sadness.

Read full article

 

Jul 06, 2007

Epigenetic Robotics 2007 (Extended Deadline)

Via NeuroBot

5-7 November 2007, Piscataway, NJ, USA

Seventh International Conference on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems

http://www.epigenetic-robotics.org


Email: epirob07@epigenetic-robotics.org

Location:
Rutgers, The State University of New Jersey,
Piscataway, NJ, USA

*Extended* Submission Deadline: 1 August 2007

 

Using AI to produce 3D paintings

Via Emerging Technology Trends

Jason Green, CEO of Florida-based Medical Development International (MDI), announced that experts at his company have applied artificial intelligence to produce original, three-dimensional paintings.

From the press release:

 

Most computers store individual instructions as code with each instruction given a unique number; the simplest computers perform a handful of different instructions, while the more complex computers have several hundred to choose from.

Green took programming capability one step further by producing thousands of images using a set color scheme and style from multiple images simultaneously created on multiple machines. The best of these images are then rendered at extremely high resolutions.

Green's "Virtual Van Gogh" takes High-Definition to an entirely different level. While the best High-Definition television (HDTV) currently available produces an image at 1920 x 1080 pixels, Green's program renders the painting at 7500 x 5000 pixels, taking more than 156 hours.
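The release's aside about numbered instructions can be made concrete with a toy stack machine, in which each opcode is just a unique integer (the instruction set here is invented for illustration):

```python
# Toy stack machine: each instruction is a unique number, as in the
# description above (opcodes invented for illustration).
PUSH, ADD, MUL, HALT = 0, 1, 2, 3

def run(program):
    """Execute a flat list of numeric opcodes and operands."""
    stack, pc = [], 0
    while True:
        op = program[pc]
        if op == PUSH:                 # push the next number
            stack.append(program[pc + 1]); pc += 2
        elif op == ADD:                # pop two, push their sum
            b, a = stack.pop(), stack.pop(); stack.append(a + b); pc += 1
        elif op == MUL:                # pop two, push their product
            b, a = stack.pop(), stack.pop(); stack.append(a * b); pc += 1
        elif op == HALT:               # stop and return top of stack
            return stack.pop()

# (2 + 3) * 4
result = run([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT])
```

A machine with "a handful" of instructions looks like the four opcodes above; the "several hundred" of a complex CPU just extend the same dispatch.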

 

 


Jun 07, 2007

Kansei robot

Via Pink Tentacle 


Researchers at Meiji University have developed a robot face called "Kansei" that is capable of a wide range of emotional expressions. The robot is part of a program aimed at creating conscious, self-aware robots.

video

Apr 29, 2007

First DARPA Limb Prototype

From the DARPA press release (via Medgadget)


 

An international team led by the Johns Hopkins University Applied Physics Laboratory (APL) in Laurel, Md., has developed a prototype of the first fully integrated prosthetic arm that can be controlled naturally, provides sensory feedback, and allows for eight degrees of freedom--a level of control far beyond the current state of the art for prosthetic limbs. Proto 1, developed for the Defense Advanced Research Projects Agency (DARPA) Revolutionizing Prosthetics Program, is a complete limb system that also includes a virtual environment used for patient training, clinical configuration, and to record limb movements and control signals during clinical investigations.

The DARPA prosthetics program is an ambitious effort to provide the most advanced medical and rehabilitative technologies for military personnel injured in the line of duty. Over the last year, the APL-led Revolutionizing Prosthetics 2009 (RP 2009) team has worked to develop a prosthetic arm that will restore significant function and sensory perception of the natural limb. Proto 1 and its virtual environment system were delivered to DARPA ahead of schedule, and Proto 1 was fitted for clinical evaluations conducted by team partners at the Rehabilitation Institute of Chicago (RIC) in January and February.

"This progress represents the first major step in a very challenging program that spans four years and involves more than 30 partners, including government agencies, universities, and private firms from the United States, Europe, and Canada," says APL's Stuart Harshbarger, who leads the program. "The development of this first prototype within the first year of this program is a remarkable accomplishment by a highly talented and motivated team and serves as validation that we will be able to implement DARPA's vision to provide, by 2009, a mechanical arm that closely mimics the properties and sensory perception of a biological limb."

 


 

APL, which was responsible for much of the design and fabrication of Proto 1, and other team members are already hard at work on a second prototype, expected to be unveiled in late summer. It will have more than 25 degrees of freedom, with strength and speed of movement approaching those of the human limb, combined with more than 80 individual sensory elements for feedback of touch, temperature, and limb position.

"There is still significant work to be done to determine how best to control this number of degrees of freedom, and ultimately how to incorporate sensory feedback based on these sensory inputs within the human nervous system," Harshbarger says. "The APL team is already driving a virtual model of Proto 2 with data recorded during the clinical evaluation of Proto 1, and the team is working to identify a robust set of grasps that can be controlled by a second patient later this year."

Another exciting development is the functional demonstration of Injectable MyoElectric Sensor (IMES) devices--very small injectable or surgically implantable devices used to measure muscle activity at the source, versus the surface electrodes on the skin that were used during testing of the first prototype.

 

Apr 27, 2007

Evolution of visually guided behavior in artificial agents

Evolution of visually guided behavior in artificial agents.

Network. 2007 Mar;18(1):11-34

Authors: Boots B, Nundy S, Purves D

Recent work on brightness, color, and form has suggested that human visual percepts represent the probable sources of retinal images rather than stimulus features as such. Here we investigate the plausibility of this empirical concept of vision by allowing autonomous agents to evolve in virtual environments based solely on the relative success of their behavior. The responses of evolved agents to visual stimuli indicate that fitness improves as the neural network control systems gradually incorporate the statistical relationship between projected images and behavior appropriate to the sources of the inherently ambiguous images. These results: (1) demonstrate the merits of a wholly empirical strategy of animal vision as a means of contending with the inverse optics problem; (2) argue that the information incorporated into biological visual processing circuitry is the relationship between images and their probable sources; and (3) suggest why human percepts do not map neatly onto physical reality.

Apr 22, 2007

Seventh International Conference on Epigenetic Robotics

Seventh International Conference on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems

Call for Papers: Epigenetic Robotics 2007

5-7 November 2007, Piscataway, NJ, USA

Location: Rutgers, The State University of New Jersey, Piscataway, NJ, USA

Apr 11, 2007

MOBI

From Networked Performance




MOBI (Mobile Operating Bi-directional Interface), by Graham Smith, is a human-sized telepresence robot that users remotely control to move through distant environments, see through its camera eye, talk through its speakers and hear via its microphone ear. Simultaneously, a life-sized image of the user is projected onto the robot's LCD face, creating a robotic avatar. MOBI allows people to "explore far away art shows, attend distant presentations and make public appearances from anywhere on earth, thus helping to reduce air travel and reduce global warming". MOBI is at DEAF 07.

Graham Smith is a leading expert in the fields of telepresence, virtual reality, videoconferencing and robotics. He has worked with leading Canadian high-tech companies for more than 14 years, including Nortel, Vivid Effects, VPL, BNR and IMAX. Graham initiated and headed the Virtual Reality Artist Access Program at the world-renowned McLuhan Program at the University of Toronto, and has lectured internationally. He holds numerous patents in the field of telepresence and panoramic imaging, and was recognized in Maclean's magazine as one of the top 100 Canadians to watch.

Mar 16, 2007

Socially assistive robotics for post-stroke rehabilitation

Journal of NeuroEngineering and Rehabilitation

Maja J Matarić

Background: Although there is a great deal of success in rehabilitative robotics applied to patient recovery post stroke, most of the research to date has dealt with providing physical assistance. However, new rehabilitation studies support the theory that not all therapy need be hands-on. We describe a new area, called socially assistive robotics, that focuses on non-contact patient/user assistance. We demonstrate the approach with an implemented and tested post-stroke recovery robot and discuss its potential for effectiveness. Results: We describe a pilot study involving an autonomous assistive mobile robot that aids stroke patient rehabilitation by providing monitoring, encouragement, and reminders. The robot navigates autonomously, monitors the patient's arm activity, and helps the patient remember to follow a rehabilitation program. We also show preliminary results from a follow-up study that focused on the role of robot physical embodiment in a rehabilitation context. Conclusion: We outline and discuss future experimental designs and factors toward the development of effective socially assistive post-stroke rehabilitation robots.

Robot/computer-assisted motivating systems for personalized, home-based, stroke rehabilitation

 
Michelle J Johnson, Xin Feng, Laura M Johnson and Jack M Winters
 
Background: There is a need to improve semi-autonomous stroke therapy in home environments often characterized by low supervision of clinical experts and low extrinsic motivation. Our distributed device approach to this problem consists of an integrated suite of low-cost robotic/computer-assistive technologies driven by a novel universal access software framework called UniTherapy. Our design strategy for personalizing the therapy, providing extrinsic motivation and outcome assessment is presented and evaluated. Methods: Three studies were conducted to evaluate the potential of the suite. A conventional force-reflecting joystick, a modified joystick therapy platform (TheraJoy), and a steering wheel platform (TheraDrive) were tested separately with the UniTherapy software. Stroke subjects with hemiparesis and able-bodied subjects completed tracking activities with the devices in different positions. We quantify motor performance across subject groups and across device platforms and muscle activation across devices at two positions in the arm workspace. Results: Trends in the assessment metrics were consistent across devices with able-bodied and high functioning strokes subjects being significantly more accurate and quicker in their motor performance than low functioning subjects. Muscle activation patterns were different for shoulder and elbow across different devices and locations. Conclusion: The Robot/CAMR suite has potential for stroke rehabilitation. By manipulating hardware and software variables, we can create personalized therapy environments that engage patients, address their therapy need, and track their progress. A larger longitudinal study is still needed to evaluate these systems in under-supervised environments such as the home.

Feb 25, 2007

Grand challenges proposed by the U.K. Computing Research Committee

Re-blogged from KurzweilAI.net 

 

Grand challenges proposed by the U.K. Computing Research Committee include a project to unify cognitive science, artificial intelligence, and robotics.

One sign of success would be a robot capable of functioning at the level of a 2- to 5-year-old child. Another milestone could be a robot capable of autonomously helping a disabled person around a house without explicit preprogramming about its environment.

Another challenge is intended to create more dependable computers and associated software systems, which oversee the bulk of the world's financial transactions, regulate life-saving instruments, and manage the delivery of products.

 


Read Original Article>>

Socially assistive robotics for post-stroke rehabilitation

Socially assistive robotics for post-stroke rehabilitation

By Maja J Matarić, Jon Eriksson, David J Feil-Seifer and Carolee J Winstein, Journal of NeuroEngineering and Rehabilitation

Background: Although there is a great deal of success in rehabilitative robotics applied to patient recovery post-stroke, most of the rehabilitation research to date has dealt with providing physical assistance. However, new studies support the theory that not all therapy need be hands-on. We describe a new area, called socially assistive robotics, that focuses on non-contact patient/user assistance. We demonstrate the approach with an implemented and tested post-stroke recovery robot and discuss its potential for effectiveness. Results: We describe a pilot study involving an autonomous assistive mobile robot that aids stroke patient rehabilitation by providing monitoring, encouragement, and reminders. The robot navigates autonomously, monitors the patient's arm activity, and helps the patient remember to follow a rehabilitation program. We also show preliminary results from a follow-up study that examined the role of robot physical embodiment in a rehabilitation context. Conclusions: Future experimental designs and factors that will be considered in order to develop effective socially assistive post-stroke rehabilitation robots are outlined and discussed.

 

Jan 21, 2007

Cognitive robotics

Via Mind Hacks

 

Memoirs of a Postgrad has an interesting analysis of cognitive robotics, the science of developing 'cognitive agents'.

 

Link 

 

Jan 15, 2007

Call for Articles for: Encyclopedia of Artificial Intelligence

Via Neurodudes

From the call for articles 

 

Editors:  Juan R. Rabuñal, Julián Dorado & Alejandro Pazos

 

Nature has always been a source of inspiration for scientific problem solving. Areas such as pharmacy, physics and aeronautics use biological concepts to reach beyond their current limits.

In Computing Science, and more specifically in Artificial Intelligence (AI), the use of biological concepts is a highly reliable way of achieving good results. In the early stages of AI (the 1950s), Artificial Neural Networks (ANNs), which became quite successful as classification and pattern-recognition systems, were developed using the structure of the nervous system as a basis. Since then, biology has inspired the development of other techniques, among which evolutionary systems are the most promising when dealing with new problems that involve vast amounts of data, such as biomedical computing or weather forecasting.

Both the techniques based on the behavior of cells and natural organisms and those based on evolutionary theories have a strong record of success when applied to real problems. They currently represent a very active area of research: not only do a large number of companies use them, but many high-level scientific congresses on these techniques are held annually.

Coverage
To meet this need, we are currently editing the "Encyclopedia of Artificial Intelligence", which will provide comprehensive coverage and definitions of the most important issues, concepts, trends and technologies in Artificial Intelligence. This important new publication will be distributed worldwide among academic and professional institutions and will be instrumental in providing researchers, scholars, students and professionals with access to the latest knowledge related to Artificial Intelligence techniques.

To ensure that this publication has the most current and relevant coverage of all topics related to Artificial Intelligence, we are asking scholars well known in their particular areas of research to contribute short articles of 1,500-3,500 words on any of the following topics.

Jan 10, 2007

Researchers Use Wikipedia To Make Computers Smarter

Via KurzweilAI.net

Using Wikipedia, Technion researchers have developed a way to give computers knowledge of the world to help them "think smarter," making common sense and broad-based connections between topics just as the human mind does.

 

Link
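The Technion approach (this appears to be Explicit Semantic Analysis) represents a text as a weighted vector over Wikipedia article "concepts" and measures relatedness by cosine similarity between those vectors. A toy sketch, with the concept weights invented for illustration:

```python
import math

# Sketch of Wikipedia-concept-based relatedness in the spirit of
# Explicit Semantic Analysis: texts become weighted vectors over
# Wikipedia article "concepts" and are compared by cosine similarity.

def cosine(u, v):
    """Cosine similarity between two sparse concept vectors (dicts)."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical concept vectors (Wikipedia article -> weight).
mouse_rodent = {"Mouse": 0.9, "Rodent": 0.7, "Cheese": 0.2}
mouse_device = {"Mouse": 0.9, "Computer": 0.8, "Cursor": 0.5}
keyboard     = {"Computer": 0.9, "Typing": 0.6, "Cursor": 0.3}

# The device sense of "mouse" shares concepts with "keyboard";
# the rodent sense shares none, so it scores lower.
```

Because the concepts come from Wikipedia articles rather than raw word overlap, two texts can be judged related even when they share no words, which is the "broad-based connections" the article describes.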

Jan 07, 2007

Scientists have designed and built an immersive table tennis simulation that allows a human to compete against a computer

Via KurzweilAI.net

Scientists have designed and built an immersive table tennis simulation that allows a human to compete against a computer.

Link

Dec 30, 2006

Evolved Virtual Creatures

Via Suicide Bots

From the Evolved Virtual Creatures website

This video shows results from a research project involving simulated Darwinian evolution of virtual block creatures. A population of several hundred creatures is created within a supercomputer, and each creature is tested for its ability to perform a given task, such as the ability to swim in a simulated water environment. Those that are most successful survive, and their virtual genes, which contain coded instructions for their growth, are copied, combined, and mutated to make offspring for a new population. The new creatures are again tested, and some may be improvements on their parents. As this cycle of variation and selection continues, creatures with more and more successful behaviors can emerge.

The creatures shown are results from many independent simulations in which they were selected for swimming, walking, jumping, following, and competing for control of a green cube.
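The variation-and-selection cycle described above can be sketched as a simple genetic algorithm; here the physics simulation is replaced by a stand-in fitness function, so only the evolutionary loop itself (selection, crossover, mutation) is faithful to the description:

```python
import random

random.seed(0)  # deterministic toy run

def fitness(genes):
    # Stand-in for "tested for its ability to perform a given task":
    # the closer every gene is to 1.0, the fitter the creature.
    return -sum((g - 1.0) ** 2 for g in genes)

def evolve(pop_size=50, n_genes=5, generations=40):
    # Random initial population of genomes.
    pop = [[random.uniform(-2.0, 2.0) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]               # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(n_genes)           # genes "combined"
            child = a[:cut] + b[cut:]
            child[random.randrange(n_genes)] += random.gauss(0.0, 0.1)  # mutated
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

After a few dozen generations of this cycle the best genome sits close to the optimum, mirroring how "creatures with more and more successful behaviors can emerge".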

Download movie from the Internet Archive

swimming

Dec 22, 2006

Recent trends in robot-assisted therapy environments

Recent trends in robot-assisted therapy environments to improve real-life functional performance of affected limbs.

J Neuroengineering Rehabil. 2006 Dec 18;3(1):29

Authors: Johnson MJ

ABSTRACT: Upper and lower limb robotic tools for neuro-rehabilitation are effective in reducing motor impairment, but they are limited in their ability to improve real-world function. There is a need to improve functional outcomes after robot-assisted therapy. Improvements in the effectiveness of these environments may be achieved by incorporating into their design and control strategies elements key to inducing motor learning and cerebral plasticity, such as mass practice, feedback, task engagement, and complex problem solving. This special issue presents nine articles. The novel strategies covered in this issue encourage more natural movements through the use of virtual reality and real objects, and faster motor learning through the use of error feedback to guide acquisition of natural movements that are salient to real activities. In addition, several articles describe novel systems and techniques that use custom and commercial games combined with new low-cost robot systems and a humanoid robot, which embodies the supervisory presence of the therapist, as possible solutions to exercise compliance in under-supervised environments such as the home.