
Aug 01, 2006

Oribotics

Via textually.org


Oribotics - by Matthew Gardiner - is the fusion of origami and 'bot' technology: robots and the intelligent software agents known as bots. The system was designed to create an intimate connection between the audience and the bots; a cross between gardening, messaging a friend, and commanding a robot. It was developed during an Australian Network for Art and Technology (ANAT) artists' lab, using Processing as the authoring tool and a mobile phone connected via a USB cable.
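The control loop described here - an authoring environment sending fold commands down a USB serial link - is easy to picture in code. The sketch below is only a guess at such a link using pyserial, not Gardiner's actual protocol; the port name, baud rate, and one-byte commands are all invented for illustration:

```python
import serial  # pyserial: pip install pyserial

PORT = "/dev/ttyUSB0"  # hypothetical USB port for the phone/bot link

def command_oribot(action):
    """Send a one-byte fold/unfold command and read the bot's reply."""
    with serial.Serial(PORT, 9600, timeout=1) as link:
        link.write(b"F" if action == "fold" else b"U")  # invented protocol
        return link.readline()  # e.g. a status line from the oribot

command_oribot("fold")    # ask the origami flower to close
command_oribot("unfold")  # ask it to bloom again
```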

"Ori-botics is a joining of two complex fields of study. Ori comes from the Japanese verb Oru literally meaning 'to fold'. Origami the Japanese word for paper folding, comes from the same root. Oribotics is origami that is controlled by robot technology; paper that will fold and unfold on command. An Oribot by definition is a folding bot. By this definition, anything that uses robotics/botics and folding together is an oribot. This includes some industrial applications already in existance, such as Miura's folds taken to space, and also includes my two latest works. Orimattic, and Oribotics." 


Jul 24, 2006

Robot Doppelgänger

Hiroshi Ishiguro, director of the Intelligent Robotics Lab at Osaka University in Japan, has created a robot clone of himself. The robot, dubbed “Geminoid HI-1”, looks and moves exactly like him, and sometimes takes his place in meetings and classes.



Link to Wired article on Ishiguro's android double.

See also the Geminoid videos 

Jul 19, 2006

BACS project


BACS (Bayesian Approach to Cognitive Systems) is an Integrated Project under the European Commission's 6th Framework Programme, with EUR 7.5 million in allocated funding.
 
The BACS project brings together researchers and commercial companies working on artificial perception systems potentially capable of dealing with complex tasks in everyday settings.
 
From the project's website:
 
Contemporary robots and other cognitive artifacts are not yet ready to autonomously operate in complex real world environments. One of the major reasons for this failure in creating cognitive situated systems is the difficulty in the handling of incomplete knowledge and uncertainty.
 

 

Taking inspiration from the brains of mammals, including humans, the BACS project will investigate and apply Bayesian models and approaches in order to develop artificial cognitive systems that can carry out complex tasks in real world environments. The Bayesian approach will be used to model different levels of brain function within a coherent framework, from neural functions up to complex behaviors. The Bayesian models will be validated and adapted as necessary against neuro-physiological data from rats and humans and through psychophysical experiments on humans.

The Bayesian approach will also be used to develop four artificial cognitive systems concerned with (i) autonomous navigation, (ii) multi-modal perception and reconstruction of the environment, (iii) semantic facial motion tracking, and (iv) human body motion recognition and behavior analysis. The research shall result in a consistent Bayesian framework offering enhanced tools for probabilistic reasoning in complex real world situations. Its performance will be demonstrated through applications to driver assistance systems and 3D mapping, both very complex real world tasks.
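The project's premise - that handling incomplete knowledge and uncertainty is the core obstacle - maps onto a concrete mechanism: Bayes' rule applied recursively, with a motion model and a sensor likelihood updating a belief over world states. As a minimal illustration (not BACS code; the corridor world, sensor probabilities, and function names are invented for the example), here is a discrete Bayes filter localizing a robot in a one-dimensional corridor:

```python
import numpy as np

# World: 10 cells in a loop, with doors at cells 1, 4 and 8.
# The robot senses only "door" or "wall", and both sensing and motion
# are noisy -- exactly the incomplete-knowledge setting described above.
N = 10
doors = np.zeros(N)
doors[[1, 4, 8]] = 1.0
belief = np.full(N, 1.0 / N)  # start fully uncertain

def motion_update(belief, p_move=0.8):
    """Robot tries to move one cell right but may slip and stay put."""
    return p_move * np.roll(belief, 1) + (1 - p_move) * belief

def sensor_update(belief, saw_door, p_correct=0.9):
    """Bayes' rule: posterior is likelihood times prior, normalized."""
    match = doors if saw_door else 1.0 - doors
    likelihood = np.where(match == 1.0, p_correct, 1.0 - p_correct)
    posterior = likelihood * belief
    return posterior / posterior.sum()

belief = sensor_update(belief, saw_door=True)   # robot sees a door
belief = motion_update(belief)                  # moves one cell right
belief = sensor_update(belief, saw_door=False)  # now sees a wall
print(belief.round(2))  # mass concentrates on cells 2, 5 and 9
```

BACS proposes using this same probabilistic machinery, scaled up and constrained by neurophysiological data, across all four of the listed systems.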

 

Jul 17, 2006

Computers learn common sense

Via The Engineer, July 11, 2006

BBN Technologies has been awarded $5.5 million in funding from the Defense Advanced Research Projects Agency (DARPA) for the first phase of "Integrated Learner," which will learn plans or processes after being shown a single example.

The goal is to combine specialised domain knowledge with common sense knowledge to create a reasoning system that learns as well as a person and can be applied to a variety of complex tasks. Such a system will significantly expand the kinds of tasks that a computer can learn.

Read the full article 

Jun 18, 2006

Rheo Knee

Via KurzweilAI.net

 

Researchers at MIT's Media Lab have developed a prosthetic "Rheo Knee" that uses AI to replicate the workings of a biological human joint, as well as "bio-hybrids," surgical implants that allow an amputee to control an artificial leg by thinking.

 

Jun 13, 2006

Robot with the human touch feels just like us


From: Times (UK)

A touch sensor developed to match the sensitivity of the human finger is set to herald the age of the robotic doctor.

Until now robots have been severely handicapped by their inability to feel objects with anything like the accuracy of their human creators. The very best are unable to beat the dexterity of the average six-year-old at tying a shoelace or building a house of cards.

But all that could change with the development by nanotechnologists of a device that can “feel” the shape of a coin down to the detail of the letters stamped on it. The ability to feel with at least the same degree of sensitivity as a human finger is crucial to the development of robots that can take on complicated tasks such as open heart surgery.


 

Read the full article


Jun 05, 2006

HUMANOIDS 2006


HUMANOIDS 2006 - Humanoid Companions

2006 IEEE-RAS International Conference on Humanoid Robots, December 4-6, 2006, University of Genova, Genoa, Italy.

From the conference website:

The 2006 IEEE-RAS International Conference on Humanoid Robots will be held on December 4 to 6, 2006 in Genova, Italy. The conference series started in Boston in the year 2000, traveled through Tokyo (2001), Karlsruhe/Munich (2003), Santa Monica (2004), and Tsukuba (2005) and will dock in Genoa in 2006.

The conference theme, Humanoid Companions, addresses specifically aspects of human-humanoid mutual understanding and co-development.

Papers as well as suggestions for tutorials and workshops from academic and industrial communities and government agencies are solicited in all areas of humanoid robots. Topics of interest include, but are not limited to:


* Design and control of full-body humanoids
* Anthropomorphism in robotics (theories, materials, structure, behaviors)
* Interaction between life-science and robotics
* Human - humanoid interaction, collaboration and cohabitation
* Advanced components for humanoids (materials, actuators, portable energy storage, etc)
* New materials for safe interaction and physical growth
* Tools, components and platforms for collaborative research
* Perceptual and motor learning
* Humanoid platforms for robot applications (civil, industrial, clinical)
* Cognition, learning and development in humanoid systems
* Software and hardware architectures for humanoid implementation

Important Dates
* June 1st, 2006 - Proposals for Tutorials/Workshops
* June 15th , 2006 - Submission of full-length papers
* Sept. 1st , 2006 - Notification of Paper Acceptance
* October 15th, 2006 - Submission of final camera-ready papers
* November 1st 2006 - Deadline for advance registration

Paper Submission
Submitted papers MUST BE in Portable Document Format (PDF). NO OTHER FORMATS WILL BE ACCEPTED. Papers must be written in English. Six (6) camera-ready pages, including figures and references, are allowed for each paper. Up to two (2) additional pages are allowed for a charge of 80 euros for each additional page.
Papers over 8 pages will NOT be reviewed/accepted.

Detailed instructions for paper submission and formatting can be found here

Exhibitions
There will be an exhibition site at the conference and promoters are encouraged to display state-of-the-art products and services in all areas of robotics and automation. Reservations for space and further information may be obtained from the Exhibits Chair and on the conference web site.

Video Submissions
Video submissions should present a documentary-style report on a piece of valuable work, relevant to the humanoids community as a whole.
Video submissions should be in .avi or MPEG-4 format and should not exceed 5 MB.

INQUIRIES:
Please contact the General Co-Chairs and the Program Co-Chairs at humanoids06@listes.epfl.ch

ORGANIZATION:

General Co-Chairs:
Giulio Sandini (U. Genoa, Italy)
Aude Billard (EPFL, Switzerland)

Program Co-Chairs:
Jun-Ho Oh (KAIST, Korea)
Giorgio Metta (University of Genoa, Italy)
Stefan Schaal (University of Southern California, USA)
Atsuo Takanishi (Waseda University, Japan)

Tutorials/Workshops Co-Chairs:
Rüdiger Dillmann (University of Karlsruhe, Germany)
Alois Knoll (TUM, Germany)

Exhibition Co-Chairs:
Cecilia Laschi (Scuola Superiore S. Anna, Pisa, Italy)
Matteo Brunnettini (U. Genoa, Italy)

Honorary Chairs:
George Bekey (USC, USA)
Hirochika Inoue (JSPS, Japan)
Friedrich Pfeiffer (TU Munich, Germany)

Local Arrangements Co-Chairs:
Giorgio Cannata (U. Genoa, Italy)
Rezia Molfino (U. Genoa and SIRI, Italy)


May 20, 2006

The NEW TIES project

Via Cognews 

With funding from the European Commission's Future and Emerging Technologies (FET) initiative of the IST programme, five European research institutes are collaborating on the NEW TIES project to create a thoroughly 21st-century brave new world - one populated by randomly generated software beings, capable of developing their own language and culture.

From the project's website:

The project is concerned with emergence and complexity in socially-inspired artificial systems. We will study large systems consisting of an environment and an inhabitant population. The main goal of the project is to realize an evolving artificial society capable of exploring the environment and developing its own image of this environment and the society through cooperation and interaction. We will work with virtual grid worlds and will set up environments that are sufficiently complex and demanding that communication and cooperation are necessary to adapt to the given tasks.

The population's weaponry to develop advanced skills bottom-up consists of individual learning, evolutionary learning, and social learning. One of the main innovations of this project is social learning, interpreted as passing knowledge explicitly via a language to others in the same generation. This has a synergetic effect on the learning processes and enables the society to rapidly develop an "understanding" of the world collectively. If the learning process stabilises, the collective must have formed an appropriate world map. Then we will probe the collective mind to learn how the agents perceive the environment, including themselves, and what skills and procedures they have developed to adapt successfully. This could yield new knowledge and surprising perspectives about the environment and the survival task.

The project represents a significant scale-up beyond the state-of-the-art in two dimensions: the inner complexity of inhabitants and the size of the population. To achieve and explore highly complex organisms and behaviours, very large populations will be studied. This will make the system at the macro level complex enough to allow significant behaviours (cultures etc.) to emerge in separate parts of the system and to interact. To enable this we will set up a large distributed computing infrastructure, and a shared platform to allow very large scale experiments in a p2p fashion.
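The three learning modes the excerpt names (individual, evolutionary, and social) compose naturally in simulation. The toy sketch below is not NEW TIES code; the "food value" world, the agent class, and all parameters are invented to show how the three mechanisms interlock:

```python
import random

FOODS = ["red", "green", "blue"]
TRUE_VALUE = {"red": 1.0, "green": -0.5, "blue": 0.3}  # hidden environment

class Agent:
    def __init__(self, beliefs=None):
        # The agent's private "world map": estimated value per food type.
        self.beliefs = dict(beliefs) if beliefs else {f: 0.0 for f in FOODS}

    def individual_learning(self, lr=0.2):
        """Taste a random food; nudge the estimate toward the outcome."""
        f = random.choice(FOODS)
        self.beliefs[f] += lr * (TRUE_VALUE[f] - self.beliefs[f])

    def social_learning(self, listener):
        """Tell a same-generation peer one belief; the peer averages it in."""
        f = random.choice(FOODS)
        listener.beliefs[f] = 0.5 * (listener.beliefs[f] + self.beliefs[f])

def fitness(agent):
    # Environment-side selection: how closely the map matches reality.
    return -sum(abs(agent.beliefs[f] - TRUE_VALUE[f]) for f in FOODS)

def evolutionary_learning(population):
    """Fitter half survives; offspring inherit their parents' beliefs."""
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 2]
    return survivors + [Agent(a.beliefs) for a in survivors]

pop = [Agent() for _ in range(20)]
for generation in range(10):
    for a in pop:
        a.individual_learning()
        a.social_learning(random.choice(pop))
    pop = evolutionary_learning(pop)

print(max(fitness(a) for a in pop))  # approaches 0 as beliefs converge
```

In this toy, social learning spreads whatever individuals discover across the same generation, which is the synergy the excerpt claims for explicit knowledge-passing.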

May 17, 2006

Robot of the year award

Via Pink Tentacle (source: Yomiuri Shimbun)

The Ministry of Economy, Trade and Industry (METI) plans to establish an annual Robot of the Year Award to recognize outstanding robots developed and put into practical use each year. In addition to the grand prize, prizes will be awarded to robots in the following categories: (1) industrial robots, such as those used in painting and welding; (2) service robots, such as those used in cleaning and security; (3) robots for use in special environments, such as rescue robots; and (4) robots developed by small and medium-sized venture firms.

May 13, 2006

Female android developed by Japanese team

Japanese scientists have unveiled the most human-looking robot yet - a "female" android named Repliee Q1Expo.

 

 

Watch the video of the android woman

May 10, 2006

PaPeRo

Via Pink Tentacle 

NEC has recently announced a technology that allows the PaPeRo household robot to connect with a variety of personal devices. The technology provides PaPeRo with a digital avatar that "follows" you to the device of your choice, where it appears on the screen and interacts with you.

 

VIDEO of PaPeRo (QuickTime, 1.24 MB)

May 07, 2006

Shake hands with the avatar

Via WMMNA 

U-Tsu-Shi-O-Mi: The Virtual Humanoid You Can Reach is a system that synchronizes a humanoid robot with a virtual avatar, allowing users to shake hands with the avatar.

The project will be presented in the Emerging Technologies section of the upcoming SIGGRAPH 2006.

 
According to developers, the system is a first step towards digital-media content that merges humanoid robots and mixed reality.
 


Cognitive Computing

Nicolas Nova quotes an interview with Dharmendra Modha, chair of the Almaden Institute at IBM San Jose and IBM's lead for cognitive computing, who describes a language shift from the previously so-called "Artificial Intelligence" to "Cognitive Computing":

Q: Why use the term “cognitive computing” rather than the better-known “artificial intelligence”?

A: The rough idea is to use the brain as a metaphor for the computer. The mind is a collection of cognitive processes—perception, language, memory, and eventually intelligence and consciousness. The mind arises from the brain. The brain is a machine—it’s biological hardware.

Cognitive computing is less about engineering the mind than it is the reverse engineering of the brain. We’d like to get close to the algorithm that the human brain [itself has]. If a program is not biologically feasible, it’s not consistent with the brain.

May 02, 2006

BabyBot

From the BabyBot project website:

 

 

The Babybot is the LIRA-Lab humanoid robot. The latest version has eighteen degrees of freedom distributed among the head, arm, torso, and hand. The head and hand were custom designed at the lab. The arm is an off-the-shelf small PUMA manipulator mounted on a rotating torso. The Babybot's sensory system is composed of a pair of cameras with space-variant resolution, two microphones each mounted inside an external ear, a set of three gyroscopes mimicking the human vestibular system, positional encoders at each joint, a torque/force sensor at the wrist, and tactile sensors at the fingertips and the palm.
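"Cameras with space-variant resolution" ordinarily means log-polar sampling: dense sampling at the center of the visual field (a fovea) that falls off exponentially toward the periphery, as in the primate retina. The numpy sketch below illustrates the sampling scheme only; it is not LIRA-Lab's implementation, and the function name and parameters are invented:

```python
import numpy as np

def log_polar_sample(img, n_rings=32, n_wedges=64, r_min=2.0):
    """Sample a grayscale image on a log-polar grid centered on the image.

    Ring radii grow exponentially, so resolution is highest at the
    center (the 'fovea') and coarsest at the periphery.
    """
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx) - 1
    # Exponentially spaced radii, evenly spaced angles.
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    angles = np.linspace(0, 2 * np.pi, n_wedges, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip((cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip((cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return img[ys, xs]          # shape: (n_rings, n_wedges)

img = np.random.rand(240, 320)  # stand-in for a camera frame
cortical = log_polar_sample(img)
print(cortical.shape)           # (32, 64): far fewer samples than 240x320
```

The payoff is a drastic reduction in pixels to process while keeping full acuity where the robot is looking, which pairs naturally with the saccade and vergence control described below.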

The one you see in the picture above is the latest realization of the Babybot, a project started in 1996 at LIRA-Lab. The hardware itself went through many revisions, so not much remains of the mechanics of the first Babybot besides the PUMA arm.

Our scientific goal is to uncover the functioning of the brain by building physical models of neural control and cognitive structures. By physical models we mean embodied artificial systems that interact freely with a largely unconstrained environment. Our approach also derives from studies of human sensorimotor and cognitive development, with the aim of investigating whether a developmental approach to building intelligent systems may offer new insight into aspects of human behavior and new tools for the implementation of complex artificial systems.

Examples of the behaviors we have implemented include (but are not limited to) the control of eye movements such as vergence, saccades, and the vestibulo-ocular reflex. We have been working on the integration of different sensory modalities, for example vestibular and visual cues, or acoustic perception with vision. We also implemented reaching as a means of physically interacting with the external environment to discover the properties of objects.

May 01, 2006

"Shrug-detecting" software recognizes your disinterest

Via Engadget

A team of computer vision researchers at the University of Illinois has developed "Shrug-detecting" technology that gauges a viewer's level of confusion or disinterest. The system works by allowing a webcam-equipped computer to pick up on the "relative fast movement of the shoulder toward the face". The researchers hope that the Shrug-detector will soon be complemented with real-time blink, hand-motion and facial expression detectors.
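The cue itself is simple to formalize: the vertical gap between face and shoulder landmarks shrinks quickly during a shrug. A minimal sketch of that rule follows; the landmark inputs, thresholds, and function name are assumptions for illustration, not the Illinois team's method:

```python
import numpy as np

def detect_shrugs(face_y, shoulder_y, fps=30.0,
                  min_rise=0.04, max_duration=0.5):
    """Flag frames where the shoulder moves quickly toward the face.

    face_y, shoulder_y: per-frame vertical landmark positions from any
    tracker (normalized image coordinates, y increasing downward).
    A 'shrug' is a drop of at least min_rise in the face-shoulder gap
    within max_duration seconds.
    """
    gap = np.asarray(shoulder_y) - np.asarray(face_y)  # shoulder below face
    window = max(1, int(max_duration * fps))
    return [t for t in range(window, len(gap))
            if gap[t - window] - gap[t] >= min_rise]

# Synthetic trace: the shoulder rises toward the face around frame 60.
frames = 120
face = np.full(frames, 0.30)
shoulder = np.full(frames, 0.55)
shoulder[60:75] -= np.linspace(0, 0.06, 15)  # quick upward shoulder motion
print(detect_shrugs(face, shoulder))         # flags frames near the rise
```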

More technical information about the Shrug-detector can be found in this paper (PDF)

Apr 25, 2006

Humanlike robots

Via the Presence Listserv

Japan boasts the most advanced humanoid robots in the world, represented by Honda's Asimo and other bipedal machines. They are expected to eventually pitch in as the workforce shrinks amid the dwindling and aging population. But why build a robot with pigmented silicone skin, smooth gestures and even makeup? To Repliee's creator, Hiroshi Ishiguro, the answer is simple: "Android science."

Read the full story from Scientific American



Apr 20, 2006

An android for enhancing social skills and emotion recognition in people with autism

An android for enhancing social skills and emotion recognition in people with autism.

IEEE Trans Neural Syst Rehabil Eng. 2005 Dec;13(4):507-15

Authors: Pioggia G, Igliozzi R, Ferro M, Ahluwalia A, Muratori F, De Rossi D

It is well documented that the processing of social and emotional information is impaired in people with autism. Recent studies have shown that individuals, particularly those with high functioning autism, can learn to cope with common social situations if they are made to enact possible scenarios they may encounter in real life during therapy. The main aim of this work is to describe an interactive life-like facial display (FACE) and a supporting therapeutic protocol that will enable us to verify if the system can help children with autism to learn, identify, interpret, and use emotional information and extend these skills in a socially appropriate, flexible, and adaptive context. The therapeutic setup consists of a specially equipped room in which the subject, under the supervision of a therapist, can interact with FACE. The android display and associated control system has automatic facial tracking, expression recognition, and eye tracking. The treatment scheme is based on a series of therapist-guided sessions in which a patient communicates with FACE through an interactive console. Preliminary data regarding the exposure to FACE of two children are reported.

Apr 18, 2006

The Guardian: Now the bionic man is real

Via VRoot

From the article: The 1970s gave us the six-million-dollar man. Thirty years and quite a bit of inflation later we have the six-billion-dollar human: not a physical cyborg as such, instead an umbrella term for the latest developments in the growing field of technology for human enhancement.

Helping the blind to see again, being able to carry enormous loads without the prospect of backache and a prosthetic robotic hand that works (almost) like a real one were some of the ideas presented at a recent meeting of engineers, physicists, biologists and computer scientists organised by the American Association of Anatomists...

Read full article 

WMMNA: Children 'bond with robots'

Via We Make Money Not Art

Researchers from Sony Intelligence Dynamics Laboratories and a nursery school in San Diego are conducting an experiment that focuses on how children can develop emotions toward robots. Results of this research could be used to develop smarter and friendlier humanoid robots, with a huge commercial potential.

 

Apr 12, 2006

Computer simulations of the mind

Scientific American Mind has a free online article about computer simulations of the mind. The article analyzes how recent technological advances are narrowing the gap between human brains and circuitry.