
Jan 18, 2005

Simulating Human Touch

FROM THE PRESENCE-L LISTSERV:

From InformIT.com

Haptics: The Technology of Simulating Human Touch

Date: Jan 14, 2005
By Laurie Rowell.

When haptics research — that is, the technology of touch — moves from theory into hardware and software, it concentrates on two areas: tactile human-computer interfaces and devices that can mimic human physical touch. In both cases, that means focusing on artificial hands. Here you can delve into futuristic projects on simulating touch.


At a lunch table some time back, I listened to several of my colleagues eagerly describing the robots that would make their lives easier. Typical was the servo arm mounted on a sliding rod in the laundry room. It plucked dirty clothes from the hamper one at a time. Using information from the bar code—which new laws would insist be sewn into every label—the waldo would sort these items into a top, middle, or lower nylon sack.

As soon as a sack was full of, say, permanent press or delicates, the hand would tip the contents into the washing machine. In this way, garments could be shepherded through the entire cycle until the shirts were hung on a nearby rack, socks were matched and pulled together, and pajamas were patted smooth and stacked on the counter.

Sounds like a great idea, right? I mean, how hard could it be for a robotic hand to feel its way around a collar until it connects with a label? As it turns out, that's pretty tricky. In fact, one of the things keeping us from the robotic servants we feel sure are our due, and from virtual reality that lets us ski without risking a broken leg, is our limited knowledge of touch.

We understand quite a bit about how humans see and hear, and much of that information has been tested and refined by our interaction with computers over the past several years. But if we are going to get VR that really lets us practice our parasailing, the reality that we know has to be mapped and synthesized and presented to our touch so that it is effectively "fooled." And if we want androids that can sort the laundry, they have to be able to mimic the human tactile interface.

That leads us to the study of haptics, the technology of touch.

Research that explores the tactile intersection of humans and computers can be pretty theoretical, particularly when it veers into the realm of psychophysics. Psychophysics is the branch of experimental psychology that deals with the physical environment and our perception of that environment. Researchers in the field try, through experimentation, to pin down parameters such as sensory thresholds: the boundaries at which a signal becomes perceptible.
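
To make "sensory threshold" a little more concrete, here is a minimal sketch of one common way such a parameter can be estimated: an adaptive one-up/one-down staircase, with a simulated observer standing in for a human subject. The numbers and the observer model are invented purely for illustration; they are not drawn from any study described here.

    import random

    def simulated_observer(stimulus, true_threshold=0.30):
        """Stand-in for a human subject: more likely to detect the
        stimulus as its intensity rises above a hidden true threshold."""
        return stimulus + random.gauss(0, 0.05) > true_threshold

    def staircase(start=1.0, step=0.05, trials=60):
        """One-up/one-down staircase: lower the intensity after a detection,
        raise it after a miss, so the level hovers near ~50% detection."""
        level, history = start, []
        for _ in range(trials):
            level += -step if simulated_observer(level) else step
            level = max(level, 0.0)
            history.append(level)
        settled = history[trials // 2:]   # ignore the early descent
        return sum(settled) / len(settled)

    print("Estimated detection threshold:", round(staircase(), 3))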

But once haptics research moves from theory into hardware and software, it concentrates on two primary areas of endeavor: tactile human-computer interfaces and devices that can mimic human physical touch, most specifically and most commonly artificial hands.

Substitute Hands

A lot of information can be conveyed by the human hand.
Watching The Quiet Man the other night, I was struck by the scene in which the priest, played by Ward Bond, insists that Victor McLaglen shake hands with John Wayne. Angrily, McLaglen complies, but clearly the pressure he exerts far exceeds the requirements of the gesture. Both men are visibly "not wincing" as the Duke drawls, "I never could stand a flabby handshake myself."

When they release and back away from each other, the audience is left flexing its collective fingers in response.

In this particular exchange, complex social messages are presented to audience members, who recognize the indicators of pressure, position, and grip without being involved in the tactile cycle. Expecting mechanical hands to do all that ours can is a tall order, so researchers have been inching toward it for a long time by making them do just some of those things.

Teleoperators, for example, are distance-controlled robotic arms and hands that were first built to touch things too hot for humans to handle—specifically, the radioactive materials of the nuclear laboratories that grew out of the Manhattan Project, beginning in the late 1940s.

Operators had to be shielded from the radiation by a thick protective wall, yet the radioactive material itself had to be shaped with careful precision. A remote-controlled servo arm seemed like the perfect solution.

Accordingly, two identical mechanical arms were stationed on either side of a 1m-thick quartz window. The joints of one were connected to the joints of the other by means of pulleys and steel ribbons. In other words, whatever an operator made the arm do on one side of the barrier was echoed by the device on the other side.

These were effective and useful instruments, allowing the operator to move toxic substances from a remote location, but they were "dumb." They offered no electronic control and were not linked to a computer.

Researchers working on this problem today concentrate instead on devices that can "feel" the density, shape, and character of materials that may be miles away, seen only on a computer screen. This kind of teleoperator depends on a haptic interface and requires some understanding of how touch works.
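
In control terms this is usually described as bilateral teleoperation: the operator's motion drives the remote arm, and forces sensed at the remote end are reflected back to the operator's hand. The toy, one-dimensional sketch below illustrates the idea; the gains, the unit-mass arm, and the simulated wall are assumptions for illustration, not details of any device mentioned here.

    # Toy 1-D bilateral teleoperation: the slave arm tracks the master's
    # position, and contact forces at the slave are reflected back.
    K_P, K_D = 50.0, 5.0      # assumed tracking gains (spring / damper)
    K_WALL = 200.0            # assumed stiffness of a wall at x = 1.0
    DT = 0.01                 # control-loop time step, seconds

    def slave_force(x_slave):
        """Environment force on the slave: a stiff wall at x = 1.0."""
        return -K_WALL * (x_slave - 1.0) if x_slave > 1.0 else 0.0

    x_slave, v_slave = 0.0, 0.0
    for step in range(300):
        x_master = 0.005 * step            # operator slowly pushes forward
        f_env = slave_force(x_slave)
        # Slave (unit mass) tracks the master while the wall pushes back.
        a = K_P * (x_master - x_slave) - K_D * v_slave + f_env
        v_slave += a * DT
        x_slave += v_slave * DT
        f_feedback = -f_env                # force reflected to the operator
        if step % 60 == 0:
            print(f"t={step*DT:4.2f}s  master={x_master:5.2f}  "
                  f"slave={x_slave:5.2f}  feedback={f_feedback:6.2f}")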

Worlds in Your Hand

To build a mechanical eye—say, a camera—you need to study optics. To build a receiver, you need to understand acoustics and how sound interacts with the human ear. Similarly, if you expect to build an artificial hand—or even a finger that perceives tactile sensation—you need to understand skin biomechanics.

At the MIT Touch Lab, where numerous projects in the realm of haptics are running at any given time, one project seeks to mimic the skin sensitivity of the primate fingertip as closely as possible, concentrating on having it react to touch as the human finger would.

The research is painstaking and exacting, involving, for example, precise friction and compressibility measurements of the fingerpads of human subjects. The way a fingertip dents and bends in response to edges, corners, and surfaces has provided additional data. At the same time, magnetic resonance imaging (MRI) and high-frequency ultrasound show how the skin behaves physically in response to these stimuli.

Not satisfied with the close-ups that they could get from available devices, the team developed a new tool, the Ultrasound Backscatter Microscope (UBM), which shows the papillary ridges of the fingertip and the layers of skin underneath in far greater detail than an MRI.

As researchers test the reactions of human and monkey participants to various surfaces, the data they gather is mapped onto emerging 2D and 3D fingertip models. At this MIT project and elsewhere, human and robot tactile sensing is simulated by means of an array of mechanosensors embedded in some medium that can be pushed, pressed, or bent.
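
To picture what an array of mechanosensors in a deformable medium might look like in software, here is a minimal sketch of a grid of pressure sensors responding to a probe pressed into an elastic pad. The grid size, spacing, and the Gaussian deformation model are invented for illustration and are far simpler than real skin mechanics.

    import math

    GRID = 8            # assumed 8 x 8 sensor array under the "skin"
    SPACING = 1.0       # assumed sensor spacing, arbitrary units

    def sensor_readings(probe_x, probe_y, depth, spread=1.5):
        """Rough model: the indentation spreads out as a Gaussian bump,
        and each sensor reports the local deformation above it."""
        readings = []
        for i in range(GRID):
            row = []
            for j in range(GRID):
                dx, dy = i * SPACING - probe_x, j * SPACING - probe_y
                row.append(depth * math.exp(-(dx * dx + dy * dy) / (2 * spread ** 2)))
            readings.append(row)
        return readings

    # Press a probe 0.5 units deep near the middle of the pad.
    for row in sensor_readings(3.5, 3.5, 0.5):
        print(" ".join(f"{r:4.2f}" for r in row))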

In the Realm of Illusion

Touch might well be the most basic of human senses, its complex messages easily understood and analyzed even by the crib and pacifier set. But what sets it apart from other senses is its dual communication conduit, allowing us to send information by the same route through which we perceive it. In other words, those same fingers that acknowledge your receipt of a handshake send data on their own.

In one project a few years back, Peter J. Berkelman and Ralph L. Hollis began stretching reality in all sorts of bizarre ways. Not only could humans using their device touch things that weren't there, but they could reach into a three-dimensional landscape and, guided by the images appearing on a computer screen, move those objects around.

This was all done with a device built at the lab based on Lorentz force magnetic levitation (the Lorentz force is the force exerted on a charged particle in an electromagnetic field). The design depended upon a magnetic tool levitated, or suspended over a surface, by means of electromagnetic coils.
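In symbols, the Lorentz force on a charge q moving with velocity v through an electric field E and a magnetic field B is F = q(E + v × B); in a device of this kind, it is the magnetic part of that force, acting on current-carrying coils, that does the levitating.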

To understand the design of this maglev device, imagine a mixing bowl with a joystick bar in the middle. Now imagine that the knob of the joystick floats barely above the stick, with six degrees of freedom. Coils, magnet assemblies, and sensor assemblies fill the basin, while a rubber ring makes the top comfortable for a human operator to rest a wrist. This whole business is set in the top of a desk-high metal box that holds the power supplies, amplifiers, and control processors.

Looking at objects on a computer screen, a human being could take hold of the levitated tool and try to manipulate the objects as they were displayed. Force-feedback data from the tool itself provided tactile information for holding, turning, and moving the virtual objects.

What might not be obvious from this description is that this model offered a marvel of economy, replacing the bulk of previous systems with an input device that had only one moving part. Holding the tool—or perhaps pushing at it with a finger—the operator could "feel" the cube seen on the computer screen: edges, corners, ridges, and flat surfaces. With practice, operators could use the feedback data to maneuver a virtual peg into a virtual hole with unnerving reliability.
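
One way to picture how the tool "feels" a virtual surface is what haptics researchers often call penalty-based force rendering: when the tool's position penetrates a virtual object, the system pushes back with a spring-like force proportional to the penetration depth. The sketch below illustrates that idea for an axis-aligned cube; the stiffness value and cube dimensions are assumptions for illustration, not parameters of the device described above.

    # Penalty-based force rendering for an axis-aligned virtual cube:
    # push back along the axis of shallowest penetration.
    STIFFNESS = 500.0                 # assumed surface stiffness, N per unit
    CUBE_MIN, CUBE_MAX = -0.5, 0.5    # assumed unit cube centered at the origin

    def render_force(tool):
        """Return the (fx, fy, fz) force to send to the haptic tool."""
        x, y, z = tool
        inside = all(CUBE_MIN < c < CUBE_MAX for c in (x, y, z))
        if not inside:
            return (0.0, 0.0, 0.0)    # free space: no force
        # Penetration depth toward the nearest face along each axis.
        depths = [min(c - CUBE_MIN, CUBE_MAX - c) for c in (x, y, z)]
        axis = depths.index(min(depths))
        force = [0.0, 0.0, 0.0]
        # Push outward along that axis, proportional to penetration depth.
        sign = 1.0 if tool[axis] >= 0 else -1.0
        force[axis] = sign * STIFFNESS * min(depths)
        return tuple(force)

    print(render_force((0.0, 0.45, 0.0)))   # just inside the top face
    print(render_force((0.0, 0.60, 0.0)))   # outside the cube: no force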

Notice something here: An operator could receive tactile impressions of a virtual object projected on a screen. In other words, our perception of reality was starting to be seriously messed around with here.

HUI, Not GUI

Some of the most interesting work in understanding touch has been done to compensate for hearing, visual, or tactile impairments.

At Stanford, the TalkingGlove was designed to support individuals with hearing limitations. It recognized American Sign Language finger spelling to generate text on a screen or synthesize speech. The device applied a neural-net algorithm to map the movements of the human hand, captured by an instrumented glove, to a digital output. It was so successful that it spawned a commercial application in the Virtex Cyberglove, which was later purchased by Immersion and became simply the Cyberglove. Current uses include virtual reality, biomechanics, and animation.
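
The mapping step (glove sensor readings in, a finger-spelled letter out) can be illustrated with a deliberately simple stand-in for the neural network: a nearest-template classifier over made-up joint-angle vectors. Everything below, from the three-sensor glove to the angle values, is invented for illustration; the real glove has many more sensors and the real system used a trained network.

    import math

    # Invented "template" joint-angle vectors (degrees) for three letters,
    # as if read from a three-sensor glove. Purely illustrative values.
    TEMPLATES = {
        "A": (80.0, 85.0, 10.0),
        "B": (5.0, 5.0, 5.0),
        "C": (40.0, 45.0, 35.0),
    }

    def classify(reading):
        """Return the template letter nearest to the glove reading."""
        def distance(template):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(reading, template)))
        return min(TEMPLATES, key=lambda letter: distance(TEMPLATES[letter]))

    print(classify((78.0, 90.0, 12.0)))   # -> "A"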

At Lund University in Sweden, work is being done in providing haptic interfaces for those with impaired vision. Visually impaired computer users have long had access to Braille displays or devices that provide synthesized speech, but these just give text, not graphics, something that can be pretty frustrating for those working in a visual medium like the Web. Haptic interfaces offer an alternative, allowing the user to feel shapes and textures that could approximate a graphical user interface.

At Stanford, this took shape in the 1990s as the "Moose," an experimental haptic mouse that gave new meaning to the terms drag and drop, allowing the user to feel a pull to suggest one and then feel the sudden loss of mass to signify the other. As users approached the edge of a window, they could feel the groove; a check box repelled or attracted, depending on whether it was checked. Some of the time, experimental speech synthesizers were used to "read" the text.
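
One way to picture how a window edge becomes a groove, or a check box attracts and repels, is as a mapping from screen geometry to small forces on the mouse. The snippet below is a toy version of such a mapping; the geometry, the gains, and the choice of which state attracts are guesses at the flavor of the interaction, not the Moose's actual force model.

    # Toy mapping from GUI geometry to haptic-mouse forces (1-D, x only).
    WINDOW_EDGE_X = 300.0     # assumed edge position, pixels
    CHECKBOX_X = 150.0        # assumed check-box center, pixels
    GROOVE_WIDTH, GROOVE_GAIN = 8.0, 0.6
    BOX_RANGE, BOX_GAIN = 20.0, 0.3

    def mouse_force(x, box_checked):
        """Force (arbitrary units) pushing the cursor left (-) or right (+)."""
        force = 0.0
        # Groove at the window edge: pull the cursor toward the edge line.
        if abs(x - WINDOW_EDGE_X) < GROOVE_WIDTH:
            force += GROOVE_GAIN * (WINDOW_EDGE_X - x)
        # Check box: attract when unchecked, repel when checked.
        if abs(x - CHECKBOX_X) < BOX_RANGE:
            direction = (CHECKBOX_X - x) if not box_checked else (x - CHECKBOX_X)
            force += BOX_GAIN * direction
        return force

    print(mouse_force(296.0, box_checked=False))   # pulled toward the edge groove
    print(mouse_force(158.0, box_checked=True))    # pushed away from a checked box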

Such research has led to the subsequent development of commercial haptic devices, such as the Logitech iFeel Mouse, offering the promise of new avenues into virtual worlds for the visually impaired.

Where Is This Taking Us?

How far has all this research taken us toward virtual reality and actual robot design? Immersion and other companies offer a variety of VR gadgets emerging from the study of haptics, but genuine simulated humans are still pretty far out on the horizon. What we have is a number of researchers around the globe working on perfecting robotic hands, trying to make them not only hold things securely, but also send and receive messages as our own do. Here is a representative sampling:

The BarrettHand BH8-262: Originally developed by Barrett Technology for NASA but now available commercially, it offers a three-fingered grasper with four degrees of freedom, embedded intelligence, and the ability to hold on to any geometric shape from any angle.

The Anatomically Correct Testbed (ACT) Hand: A project at Carnegie Mellon's Robotics Institute, this is an ambitious effort to create a synthetic human hand for several purposes. These include having the hand function as a teleoperator or prosthetic, as an investigative tool for examining complex neural control of human hand movement, and as a model for surgeons working on damaged human hands. Still in its early stages, the project has created an actuated index finger that mimics human muscle behavior.

Cyberhand: This collaboration of researchers and developers from Italy, Spain, Germany, and Denmark proposes to create a prosthetic hand that connects to remaining nerve tissue. It will use one set of electrodes to record and translate motor signals from the brain, and a second to pick up and conduct sensory signals from the artificial hand to nerves of the arm for transport through regular channels to the brain.

Research does not produce the products we'll be seeing in common use during the next few years. It produces their predecessors. But many of the scientists in these labs later create marketable devices. Keep an eye on these guys; they are the ones responsible for the world we'll be living in, the one with bionic replacement parts, robotic housekeepers, and gym equipment that will let us fly through virtual skies.
