Jan 18, 2005

Simulating Human Touch

FROM THE PRESENCE-L LISTSERV:

From InformIT.com

Haptics: The Technology of Simulating Human Touch

Date: Jan 14, 2005
By Laurie Rowell.

When haptics research — that is, the technology of touch — moves from theory into hardware and software, it concentrates on two areas: tactile human-computer interfaces and devices that can mimic human physical touch. In both cases, that means focusing on artificial hands. Here you can delve into futuristic projects on simulating touch.


At a lunch table some time back, I listened to several of my colleagues eagerly describing the robots that would make their lives easier. Typical was the servo arm mounted on a sliding rod in the laundry room. It plucked dirty clothes from the hamper one at a time. Using information from the bar code—which new laws would insist be sewn into every label—the waldo would sort these items into a top, middle, or lower nylon sack.

As soon as a sack was full of, say, permanent press or delicates, the hand would tip the contents into the washing machine. In this way, garments could be shepherded through the entire cycle until the shirts were hung on a nearby rack, socks were matched and pulled together, and pajamas were patted smooth and stacked on the counter.

Sounds like a great idea, right? I mean, how hard could it be for a robotic hand to feel its way around a collar until it connects with a label? As it turns out, that's pretty tricky. In fact, one of the things that keeps us from the robotic servants we feel sure are our due, and from virtual reality that lets us ski without risking a broken leg, is our limited knowledge of touch.

We understand quite a bit about how humans see and hear, and much of that information has been tested and refined by our interaction with computers over the past several years. But if we are going to get VR that really lets us practice our parasailing, the reality that we know has to be mapped and synthesized and presented to our touch so that it is effectively "fooled." And if we want androids that can sort the laundry, they have to be able to mimic the human tactile interface.

That leads us to the study of haptics, the technology of touch.

Research that explores the tactile intersection of humans and computers can be pretty theoretical, particularly when it veers into the realm of psychophysics. Psychophysics is the branch of experimental psychology that deals with the physical environment and the reactive perception of that environment.
Researchers in the field try, through experimentation, to determine parameters such as sensory thresholds for signal perception and the boundaries of what we can perceive.

But once haptics research moves from theory into hardware and software, it concentrates on two primary areas of endeavor:
tactile human-computer interfaces and devices that can mimic human physical touch, most specifically and most commonly artificial hands.

Substitute Hands

A lot of information can be conveyed by the human hand.
Watching The Quiet Man the other night, I was struck by the scene in which the priest, played by Ward Bond, insists that Victor McLaglen shake hands with John Wayne. Angrily, McLaglen complies, but clearly the pressure he exerts far exceeds the requirements of the gesture. Both men are visibly "not wincing" as the Duke drawls, "I never could stand a flabby handshake myself."

When they release and back away from each other, the audience is left flexing its collective fingers in response.

In this particular exchange, complex social messages are presented to audience members, who recognize the indicators of pressure, position, and grip without being involved in the tactile cycle. Expecting mechanical hands to do all that ours can is a tall order, so researchers have been inching that way for a long time by making them do just some of the things ours can.

Teleoperators, for example, are distance-controlled robotic arms and hands that were first built to touch things too hot for humans to handle—specifically, radioactive substances in the nuclear programs that grew out of the Manhattan Project.

While operators had to be protected from radiation by a protective wall, the radioactive material itself had to be shaped with careful precision. A remote-controlled servo arm seemed like the perfect solution.

Accordingly, two identical mechanical arms were stationed on either side of a 1m-thick quartz window. The joints of one were connected to the joints of the other by means of pulleys and steel ribbons. In other words, whatever an operator made the arm do on one side of the barrier was echoed by the device on the other side.

These were effective and useful instruments, allowing the operator to move toxic substances from a remote location, but they were "dumb." They offered no electronic control and were not linked to a computer.

Modern researchers working on this problem would be concentrating now on devices that could "feel" the density, shape, and character of the materials that were perhaps miles away, seen only on a computer screen. This kind of teleoperator depends on a haptic interface and requires some understanding of how touch works.
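As a rough illustration of the mechanism, a computer-mediated teleoperator can be sketched as a loop that forwards the operator's joint positions to the remote arm and returns the remote arm's resisting torques as force feedback. The gains, update rule, and function names below are purely illustrative, not taken from any real system:

```python
# Hypothetical sketch of a bilateral ("force-reflecting") teleoperator loop.
# The master arm's joint angles are copied to the remote slave arm; the
# restoring torques at the slave are fed back so the operator can "feel"
# the remote material. All names and gains are invented for illustration.

def slave_step(target_angles, slave_angles, stiffness=2.0):
    """Move each slave joint toward the commanded angle; the restoring
    torque doubles as an estimate of the force felt remotely."""
    torques = []
    new_angles = []
    for cmd, cur in zip(target_angles, slave_angles):
        torque = stiffness * (cmd - cur)       # proportional control
        new_angles.append(cur + 0.5 * torque)  # crude integration step
        torques.append(torque)
    return new_angles, torques

# One session: the master pose is sent out repeatedly, the slave tracks it,
# and torques come back each cycle.
master = [0.4, -0.2, 1.0]
slave = [0.0, 0.0, 0.0]
for _ in range(20):
    slave, feedback = slave_step(master, slave)

# With no obstacle, the slave converges on the master pose and the
# feedback torques drop toward zero.
print([round(a, 3) for a in slave])
```

In a real force-reflecting system the feedback torques would drive motors on the master arm, so an obstacle at the remote site would physically resist the operator's hand.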

Worlds in Your Hand

To build a mechanical eye—say, a camera—you need to study optics. To build a receiver, you need to understand acoustics and how these work with the human ear. Similarly, if you expect to build an artificial hand—or even a finger that perceives tactile sensation—you need to understand skin biomechanics.

At the MIT Touch Lab, where numerous projects in the realm of haptics are running at any given time, one project seeks to mimic the skin sensitivity of the primate fingertip as closely as possible, concentrating on having it react to touch as the human finger would.

The research is painstaking and exacting, involving, for example, precise friction and compressibility measurements of the fingerpads of human subjects. Fingertip dents and bends in response to edges, corners, and surfaces have provided additional data. At the same time, magnetic resonance imaging
(MRI) and high-frequency ultrasound show how skin behaves in response to these stimuli on the physical plane.

Not satisfied with the close-ups that they could get from available devices, the team developed a new tool, the Ultrasound Backscatter Microscope (UBM), which shows the papillary ridges of the fingertip and the layers of skin underneath in far greater detail than an MRI.

As researchers test reactions to surfaces from human and monkey participants, the data they gather is mapped and recorded to emerging 2D and 3D fingertip models. At this MIT project and elsewhere, human and robot tactile sensing is simulated by means of an array of mechanosensors presented in some medium that can be pushed, pressed, or bent.

In the Realm of Illusion

Touch might well be the most basic of human senses, its complex messages easily understood and analyzed even by the crib and pacifier set. But what sets it apart from other senses is its dual communication conduit, allowing us to send information by the same route through which we perceive it. In other words, those same fingers that acknowledge your receipt of a handshake send data on their own.

In one project a few years back, Peter J. Berkelman and Ralph L.
Hollis began stretching reality in all sorts of bizarre ways. Not only could humans using their device touch things that weren't there, but they could reach into a three-dimensional landscape and, guided by the images appearing on a computer screen, move those objects around.

This was all done with a device built at the lab based on Lorentz force magnetic levitation (the Lorentz force is the force exerted on a charged particle in an electromagnetic field). The design depended upon a magnetic tool levitated or suspended over a surface by means of electromagnetic coils.
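For reference, the Lorentz force on a charge q moving with velocity v through electric field E and magnetic field B is F = q(E + v × B). A minimal plain-Python sketch, with purely illustrative values:

```python
# The Lorentz force on a charge q moving with velocity v through electric
# field E and magnetic field B: F = q * (E + v x B).

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def lorentz_force(q, E, v, B):
    vxB = cross(v, B)
    return tuple(q * (e + c) for e, c in zip(E, vxB))

# A positive charge moving along +x through a magnetic field along +z
# is pushed along -y (right-hand rule); no electric field here.
F = lorentz_force(1.0, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
print(F)  # (0.0, -1.0, 0.0)
```

The maglev device exploits exactly this coupling: currents in the basin's coils exert controllable forces on the magnet assemblies of the floating tool.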

To understand the design of this maglev device, imagine a mixing bowl with a joystick bar in the middle. Now imagine that the knob of the joystick floats barely above the stick, with six degrees of freedom. Coils, magnet assemblies, and sensor assemblies fill the basin, while a rubber ring makes the top comfortable for a human operator to rest a wrist. This whole business is set in the top of a desk-high metal box that holds the power supplies, amplifiers, and control processors.

Looking at objects on a computer screen, a human being could take hold of the levitated tool and try to manipulate the objects as they were displayed. Force-feedback data from the tool itself provided tactile information for holding, turning, and moving the virtual objects.

What might not be obvious from this description is that this model offered a marvel of economy, replacing the bulk of previous systems with an input device that had only one moving part.
Holding the tool—or perhaps pushing at it with a finger—the operator could "feel" the cube seen on the computer screen:
edges, corners, ridges, and flat surfaces. With practice, operators could use the feedback data to maneuver a virtual peg into a virtual hole with unnerving reliability.
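Feeling the edges and faces of a virtual cube is commonly implemented with penalty-based haptic rendering: whenever the tool tip penetrates the surface, the device pushes back with a force proportional to penetration depth. The sketch below shows that standard textbook idea, not the specific controller Berkelman and Hollis used; the stiffness constant and geometry are made up:

```python
# Penalty-based haptic rendering of an axis-aligned cube: if the tool tip
# is inside the cube, push it back out through the nearest face, with
# force proportional to penetration depth.

def cube_force(tip, center, half, k=200.0):
    """Return the (fx, fy, fz) restoring force for a tool tip position."""
    pens = []
    for i in range(3):
        d = tip[i] - center[i]
        pen = half - abs(d)          # > 0 means inside along this axis
        if pen <= 0:
            return (0.0, 0.0, 0.0)   # outside the cube: no force
        pens.append((pen, i, 1.0 if d >= 0 else -1.0))
    # Push out along the axis of shallowest penetration (nearest face).
    depth, axis, sign = min(pens)
    force = [0.0, 0.0, 0.0]
    force[axis] = sign * k * depth
    return tuple(force)

# Tip just under the +x face of a unit cube at the origin: force points +x.
print(tuple(round(f, 6) for f in
            cube_force((0.45, 0.0, 0.0), (0.0, 0.0, 0.0), 0.5)))
```

Run at a high update rate, this simple spring model is what makes a flat virtual surface feel stiff and an edge feel sharp under the operator's hand.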

Notice something here: An operator could receive tactile impressions of a virtual object projected on a screen. In other words, our perception of reality was starting to be seriously messed around with here.

HUI, Not GUI

Some of the most interesting work in understanding touch has been done to compensate for hearing, visual, or tactile impairments.

At Stanford, the TalkingGlove was designed to support individuals with hearing limitations. It recognized American Sign Language finger spelling to generate text on a screen or synthesize speech. The device applied a neural-net algorithm to map the movements of a hand wearing an instrumented glove to digital output. It was so successful that it spawned a commercial application in the Virtex Cyberglove, which was later purchased by Immersion and became simply the Cyberglove. Current uses include virtual reality, biomechanics, and animation.
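The glove-to-letter mapping can be illustrated with a much simpler stand-in for the neural net: nearest-neighbor matching of joint-angle readings against stored per-letter templates. Every template and reading below is invented for illustration; none of it is real ASL or glove data:

```python
# Classify a vector of (made-up) finger-flex readings by nearest neighbor
# against stored per-letter templates. A stand-in for the TalkingGlove's
# neural-net mapping, chosen only because it fits in a few lines.

import math

# Hypothetical templates: one flex value per finger, 0 = straight, 1 = curled.
TEMPLATES = {
    "A": [1.0, 1.0, 1.0, 1.0, 0.2],   # fist, thumb alongside
    "B": [0.0, 0.0, 0.0, 0.0, 0.9],   # flat hand, thumb tucked
    "C": [0.5, 0.5, 0.5, 0.5, 0.5],   # curved hand
}

def classify(reading):
    """Return the letter whose template is closest (Euclidean) to the reading."""
    def dist(letter):
        return math.dist(reading, TEMPLATES[letter])
    return min(TEMPLATES, key=dist)

print(classify([0.9, 1.0, 0.95, 1.0, 0.3]))  # closest to the "A" template
```

A trained network does the same job more robustly, interpolating between noisy hand shapes instead of snapping to the single nearest template.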

At Lund University in Sweden, work is being done in providing haptic interfaces for those with impaired vision. Visually impaired computer users have long had access to Braille displays or devices that provide synthesized speech, but these just give text, not graphics, something that can be pretty frustrating for those working in a visual medium like the Web. Haptic interfaces offer an alternative, allowing the user to feel shapes and textures that could approximate a graphical user interface.

At Stanford, this took shape in the 1990s as the "Moose," an experimental haptic mouse that gave new meaning to the terms drag and drop, allowing the user to feel a pull to suggest one and then feel the sudden loss of mass to signify the other. As users approached the edge of a window, they could feel the groove; a check box repelled or attracted, depending on whether it was checked. Some of the time, experimental speech synthesizers were used to "read" the text.

Such research has led to subsequent development of commercial haptic devices, such as the Logitech iFeel Mouse, offering the promise of new avenues into virtual worlds for the visually impaired.

Where Is This Taking Us?

How far has all this research taken us toward virtual reality and actual robot design? Immersion and other companies offer a variety of VR gadgets emerging from the study of haptics, but genuine simulated humans are still pretty far out on the horizon. What we have is a number of researchers around the globe working on perfecting robotic hands, trying to make them not only hold things securely, but also send and receive messages as our own do. Here is a representative sampling:

The BarrettHand BH8-262: Originally developed by Barrett Technology for NASA but now available commercially, it offers a three-fingered grasper with four degrees of freedom, embedded intelligence, and the ability to hold on to any geometric shape from any angle.

The Anatomically Correct Testbed (ACT) Hand: A project at Carnegie Mellon's Robotics Institute, this is an ambitious effort to create a synthetic human hand for several purposes. These include having the hand function as a teleoperator or prosthetic, as an investigative tool for examining complex neural control of human hand movement, and as a model for surgeons working on damaged human hands. Still in its early stages, the project has created an actuated index finger that mimics human muscle behavior.

Cyberhand: This collaboration of researchers and developers from Italy, Spain, Germany, and Denmark proposes to create a prosthetic hand that connects to remaining nerve tissue. It will use one set of electrodes to record and translate motor signals from the brain, and a second to pick up and conduct sensory signals from the artificial hand to nerves of the arm for transport through regular channels to the brain.

Research does not produce the products we'll be seeing in common use during the next few years. It produces their predecessors. But many of the scientists in these labs later create marketable devices. Keep an eye on these guys; they are the ones responsible for the world we'll be living in, the one with bionic replacement parts, robotic housekeepers, and gym equipment that will let us fly through virtual skies.

Jan 10, 2005

Experience Design: Erik Davis' vision

Experience Design And the Design of Experience

by Erik Davis

This piece appeared in Arcadia: Writings on Theology and Technology (Australia, 2001)


There is no creation ex nihilo. We always work from pre-existing material, both literal substances (wood, a language, the resonance of strings and reeds) and the existing cultural organization of those materials within history, tradition, and contemporary networks of influence. So as we survey the expanding and converging landscape of electronic, virtual, and immersive production, we might ask ourselves: what material is being worked here? Is it simply new organizations of photons, sound waves, and haptic cues? Or does the "holistic" fusion of different media and the construction of more immersive technologies actually suggest another, perhaps more fundamental material?

I’d wager that the new material is indeed rather fundamental: human experience itself. Of course, "human experience" is a vague and historically loaded concept, and a thorough hashing out of the term would require, at the very least, lengthy excursions into Jamesian pragmatism, psychobiology, and Buddhist phenomenology. But for the moment let’s just think of human experience as the phenomenal unfolding of awareness in real time, a movement which tugs against the network of concepts and significations while tending toward the condition of more direct sensation or intuitive perception. In other words, experience may not be able to escape the prisonhouse of language, but it willingly sticks its nose out the barred window and inhales.

Many semiotic and structuralist arguments suggest that we are creatures of language, that nothing, neither sensations nor intuitions, escapes the domain of signs. But one can just as easily argue that everything that arises in consciousness is experience — that memory, analysis, and reflection all arise in the phenomenal stream, the loops and twists, of James’ "stream of consciousness." As a compromise between these two positions, imagine that you possess an analog Consciousness slider that runs from the nearly totally linguistic on one side to the pure intensities of sensation on the other. While avoiding a strict divide, your thought gizmo allows you to make a gradual though clear distinction between meaning and sensation.

Take the example of a rollercoaster. Obviously roller-coasters exist within a network of symbolic associations. But the act of subjectively submitting your bodymind to a rollercoaster ride, and undergoing the resulting thrills of adrenaline, fear, and gut-fluttering sensation, cannot be directly assimilated to the network of significations that constitute the meaning of rollercoasters. The same point can be made about recreational drugs. Obviously the drug experience is mediated by culture, by expectations, rituals, and social stories about the meaning and value of certain compounds. But we are sticking our heads in the sand if we insist that the evident physiological changes induced by drugs do not correspond with real psychological changes, not only in the content of subjective experience (its images, pleasure fluxes and meanings), but in the more fundamental cognitive parameters which structure experience in the first place.

At the same time, these non-semiotic elements of experience can also function as passageways to new regimes of signs. Fans of psychedelics often find themselves plunging into incoherent or abstract deformations of perception and sensation, only to "break through" into very strange, but nonetheless solid and coherent, worlds of meaning. Immersive works of art or entertainment are also rarely content to simply produce a new range of sensations. Instead, they often function as portals into "other worlds." Following a similar development, the rollercoaster grows into themed adventure rides like Universal Studios’ Back to the Future attraction. These other worlds, of course, are composed of the same sorts of signs that make up our shared human construct — indeed, they often detach those signs from conventional reality, recombining or morphing them within the more malleable zone of the virtual. But in order to successfully boot up these new semiotic universes within a user’s consciousness, the media technology must directly engage the machinery of human perception (proprioception, 3D audio, etc.) on a "subliminal" level. In other words, immersive worlds are constructed on a platform of rejiggered experience.

And so we enter the era of what I’m calling Experience Design. A quick scan of our sociocultural landscape suggests that, in terms of artistic practices, mass entertainment, sports, and emerging technologies of pleasure, productive forces are increasingly targeting experience itself — that evanescent flux of sensation and perception that is, in some sense, all we have and all we are.

Let’s begin with the rise of the so-called "experience economy." On one level, this describes an apparent shift within the consumption patterns of the younger, more technologically savvy elite, a shift away from the hoarding of material goods and status symbols to the hoarding of novel, exciting, and challenging experiences. (Dennis Tito’s $20 million space holiday on Mir is the paragon here). The experience economy of the super-rich also dovetails with broader cultural trends, including the dramatic intensification of tourism over the last few decades — a process which offers us increasingly specialized, adventurous, and exotic packages (guzzling ayahuasca with Peruvian shamans, caving in Belize, visiting real live monks in Bhutan). We have also seen a heightened interest in technologically-mediated outdoor activities like rock climbing or wind surfing, along with the rise of "extreme sports," which have little to do with sports as contest and much to do with the production of subjective intensity. The extreme example here is the bungee jump, which requires neither skill nor exertion beyond the passive willingness to undergo the death-defying neurotransmitter and adrenaline rush that hits the nervous system.

The turn towards increasingly raw experience also marks a number of developments within media and entertainment, including the often-remarked descent to the lowest common denominator of sex, violence and the gross-out stunts of MTV’s Jackass ("Don’t try this at home!"). Over the last ten or fifteen years we have also seen the rise of a new kind of film, one which features amazing special effects, but which otherwise sucks. Whether or not we judge such films to be good, or even worthwhile, depends on how much we accept the new regime of special effects as a semi-autonomous component of cinema whose art is largely devoted to stimulating immediate sensations and visceral — rather than symbolic or narrative — emotion. A similar logic comes to the fore in many computer games and mass applications of virtual reality technologies in amusement parks and arcades, all of which strive for the quality of "immersion" — which is often just another word for simulated experience. Meanwhile, the language of "experience" has become thoroughly integrated into multimedia design, even in the relatively low-bandwidth tricks and offerings that commercial websites use to capture sticky eyeballs.

***

In the musical sphere, we can see a similar shift in the rise of raves, where intense lights, sounds, and projection screens combine to create a visceral, often collective experience of intensity and atmosphere. These effects (and affects) differ markedly from the more traditional identifications available through even the most Dionysian rock concerts. Another crucial ingredient to the rave experience is drugs. As Simon Reynolds has argued, many elements of electronic dance music, including non-sonic elements like light sticks and Vicks Vap-o-rub inhalers, emerged because of the particular effects they produce in a suitably tweaked mind. In fact, psychoactive drugs are in some ways the ultimate "technology of experience," and establish a basic model for Experience Design. The rapid advances in psycho-pharmacology, both corporate and underground, have given rise to a flood of consciousness-modifying substances which promise to both suppress unwanted dimensions of human experience (depression, anxiety) and open up novel spaces of perceptual and cognitive effects to immediate exploration.

Obviously, a startlingly broad range of phenomena can be placed under the umbrella of Experience Design, and such breadth is often suspicious. But despite the real problems that such a sweeping generalization glosses over, it seems crucial to recognize and emphasize the continuity, rather than the divergence, between contemporary practices that target the human sensorium. Across the fields of art, architecture, media, music, pharmacology, even spirituality, we are moving towards the intentional and multi-dimensional stimulation and production of a complex range of increasingly immediate human responses, including the direct induction of classic "altered states of consciousness." These responses extend far beyond (and below) the traditional object of communication: the conscious human subject conceived as a rational agent and a reader of meanings. In other words, as science, pharmacology, and media technology deepen their understanding of how the human nervous system joins with the ever mercurial psyche to produce a lived sense of reality, these knowledges are becoming integrated into the engines of cultural production. To put it crudely, our cultural technologies are becoming less like books and songs, and more like rollercoasters or drugs.

Media art, with its connections to both alternative culture and critical theory, is curiously arrayed when it comes to Experience Design. On the one hand, its theoretical savvy frees it from the traditional notion of the subject as an autonomous agent of meaning, while also increasing the willingness to play with the construction of subjectivity. Yet many artists, writers and theorists remain queasy about the more technoscientific knowledges and practices involved in understanding and producing human subjectivity within the increasingly technical domain of psychology. This reflexively critical response to technoscientific discourses is mirrored in a similarly pervasive set of responses to spiritual or ritual discourses, which also fundamentally engage experience and the production of altered states of consciousness. While artists are becoming more overtly concerned with bioscience and spirituality alike, both of these flinches continue to result in art which stresses critical distance over the direct mobilization of effects and novel zones of becoming. This distance is valuable, but insufficient. However legitimate, skepticism and distrust of hard psychology, neural science, and psychobiology should not cut the (post)humanistic world off from direct engagement with the proliferating technologies of subjectivity.

For one thing, many other sectors of society are perfectly happy to employ these same tools to far more chilling ends. Advertising and marketing are only the most obvious examples on what I would call the right wing of Experience Design. ("Black magic" would perhaps be a more appropriate term, but I will leave the occult dimension of Experience Design aside for now). Here the target is often demonstrably irrational: an instinctive, un-self-aware subject whose inchoate fears and desires are organized around commodities or institutions. Though one must always beware of excessive fears over "subliminal" advertising, mallrats with sensitive noses will also recognize the pine and spice scents pumped into malls around Christmas time. Some slot machines are now equipped with high-tech smell emitters because certain scents have proven to keep individuals at the machines longer. Whether or not these cues are culturally determined is beside the point. What’s important is that at the moment, these stimulants aim for a technical zone of influence below "propaganda," which is still linked explicitly to a field of meanings. Instead they directly attack the limbic system, drawing the subject into a deeper, more immersive activity. As the West embarks on a Shadow War against terrorism, the tools of propaganda and psychic management alike can only proliferate.

***

By embracing the tools of Experience Design, media artists have and can continue to critique our expanding technosphere while also probing its capacity for beauty, pleasure, and novel perceptions -- even wisdom. Artists are uniquely placed to interrogate the production of technological experience, and to question the dominant experiences which are being engineered and renormalized by massive commercial engines of subjectivity. But this critical function must be coupled with experiment, with the willingness to creatively participate in the larger cultural process of re-engineering subjectivity, of pushing the envelope of experience. This is not necessarily a matter of becoming high tech -- relatively low-tech artists like Gary Hill and Bill Viola have made great strides in this direction. But it is a matter of directly engaging, not simply the new technologies, but the underlying technical "material" of subjectivity itself.

Finally, I believe this turn towards experience, in art and technology, is related to the growing embrace of the discourses and practices of spirituality. Whether it is defined or encountered within the context of faith traditions or not, "spirituality" largely emphasizes the use of subtle "psychological" techniques and practices to open up and transform our existential, personal, subjective encounter with the world and the self. At its best, the global turn towards meditation, yoga, healing prayer, trance dancing, and practices of loving-kindness reflects a search for a higher tone of experience itself, not a hunger for new consoling beliefs. The secular spirituality of self-help books, brain machines, and leadership seminars can also be seen as a species of "Experience Design" in that it emphasizes changing, or reprogramming, your direct experience of your self in the world. However, here the underlying intentions — which in some sense make or break spiritual aspiration — often leave much to be desired.

Media artists are uniquely placed to explore this emerging world of spirituality without falling into the dogmatic or New Age traps that swallow up so many true believers. Altered states of consciousness are real, and as our media technologies get better at drawing us in and out of them, artists and other non-coercive proponents of the human spirit (or whatever you want to call it) need to become familiar with these states, not simply as a source of inspiration, but as modes of expression, communication, and confrontation itself. By recognizing that the material that we are now focused on is not technology but human experience itself, then we take a step closer to that strange plateau where our inner lives unfold into an almost collective surface of shared sensation and reframed perception — a surface on which we may feel exposed and vulnerable, but beginning to awake.

EMagin Z800 3D Visor

The eMagin Z800 3D Visor is the first product to deliver on the promise of an immersive 3D computing experience. Anyone can now surround themselves with the visual data they need, without the limits of traditional displays, all in complete privacy. Now gamers can play "virtually inside" their games, personally immersed in the action. PC users can experience and work with their data in a borderless environment.

360 degree panoramic view
Two high-contrast eMagin SVGA 3D OLED Microdisplays deliver fluid full-motion video in more than 16.7 million colors. Driving the user’s experience is the highly responsive head-tracking system that provides a full 360-degree angle of view. eMagin’s specially developed optics deliver a bright, crisp image.

Weighing less than 8 oz, the eMagin Z800 3D Visor is compact and comfortable. While the eMagin OLED displays are only 0.59 inch diagonal, the picture is big – the equivalent of a 105-inch movie screen viewed at 12 feet.

Only eMagin OLED displays provide brilliant, rich colors in full 3D with no flicker and no screen smear. eMagin’s patented OLED-on-silicon technology enhances the inherently fast refresh rates of OLED materials with on-chip signal processing and data buffering at each pixel site. This enables each pixel to continuously emit only the colors it is programmed to show. Full-color data is buffered under every pixel built into each display, providing flicker-free stereovision capability.


The Z800’s head-tracking system enables users to “see” their data in full 3D surround viewing with just a turn of the head. Virtual multiple monitors can also be simulated. Designers, publishers and engineers can view multiple drawings and renderings as if they were each laid out on an artist’s table, even in 3D. The eMagin Z800 3D Visor integrates state-of-the-art audio with high-fidelity stereo sound and a built-in noise-canceling microphone system to complete the immersive experience.

* Brilliant 3D stereovision with hi-fi sound for an immersive experience
* Superb high-contrast OLED displays delivering more than 16.7 million colors
* Advanced 360 degree head-tracking that takes you “inside” the game
* Comfortable, lightweight, USB-powered visor; PC compatible

Latest Technologies presented at CES 2005, Las Vegas

Brand New Head Mounted Display by 3D visor
http://www.3dvisor.com/

MP4 Players 1
http://www.gwaytech.com.tw/

MP4 Players 2
http://www.ktipromo.com/home.aspx


MP4 Players 3
http://www.iperris.com/

Play videogames to get fit

From WIRED

LAS VEGAS -- Consumer electronics companies commonly cater to couch potatoes by pitching bigger television sets, more mesmerizing video games and remote controls that can place online orders for pizza. But a small cadre of entrepreneurs at the world's largest technology exposition hope their gizmos make you work up a sweat.

Company executives insist that "exergaming" or "exertainment" -- the marriage of physical exercise and video gaming -- is becoming a hot new niche, and the most bullish aficionados say it might even help reduce the nation's obesity epidemic.

The PlayZone was tucked into a back corner of a tent outside the main convention center, far from the gargantuan exhibits by Samsung, Sony, Panasonic and other popular brands.

Although scents reminiscent of a gym sometimes wafted out of the zone, the jam-packed area was popular with retailers and analysts. Six exhibitors -- many startups new to CES -- showed off digital putting greens, optical sensors in miniature dance floors, biofeedback devices and cutting-edge workout contraptions.

One race car simulation contraption -- Kilowatt Sport from Laurel, Maryland startup Powergrid Fitness -- looked similar to a NordicTrack cross-country ski machine hooked up to a wide-screen plasma television. Moving the hand controls while trying to stand up straight on the $800 machine requires extensive flexing of the muscles in the arms, back, abdominal area and thighs.

But most of the PlayZone devices, often played on PlayStations and Xboxes, didn't feel like exercise at all -- exactly what many exertainment companies like to hear.

"The most common question I get is, 'How is this exercise? I just don't see how this is a workout,'" said Abigail Whitting, customer support manager for Kilowatt, which won a CES innovation award. "But it will tone you. It is a workout."

Some exertainment executives say their gizmos can help trim the nation's expanding waistlines -- especially among children, who might be tricked into working out if they think they're merely playing a video game.

According to the Centers for Disease Control and Prevention, 16 percent of boys and 14.5 percent of girls ages 6 to 11 were obese in 1999 and 2000, the latest years studied. That compares with 4.3 percent of boys and 3.6 percent of girls from 1971 to 1974. A sedentary lifestyle was a big contributor to the increase, the CDC said.

"If anything can get your kids off the couch, this is it," said Shawn Clement, North American sales manager for Electric-Spin, the Canadian maker of the $250 Golf LaunchPad. "The whole idea is to get physical, not get lazy."

LaunchPad includes a small putting green with optical sensors within the turf and a tethered, regulation-weight ball that players knock off a standard tee. Players use their own clubs.

Its software includes a swing analysis that measures the ball's speed, curve path and other statistics based on the club's trajectory. Serious players can disconnect the tether and use a real ball at an outdoor course, then get real-time analysis of each swing from a laptop computer.

"This is a great way to promote activity," Clement said. "It's not just your average video game."

But medical experts are skeptical. Although they applaud manufacturers for getting people off the couch, they caution against relying on technology alone to slim and tone the record number of out-of-shape Americans. They say individuals, communities, private industry and governments should work together to tackle the problem.

"These video games are certainly helpful but they're not going to solve the obesity epidemic because it's simply too overwhelming," said Frank Hu, a professor of nutrition and epidemiology at Harvard.

Hu authored a study published last month of 116,500 women, finding that people who were physically active but obese were almost twice as likely to die as those who were both active and lean. The Harvard report contradicted a popular notion that exercise alone -- regardless of weight or diet -- is enough to maintain a healthy lifestyle.

But experts' pessimism didn't dampen enthusiasm of Jason Enos, product manager for Konami Digital Entertainment, who soaked through his T-shirt after hours of demonstrating his company's smash hit, Dance Dance Revolution. Players tap their feet to the correct circle on a floor pad, based on cues on the screen.

Advanced levels require fancy footwork, but players work up a sweat even on the easiest level. Players may enter height and weight to determine calories burned per minute, and they may compete against 15 other people worldwide.
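A calories-per-minute figure like the one described above is typically derived from standard exercise-physiology formulas rather than direct measurement. Here is a hedged sketch using the common MET equation (kcal/min = MET × 3.5 × weight in kg / 200); the per-difficulty MET values and all names are illustrative assumptions, not Konami's actual implementation.

```python
# Hedged sketch of a calories-burned-per-minute estimate using the standard
# MET formula. MET values per difficulty are illustrative assumptions only.

def calories_per_minute(weight_kg, met):
    """Estimated energy expenditure in kcal/min for a given MET level."""
    return met * 3.5 * weight_kg / 200.0

# Assumed intensity levels for a dance game (illustrative only).
DIFFICULTY_MET = {"beginner": 4.0, "standard": 6.0, "heavy": 8.0}

kcal = calories_per_minute(weight_kg=70.0, met=DIFFICULTY_MET["standard"])
print(f"{kcal:.2f} kcal/min")
```

A heavier player or a harder difficulty level raises the estimate linearly, which matches the game asking for height and weight up front.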

Since December 2003, the Japanese company has sold more than 2 million copies of the game -- a teen phenomenon at Japanese and American arcades since the late '90s -- for Sony's PlayStation systems. The software and plastic floor mat sell for $60.

"It's definitely a workout, and it's not nearly as boring as a stationary bike," said Enos, wiping sweat from his brow. "It breaks the mold of the passive video game genre."


Jan 09, 2005

Presence Research in Europe: Economic and Social Prospects


PREP 05


March 17 - 18, 2005
Hanover, Germany

Virtual and Augmented Reality applications are diffusing into more
and more realms of business, research, and everyday life. Making
people feel "present" in mediated environments is the goal of
many new technologies. But do we exploit the capacities of those
"Presence" technologies to their full extent? What do the R & D
departments know that has not been adopted in applied
contexts? What do businesses and governments expect from
future VR and AR research? PREP 05 is the forum to discuss
these questions at a European level.

PREP 05 brings together leading researchers, industry
representatives and policy makers who are working on
development, applications and knowledge dissemination in the
domain of Presence. The concept of Presence is one of the
major research topics in the VR and AR communities, and the
European Commission is very actively supporting international
projects on Presence. PREP 05 is the 6th biannual gathering of
those projects, which are organized under the "Future Emerging
Technologies" Section of the Directorate General "Information
Society".

PREP 05 addresses the visions of how to use "Presence"
technologies for commercial success and the public good.
Reflecting the broad range of domains in which Presence is of
major relevance, PREP 05 will feature expert presentations on
VR-based business communication, tele-medicine, e-learning,
and entertainment. Moreover, invited speakers will introduce
European political visions on the economic and social prospects
of Presence research.

Experts from media technology corporations, e-government
projects, applied research centers, academic institutions, and
organizations who actually use VR / AR systems are invited to
join PREP 05, to share their ideas on the future of the field, and
to learn about the visions of leading experts in the field. PREP 05
will take place immediately after CeBIT, the world's leading IT
exhibition, and provides an excellent environment for networking
and exchange of ideas.

We cordially invite you to join us for PREP 2005 and contribute
your expertise and visions to the fields of VR, AR, and
"Presence" technologies.

For further information, please visit http://www.prep2005.info/.


Conference Chair

Professor Peter Vorderer, Ph.D.
Annenberg School for Communication
University of Southern California
Los Angeles, USA

Jan 06, 2005

CRC-Clinical Cyberpsychology New Investigator Award

For a presentation of outstanding research quality

at the CyberTherapy 2005 conference

The aim of this prize is to reward the presentation of methodologically strong
studies at the CyberTherapy conference. The recipient must be a researcher
who is new to the field of cyberpsychology. The award is open to both oral
and poster presentations and to researchers from all countries and disciplines.

For more information about the CyberTherapy conference and deadlines for
posters and oral presentations, see
http://www.interactivemediainstitute.com/conference2005.org/index.htm .

The award is delivered by Stéphane Bouchard, chairholder of the Canada
Research Chair in Clinical Cyberpsychology. It includes a certificate and a
check for US$1,000.


Rules of attribution:

The first criterion to assess the submissions is the scientific merit of the
study. Rigorous designs, reliable measurements, adequate sample sizes,
appropriate statistical analyses and strong control conditions are all
significant assets. The scientific quality of the content of the presentation
will also contribute (i.e., clear and replicable descriptions of the
methodology), but not the graphic quality of the presentation (e.g., nice
images or videos).

The recipient must be the first author of the presentation, have the oral or
poster accepted for the conference, submit the presentation in time for the
award, pay the registration dues, and personally attend the conference. His
or her intellectual contribution must be significant (e.g., not the main
results of a funded study planned and conducted by a senior researcher in the
field but presented at the conference by a student). In case of doubt, please
explain the situation in writing when submitting your presentation for the
award.

A new investigator in the field is defined as someone meeting one of the
criteria below. The applicant must state in writing which criteria are met
when sending the submission:

- currently a university student;

- currently a post-doc researcher;

- currently an autonomous researcher but not having published as first
author more than five peer-reviewed papers on VR or cyberpsychology since the
end of the Ph.D. or the post-doc;

- currently an autonomous researcher but not having received more than
two major research grants on VR or cyberpsychology.

The content of the presentation must be submitted no later than two weeks
ahead of the conference to stephane.bouchard@uqo.ca. Evaluation of the
submissions will be conducted during the days prior to the conference, based
only on this document. The recipient will be announced publicly during the
conference.

All reviews will be made by Stéphane Bouchard, except for presentations
involving collaborators of the Chair (e.g., students, co-investigators). The
best submissions will be selected and rank-ordered by Stéphane Bouchard
according to their scientific merit. Submissions from collaborators of the
Chair will be reviewed by an independent reviewer (this year, Brenda
Wiederhold), compared with the rank-ordered submissions, and placed
accordingly in the ranking. To maximise impartiality, the decision of the
independent reviewer will be final.

If you have any questions about this award, please contact Stéphane Bouchard
by e-mail or by phone (819-595-3900, ext. 2360).

The creation of this award is made possible by a Canada Research Chair (CRC)
grant (www.chairs.gc.ca) awarded to Stéphane Bouchard for the CRC in Clinical
Cyberpsychology. It is not awarded by the CRC program or by the Cybertherapy
organisation committee.

Eyetop Multimedia: Mobile Virtual Reality at hand!

From Eyetop web-site

The EYETOP DVD personal video entertainment system gives you the freedom to watch your favorite DVDs while you are commuting, waiting for a plane, or just relaxing in your easy chair. The high-quality private image is like having your own personal home theatre on the go. Imagine watching a favorite video while at the same time being able to move about: a walkabout video system!

Relaxing at home, commuting, flying, or even riding in a car*: you can now watch your favorite movie anytime! Just wear and watch! Eyetop DVD is the first video player that you can use as simply as your favorite portable audio player: lightweight, fully portable and easily storable. Just choose your favorite movie, put your eyetop glasses on anywhere, and enjoy!

The system comes preconnected and assembled in a ready-to-use format; just add batteries and presto: your own private video entertainment system.

Eyetop DVD is first of all a DVD player as small as a CD player. Compact and powerful, it can read almost any disc format: DVD+R/RW, DVD-R/RW, MP3 and audio CDs, and even Kodak Photo CDs. With 4 hours of battery life, you can watch 2 movies without recharging!

The EYETOP video glasses and DVD player may also be used separately. The video glasses accept any standard video input from a camcorder, video game console, VCR or other source. Just think how much more exciting video gaming can become wearing EYETOP video glasses. You can also plug the DVD player directly into the composite video input of your TV set (yellow plug) and turn it into a miniature tabletop multiformat DVD player.

The video image in the EYETOP video glasses is quickly optimized by easy vertical and horizontal adjustments. And there is a focus control for crystal clear sharp imagery. The individual electronic control unit allows additional adjustments to color, saturation, contrast and hue, just like your big screen TV.

Eyetop DVD also features a miniaturized active-matrix LCD screen and earbuds embedded in eyewear that fits all!
It is fully adjustable thanks to its 2 ergonomic settings (vertical and horizontal screen adjustment) and its 4 image settings. Discover a high-definition color screen in front of you, with a size equivalent to that of a laptop display!

Enjoy your movie on this integrated screen (eyetop patented technology) and earbuds while keeping an eye on things around you. You do not need to store it when standing or walking from point to point. You can have a personal home theatre experience anywhere.

Jan 05, 2005

Computer generated brain surgery to help trainees

FROM THE PRESENCE-L LISTSERV:

From E-Health Insider (http://www.e-health-insider.com/news/item.cfm?ID=988):


Researchers at the University of Nottingham have developed a
virtual reality brain surgery simulator for trainee surgeons that
combines haptics with three-dimensional graphics to give what
they claim is the most realistic model in the world.

A 'map' of the brain surface is produced by the software, which
also renders the tweezers or other surgical implement and shows
any incisions made into the virtual brain. The simulator is
controlled by a device held by the user, which uses a robotic
mechanism to give the same pressure and resistance as it would
if it were touching a real brain.
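The "same pressure and resistance" behavior described above is commonly implemented with penalty-based haptic rendering: when the virtual probe penetrates a surface, the device pushes back with a spring force proportional to penetration depth. Below is a minimal sketch of that standard technique, with illustrative stiffness values; it is not the Nottingham team's actual code, and the names and numbers are assumptions.

```python
# Penalty-based haptic rendering sketch (a standard technique, not the
# Nottingham simulator's code): the restoring force grows linearly with
# how far the probe has penetrated the surface, so compliant tissue feels
# soft and stiff material feels hard.

def contact_force(depth_mm, stiffness_n_per_mm):
    """Restoring force (newtons) for a probe penetrating a surface."""
    if depth_mm <= 0.0:
        return 0.0                        # no contact, no force
    return stiffness_n_per_mm * depth_mm  # Hooke's-law penalty force

# Illustrative stiffness values, not measured tissue data.
BRAIN_TISSUE = 0.2  # N/mm: soft and compliant
SKULL_BONE = 5.0    # N/mm: hard

print(contact_force(2.0, BRAIN_TISSUE))  # gentle resistance
print(contact_force(2.0, SKULL_BONE))    # firm resistance
```

In a real device this force law runs in a high-rate servo loop (typically around 1 kHz) so the pushback feels continuous to the hand.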

[Image: map of the brain on the virtual surgery simulator]

Dr Michael Vloeberghs, senior lecturer in paediatric neurosurgery
at the University's School of Human Development, who led the
development team, said that the new system would benefit
trainees: "Traditionally a large amount of the training that
surgeons get is by observing and performing operations under
supervision. However, pressures on resources, staff shortages
and new EU directives on working hours mean that this teaching
time is getting less and less.

"This simulator will allow surgeons to become familiar with
instruments and practice brain surgery techniques with
absolutely no risk to the patient whatsoever."

The pilot software was developed with the Queen's Medical
Centre, in Nottingham, which contains a Simulation Centre in
which dummies are often used for surgical training.

Dr Vloeberghs says that the haptic system is an improvement on
the existing system: "Dummies can only go so far – you're still
limited by the physical presence, and you can't do major surgery
on dummies... you can simulate electrically and phonetically what
is happening, but nothing more than that."

Adib Becker, Professor of Mechanical Engineering at the
university, said that the technology could be developed for the
future, and that brain surgery online could even be possible: "If
you project maybe four or five years from now, it may be possible
for a surgeon to operate on a patient totally remotely.

"So the surgeons would be located somewhere else in the world
and can communicate through the internet, and can actually feel
the operation as they are seeing it on the screen."

The team hopes that the piloted software, which was funded by a
grant of £300,000 from the Engineering and Physical Sciences
Research Council (EPSRC), will help train surgeons to a higher
level before their first operation on live patients, thereby
increasing safety.

Jan 04, 2005

Experience Colors!

Art | Color becomes all in Arcadia installation

By Edward J. Sozanski
Inquirer Columnist

Olafur Eliasson's installation at Arcadia University, Your colour
memory, might be the most intense and disorienting sensory
experience you'll ever have.

Being inside this oval space suffused with the most vivid color
imaginable, shifting at random through every hue of the spectrum
and then some, is like standing inside a sunrise.

The installation that transforms the Arcadia gallery into a color-
immersion chamber is complex technically, but easily described.

Inside the rectangular, high-ceilinged room, Eliasson has
constructed an oval, open-topped chamber with a small chamber
at one side that's curtained off.

One enters at the side, at a point where the encircling wall
separates like the first coil of a spiral. The inside surface of the
oval is made of translucent plastic stretched as taut as a
drumhead.

Inside the wall, whose outside surface is opaque, Eliasson has
installed a series of computer-controlled lights and filters. These
lights generate random sequences of primary and secondary
colors that change unpredictably.

To say that the light in this chromatic bombardment is intense
understates the effect, which admittedly varies for each visitor.
We all see and react to colors differently, but I doubt anyone with
normal color vision would deny that Eliasson's installation
transforms color into a physical presence.


Transpersonal Psychology

From Wikipedia, the free encyclopedia.


Transpersonal psychology is a school of psychology, considered by its proponents to be the '4th force' in the field. It was originally established to pursue further knowledge about issues connected to mystical and transcendent experiences. According to its proponents, the traditional schools of psychology — behaviorism, psychoanalysis and humanism — have failed to include the 'transegoic' elements of human existence, such as religious conversion, altered states of consciousness and spirituality. Transpersonal psychology combines insights from modern psychology with insights from the world's contemplative traditions, both East and West.

A major motivating factor behind the initiative to establish this school of psychology was the already-published work of Abraham Maslow on human peak experiences. Maslow was also one of the initiators of the first issue of the Journal of Transpersonal Psychology in 1969, the leading academic journal in the field. This was soon followed by the founding of the Association for Transpersonal Psychology in 1972. Today transpersonal psychology also includes approaches to health, social sciences and practical arts. According to the Association for Transpersonal Psychology, the transpersonal perspective includes such research interests as: psychology and psychotherapy; meditation, spiritual paths and practices; change and personal transformation; consciousness research; addiction and recovery; psychedelic and altered-states-of-consciousness research; death, dying and near-death experience (NDE); self-realization and higher values; the mind-body connection; mythology and shamanism; and exceptional human experience (EHE).

However, most psychologists do not hold strictly to traditional schools of psychology; most take an eclectic approach. Furthermore, the phenomena listed are considered by standard subdisciplines of psychology: religious conversion falls within the ambit of social psychology, altered states of consciousness within physiological psychology, and spiritual life within the psychology of religion. Transpersonal psychologists, however, disagree with the approach to such phenomena taken by traditional psychology, and claim that they have typically been dismissed either as signs of various kinds of mental illness or as regression to infantile stages of psychosexual development. One must not confuse transpersonal psychology with parapsychology, a mistake frequently made because of the unenviable academic reputation of both branches and the eerie atmosphere surrounding the subjects investigated.

Although there are many disagreements with regard to transpersonal psychology, one could succinctly lay out a few basic traits of the field:

* transpersonal psychology is rooted in religious psychological doctrines expounded in Zen Buddhism, Kabbalah, Gnosticism, Sufism, Vedanta, Taoism and Neoplatonism
* by common consent, the following branches are considered to be transpersonal psychological schools: Jungian depth psychology (more recently recast as Archetypal Psychology by James Hillman, a follower of Carl Jung); Psychosynthesis, founded by Roberto Assagioli; and the schools of Abraham Maslow and Charles Tart.
* Some transpersonal psychologists claim other authors, for example William James, as supporting their approach. This is controversial; it is unlikely that James ever used the expression "transpersonal" to describe his approach to psychology.
* Doctrines or ideas of many colorful personalities who were or are "spiritual teachers" in the Western world are often assimilated into the transpersonal psychology mainstream: Gurdjieff, Alice Bailey or Ken Wilber. This development is generally seen as detrimental to the aspiration of transpersonal psychologists to gain firm and respectable academic status.

All transpersonal psychologies, whatever their differences, share one basic contention: they claim that human beings possess a supraegoic centre of consciousness that is irreducible to all known states of empirical or, better, "ordinary" consciousness (sleep, the waking state, ...). This root of consciousness (and, for some schools, of human existence) is frequently called the "Self" (or "Higher Self") to distinguish it from the "self" or "ego", which is equated with the seat of ordinary everyday waking consciousness. However, the schools differ in the crucial traits they ascribe to the Self:

* the supraegoic root of consciousness (the Self) survives bodily death in some transpersonal schools; for others, it dies with the body
* for some, the Self is dormant and latent; for others, it is ever watchful and precedes empirical human consciousness
* some think that the Self is mutable and potentially expandable; others aver that it is perfect and completely outside of time and space, and that only the "ego" is subject to temporal change

Currently, transpersonal psychology (especially the archetypal psychology of Carl Jung and his followers) is integrated, at least to some extent, into numerous psychology departments in US and European universities; transpersonal therapies are also included in many therapeutic practices.


Upcoming VR-conferences

The 3rd Annual Virtual Reality, Associated Technologies and Rehabilitation Conference, University of Haifa, Israel, March 7-9, 2005, http://hw.haifa.ac.il/occupa/LIRT/

IEEE VR2005, Bonn, Germany, March 12-16, 2005, http://www.vr2005.org/view.php?nid=0

The 7th Annual Laval Virtual Reality International Conference and Exhibition, Laval, France, April 20-24, 2005, http://www.laval-virtual.org/en/index.php

CyberTherapy2005: A Decade of VR, Ramada Plaza, Basel, Switzerland, June 6-10, 2005, http://www.interactivemediainstitute.com/conference2005/

The 11th International Conference on Human-Computer Interaction, Las Vegas, Nevada, USA, July 22-27, 2005, http://www.hcii2003.gr/general/hcii2005.asp

The 4th International Workshop on Virtual Rehabilitation, Catalina Island, Los Angeles, California, September 19-21, 2005, http://www.iwvr.org/2005

MMVR13: Medicine Meets Virtual Reality

Medicine Meets Virtual Reality 2005

Long Beach, California, January 26 - 29, 2005

MMVR is the premier forum for computer scientists and physicians who develop, refine, and promote advanced, data-centered tools for clinical care and medical education. MMVR stimulates interdisciplinary networking and collaboration for improved research, validation, and commercialization.

Primary MMVR foci are medical and surgical simulation and information-guided diagnosis and therapy, along with supporting technologies: imaging, modeling, haptics, visualization, robotics, and informatics. Lectures, posters, workshops, and panels educate creators and advocates of emerging technologies.

MMVR encourages a vigorous discussion of current progress – from engineering groundwork, through assessment and validation studies, to experience with clinical and academic utilization and commercialization.

MMVR engineers, clinicians, and educators are a vanguard community of thinkers envisioning and making real the future of healthcare and medical education.



Jan 03, 2005

Free ISSUE of Journal of NeuroEngineering and Rehabilitation

Free Papers about VR available from the Journal of Neuroengineering and Rehabilitation

Simulator sickness when performing gaze shifts within a wide field of view optic flow environment: preliminary evidence for using virtual reality in vestibular rehabilitation
Patrick J. Sparto, Susan L. Whitney, Larry F. Hodges, Joseph M. Furman, Mark S. Redfern
Journal of NeuroEngineering and Rehabilitation 2004, 1:14 (23 December 2004)
[Abstract] [Provisional PDF]

Research
Considerations for the future development of virtual technology as a rehabilitation tool
Robert V. Kenyon, Jason Leigh, Emily A. Keshner
Journal of NeuroEngineering and Rehabilitation 2004, 1:13 (23 December 2004)
[Abstract] [Provisional PDF]

Review
Video capture virtual reality as a flexible and effective rehabilitation tool
Patrice L. Weiss, Debbie Rand, Noomi Katz, Rachel Kizony
Journal of NeuroEngineering and Rehabilitation 2004, 1:12 (20 December 2004)
[Abstract] [Provisional PDF]

Research
Reaching in reality and virtual reality: a comparison of movement kinematics in healthy subjects and in adults with hemiparesis
Antonin Viau, Anatol G. Feldman, Bradford J. McFadyen, Mindy F. Levin
Journal of NeuroEngineering and Rehabilitation 2004, 1:11 (14 December 2004)
[Abstract] [Provisional PDF]

Review
Motor rehabilitation using virtual reality
Heidi Sveistrup
Journal of NeuroEngineering and Rehabilitation 2004, 1:10 (10 December 2004)
[Abstract] [Provisional PDF]


Review
Presence and rehabilitation: toward second-generation virtual reality applications in neuropsychology
Giuseppe Riva, Fabrizia Mantovani, Andrea Gaggioli
Journal of NeuroEngineering and Rehabilitation 2004, 1:9 (8 December 2004)
[Abstract] [Provisional PDF]


Editorial
Virtual reality and physical rehabilitation: a new toy or a new research and rehabilitation tool?
Emily A Keshner
Journal of NeuroEngineering and Rehabilitation 2004, 1:8 (3 December 2004)
[Abstract] [Provisional PDF]




Enhanced intensive care system allows remote access to patients

FROM THE PRESENCE-L LISTSERV (by Matthew Lombard):


BUFFALO — Lucille Lamarca could feel her heart begin to beat
at a worrisome pace while lying alone in the intensive care unit at
Buffalo General Hospital with a heart condition.

Then from a speaker came a reassuring voice.

"Hi, I'm here," the voice said. "The nurse is on her way. You're
going to be OK."

It was the voice of a doctor who had been keeping an eye on
Lamarca from an office building miles away, via a remote camera
and a bank of computer screens.

The hospital's parent, Kaleida Health System, is among an
expanding number of hospital systems adopting "enhanced
intensive care" technology — known as eICU — that allows
critical care doctors and nurses to monitor dozens of patients at
different hospitals simultaneously, much like an air traffic
controller keeps track of multiple planes.

From the Kaleida control station Monday, health professionals
were monitoring 58 patients at two hospitals via screens that
displayed patients' diagnosis and progress, doctors' notes and
real-time vital statistics like heart rate and blood pressure. The
remote caregivers alerted their onsite counterparts to changes or
potential problems through videoconferencing at the nurses'
stations.
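Flagging "changes or potential problems" across dozens of patients reduces, at its simplest, to rule-based range checks on the streaming vitals. The sketch below shows that idea only; the thresholds, field names, and function names are illustrative assumptions, and VISICU's actual (proprietary) system is far more sophisticated.

```python
# Hedged sketch of rule-based vital-sign alerting of the kind an eICU
# console might run. All thresholds and names are illustrative assumptions,
# not VISICU's implementation.

NORMAL_RANGES = {
    "heart_rate": (50, 110),   # beats per minute
    "systolic_bp": (90, 160),  # mmHg
    "spo2": (92, 100),         # % oxygen saturation
}

def check_vitals(vitals):
    """Return alert strings for every reading outside its normal range."""
    alerts = []
    for name, value in vitals.items():
        low, high = NORMAL_RANGES[name]
        if not low <= value <= high:
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts

# A racing heart (like Lamarca's) trips an alert; normal readings pass.
print(check_vitals({"heart_rate": 135, "systolic_bp": 120, "spo2": 95}))
```

The remote doctor sees only the alerts, not every raw reading, which is what lets one clinician watch 58 beds at once.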

Kaleida, which expects to bring its three other hospitals online in
the spring, stressed the technology is meant to enhance, not
replace, onsite care by allowing doctors to more quickly catch
and respond to trouble.

Kaleida is investing $4 million in personnel and equipment,
officials said.

The technology, made by Baltimore-based VISICU, is in use in at least 18
hospital systems nationwide, according to Kaleida, which this summer
became the 9th system to go online.

"I think that it changes the quality of the care in a way that could
not be equaled, even if you doubled or tripled the staffing onsite,"
said Dr. Cynthia Ambres, Kaleida's chief medical officer.

Those familiar with the technology predicted it would become
part of the future of critical care across the country, enabling
hospitals to make the best use of a limited number of intensive
care doctors.

Leapfrog Group, a nonprofit coalition of business and other
groups working to improve hospital operations, has cited a
severe shortage of intensivists practicing in the United States —
fewer than 6,000 at a time when nearly 5 million patients are
admitted to ICUs each year.

Sentara Healthcare was the first system to install eICU 4 1/2
years ago and now monitors 95 beds at five of its hospitals in
southeastern Virginia and northeastern North Carolina.

Sentara officials estimate the technology allowed them to save
97 lives in 2003, while covering 65 beds.

Instead of relying on a nurse to notice a problem, having her
page a physician and then having that doctor run to the ICU to
make a full evaluation, "all that information is brought to me,"
said Dr. Steven Fuhrman, Sentara's eICU medical director.

"The camera is such that I can count eyelashes," he said,
enabling him to check the patient's ventilator, intravenous
medication and anything else in the room while talking to the
patient and onsite staff.

"It's been described here as being in the room with your hands in
your pocket," Fuhrman said.

Ambres said the in-room cameras, which are not always on, are
seen as reassuring by patients, rather than an invasion of
privacy.

Lamarca, who was hospitalized in August, agreed.

"When you're in the ICU, you're very defenseless and they were
sensitive to that," she said. "I never felt it was an invasion of
privacy," said the Buffalo woman, adding that she could tell by
the position of the camera whether it was on or off.

Jan 01, 2005

Robocup Rescue

About Simulation League

The RoboCupRescue Simulation League competition is an international evaluation conference for the RoboCupRescue Simulation Project research.

The main purpose of the RoboCupRescue Simulation Project is to provide emergency decision support by integrating disaster information, prediction, planning, and human interfaces. A generic urban disaster simulation environment was constructed on a computer network. Heterogeneous intelligent agents such as fire fighters, commanders, victims, and volunteers conduct search and rescue activities in this virtual disaster world. Real-world interfaces, such as helicopter imagery, synchronize the virtual and the real worlds through sensed data. Mission-critical human interfaces, such as PDAs, help disaster managers, disaster relief brigades, residents and volunteers decide on actions that minimize disaster damage.

This problem introduces researchers to advanced and interdisciplinary research themes. As AI/robotics research, for example, behavior strategy (e.g. multi-agent planning, realtime/anytime planning, heterogeneity of agents, robust planning, mixed-initiative planning) is a challenging problem. For disaster researchers, RoboCupRescue serves as a standard basis for developing practical, comprehensive simulators by adding the necessary disaster modules.
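At the heart of such a multi-agent simulation is a perceive-act loop run by the kernel each time step. The toy sketch below illustrates that structure only; it is not the RoboCupRescue kernel protocol, and the class and strategy are illustrative assumptions.

```python
# Toy perceive-act loop for a rescue simulation (illustrative only; not the
# RoboCupRescue kernel protocol). Each tick, every agent observes the
# remaining fires and commits to extinguishing one of them.

class Agent:
    """A rescue agent with a trivially greedy strategy."""
    def __init__(self, name):
        self.name = name

    def act(self, fires):
        # Perceive the world state and pick a target (first fire, if any).
        return fires[0] if fires else None

def simulate(agents, fires, ticks=5):
    """Advance the world tick by tick; return the fires left burning."""
    for _ in range(ticks):
        for agent in agents:
            target = agent.act(fires)
            if target is not None:
                fires.remove(target)  # the action succeeds instantly
        if not fires:
            break                     # disaster contained
    return fires

remaining = simulate([Agent("fire_brigade_1"), Agent("fire_brigade_2")],
                     ["fire_a", "fire_b", "fire_c"])
print(remaining)
```

The real competition replaces the greedy one-liner with the planning strategies listed above (multi-agent, realtime/anytime, mixed-initiative), which is where the research challenge lies.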

Dec 29, 2004

Charles Tart: Virtual Reality & Altered States of Consciousness

The link between VR and altered states of consciousness is an interesting one. One of the first researchers to discuss this relationship was Charles Tart. According to Tart, VR can be considered a new technological model of consciousness. In particular, he suggests that "stable patterns, stabilized systems of these internal virtual realities, constitute states of consciousness, our ordinary personality, and multiple personalities".

Of particular interest is Tart's explanation of the feeling of being "present" in a computer-generated world, although in his article he doesn't use the word presence. He suggests that presence emerges from a core psychological pattern which "automatically organizes the rest of experience around itself in a way that further supports the basic pattern". He likens this to the process of perceiving a constellation. A constellation is not "real", because the spatial distribution of the visible stars is random. Once a pattern has organized itself, however, it is hard not to perceive it that way.

If you want to read more about Tart's theory of virtual reality, read these online articles:

Multiple Personality, Altered States and Virtual Reality: The World Simulation Process Approach

(This is the published version of the paper, which appeared under the same title in the journal Dissociation, Vol. 3, pp. 222-233.)

Mind Embodied! Computer-Generated Virtual Reality as a New, Dualistic-Interactive Model for Transpersonal Psychology

(Based on a speech given at the L. E. Rhine Centenary Conference on Cultivating Consciousness for Enhancing Human Potential, Wellness and Healing, Durham, North Carolina, November 9, 1991. A modified version was later published in K. Rao (Editor), "Cultivating Consciousness: Enhancing Human Potential, Wellness and Healing." Westport, Connecticut: Praeger, 1993. Pp. 123-137.)

Dec 28, 2004

Invited speech about Ambient Intelligence in Haifa

We have been invited by the Center for the Study of Information Society (Haifa, Israel) to present our framework about Ambient Intelligence.

The title of our presentation is: Ambient Intelligence: Conceptual and Practical Issues

More information can be found here:

http://infosoc.haifa.ac.il/events.htm

Dec 23, 2004

Presence 13:5 Now Available

The first journal for serious investigators of teleoperators and virtual environments, incorporating perspectives from physics to philosophy.

Vol. 13, Issue 5 - October 2004

Articles:

- The Role of Graphical Feedback About Self-Movement when Receiving Objects in an Augmented Environment - Andrea H. Mason and Christine L. MacKenzie (sample article, freely available as PDF)
- Haptic Interfaces for Wheelchair Navigation in the Built Environment - Colin S. Harrison, Mike Grant and Bernard A. Conway
- Exploring the Roles of Information in the Manual Control of Vehicular Locomotion: From Kinematics and Dynamics to Cybernetics - Max Mulder, René van Paassen and Erwin Boer
- The Importance of Stereo and Eye Coupled Perspective for Eye-Hand Coordination in Fish Tank VR - Roland Arsenault and Colin Ware
- Does the Quality of the Computer Graphics Matter when Judging Distances in Visually Immersive Environments? - William B. Thompson, Peter Willemsen, Amy A. Gooch, Sarah H. Creem-Regehr, Jack M. Loomis and Andrew C. Beall
- Limited Field of View of Head-Mounted Displays Is Not the Cause of Distance Underestimation in Virtual Environments - Joshua M. Knapp and Jack M. Loomis
- An Independent Visual Background Reduced Simulator Sickness in a Driving Simulator - Henry Been-Lirn Duh, Donald E. Parker and Thomas A. Furness
- Toward Systematic Control of Cybersickness - Marshall B. Jones, Robert S. Kennedy and Kay M. Stanney
- Postural Responses to Two Technologies for Generating Optical Flow - Thomas A. Stoffregen, Benoît G. Bardy, Omar A. Merhi and Olivier Oullier

Dec 21, 2004

Inexpensive 3-D technology starting to look real

By ADAM FLEMING (from www.presence-research.org), December 08, 2004

Say goodbye to your red-and-blue glasses. The once-great gimmick turned movie-house nostalgia could be in the waning hours of its twilight years, as scientists at the Pittsburgh Supercomputing Center push forward with research in the blossoming field of 3-D technology, otherwise known as stereo visualization. Stuart Pomerantz and Joel Stiles hope to lower the cost and increase the convenience of displaying images and movies in 3-D for large groups of people.

"We wanted to be able to show what we do in stereo, but do it, more or less, at the drop of a hat," Stiles said, "or at very high quality, but very low cost compared to one of these gigantic, multi-projector, multi-screen systems."

The stereo-visualization process adopted by Pomerantz and Stiles involves two separate projectors. Each projector has a linear filter in front of its lens that polarizes the image it projects. Both images are then shown on one screen that is specially designed not to depolarize them. By wearing glasses in which each lens is polarized differently, the viewer receives a separate image for each eye. And that, in effect, is the essence of viewing in 3-D.

"You've got to see different images in each eye, just as we always do naturally," Stiles explained. While reading this article, try covering your left eye. Now cover your right eye while uncovering your left, and you'll notice that the page appears to shift slightly. This is because humans see in stereo by forming a composite of two images. Stereo visualization, at its best, is an imitation of this natural process.

Attaching polarizing lenses to projectors is not a new development, but Pomerantz and Stiles have coupled the process with new content and playback software. "What we needed to do new was create a pipeline for creating content in the form of movie files," Stiles said. "We wanted to use stereo as a routine thing, instead of a special case or a one-off demo."

Professors at Pitt have already incorporated stereo visualization in the classroom. Kenneth Jordan and his colleagues in the chemistry department "designed and constructed a 3-D stereo-visualization system in one of the main lecture halls in the Chevron Science Center," according to an October 2002 article in the University of Pittsburgh Teaching Times. The system in Chevron allows professors to display complex molecules and structures in 3-D, as opposed to the flat models found in textbooks and drawn on chalkboards.

With stereo visualization appearing in labs and classrooms, how long will it be until 3-D methods are available in movie theaters, or even living rooms? For now, the technology is willing, but the space is weak. The projected file size of a feature-length film, packaged for stereo visualization, would be too big for any widely available equipment. But with constant improvements being made in the storage capacity of portable disks, there may one day be a triumphant return of 3-D movies, sans those old paper glasses.
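To see why storage was the bottleneck, a rough back-of-envelope estimate helps. The resolution, frame rate, and running time below are illustrative assumptions, not figures from the article:

```python
# Rough uncompressed storage estimate for a feature-length stereo movie.
# All parameters are illustrative assumptions.
width, height = 1024, 768        # pixels per frame
bytes_per_pixel = 3              # 24-bit RGB
fps = 24                         # frames per second
minutes = 100                    # running time
eyes = 2                         # one full video stream per eye

frame_bytes = width * height * bytes_per_pixel
total_bytes = frame_bytes * fps * 60 * minutes * eyes
total_gb = total_bytes / 1e9

print(f"Uncompressed stereo feature: roughly {total_gb:.0f} GB")
```

Even allowing for heavy compression, under these assumptions a stereo feature lands in the hundreds of gigabytes uncompressed, far beyond the roughly 4.7 GB of a 2004-era single-layer DVD, which is the article's point about storage.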