
Jun 22, 2006

EmSense

From the EmSense website

The company EmSense has announced a patent-pending headset designed to measure psychophysiological correlates of emotional states in a lightweight, compact form factor. According to the company, the signals measured include:

  • Brainwaves
  • Heart activity
  • Breathing
  • Blinking
  • Motion

According to the company, the system could have applications in several fields, such as education, personal health & fitness, professional performance development, and training and simulation.

Here is a picture of the headset:

 

 

Jun 21, 2006

New wrist-worn Linux PC targets healthcare

 Via Emerging Technology Trends

 

The Zypad WL 1000 is a new hands-free, wrist-worn PC running Linux or Windows CE that offers wireless networking and GPS tracking. It also includes a patented orientation sensor that can be configured to put the device into standby when the user's arm drops. According to the manufacturer (Eurotech), the Zypad could be used by healthcare or law enforcement personnel.


Eurotech expects to have the product "on the shelf" in late July, priced at $2,500. 

14:15 Posted in Wearable & mobile | Permalink | Comments (0) | Tags: wearable

Jun 18, 2006

Seizure Detection Algorithm

Via Medgadget


 

A New Zealand-based medical device company is seeking FDA approval for its seizure detection algorithm. According to the company's press release:

The data supporting the seizure detection algorithm has been presented at multiple international medical conferences over the past 18 months. The data presented indicated that the BrainZ seizure detection algorithm had higher sensitivity, higher positive predictive value, higher correlation, and a lower level of false positive detection than two other recognized seizure detection algorithms. The latest presentation was made to the Pediatric Academic Societies' meeting in San Francisco in May 2006.
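The metrics quoted above (sensitivity, positive predictive value) are simple ratios over detection counts. The functions and example numbers below are generic illustrations, not BrainZ's validation pipeline or data:

```python
# Generic seizure-detection evaluation metrics (illustrative only).

def sensitivity(tp, fn):
    """Fraction of true seizures the algorithm detected."""
    return tp / (tp + fn)

def positive_predictive_value(tp, fp):
    """Fraction of detections that were real seizures."""
    return tp / (tp + fp)

# e.g. 45 of 50 true seizures caught, with 5 false alarms:
assert sensitivity(45, 5) == 0.9
assert positive_predictive_value(45, 5) == 0.9
```

A "lower level of false positive detection" corresponds to a higher positive predictive value for the same number of true detections.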
 

The company's technology in a nutshell:

 

 
The BRM2 Brain Monitor provides bilateral aEEG (amplitude-integrated EEG) displays to allow easy recognition of background EEG patterns, and EEG Waveform displays to show the raw EEG signal from each hemisphere.

Amplitude-integrated EEG (aEEG) provides a compressed display of the level (amplitude) of EEG activity.

It is useful for continuous monitoring of background EEG activity and for discriminating between normal and abnormal EEG traces.

Abnormal aEEG traces can be used to identify patients who require further neurological workup and investigations. Normal traces may be used to reassure families of the likelihood of good long term neurological outcome for their infant.

Studies show marked changes in the level and frequency of EEG activity after ischemic injury. These changes can be predictive of the extent of neurological deficit. The pathophysiologic EEG changes associated with brain injury evolve through latent and delayed phases, over several days. Prolonged monitoring over the first week after birth can be valuable, as normalization of aEEG recording is associated with an improved outcome compared to a persistently abnormal recording. The longer the period of monitoring the more accurately the severity of brain injury can be assessed.

Seizure activity has often been monitored by clinical assessment alone; however, a large proportion of seizure activity is either difficult to assess by examination or has no clinical manifestation. Bedside monitoring with aEEG traces can be used to identify seizure-like events in real time, with review of the raw EEG trace recommended for event validation. EEG monitoring can be used to guide the effect of anticonvulsant therapy.

aEEG can also be used to help identify those patients who are most likely to benefit from new hypothermia therapies. These therapies may improve outcomes in infants exposed to hypoxic ischaemic encephalopathies.
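The amplitude integration described above can be approximated in a few lines: band-pass filter the EEG, rectify it, and take the per-window amplitude envelope. This is a rough sketch of the general aEEG idea, not the BRM2's actual processing; the filter band and window length are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def aeeg_trace(eeg, fs, win_s=15.0):
    """Simplified amplitude-integrated EEG (aEEG) envelope.

    1. Band-pass 2-15 Hz to attenuate slow drift and muscle artifact.
    2. Rectify (absolute value).
    3. Take the peak amplitude over consecutive windows, giving the
       compressed amplitude display clinicians read.
    """
    b, a = butter(4, [2.0, 15.0], btype="band", fs=fs)
    rectified = np.abs(filtfilt(b, a, eeg))
    n = int(win_s * fs)
    n_windows = len(rectified) // n
    windows = rectified[: n_windows * n].reshape(n_windows, n)
    return windows.max(axis=1)  # one envelope value per window (uV)

# A low-amplitude background trace vs. a high-amplitude burst
# produce clearly different envelope levels:
fs = 200
t = np.arange(0, 60, 1 / fs)
quiet = 5 * np.sin(2 * np.pi * 8 * t)    # ~5 uV background
burst = 50 * np.sin(2 * np.pi * 8 * t)   # ~50 uV activity
assert aeeg_trace(burst, fs).mean() > aeeg_trace(quiet, fs).mean()
```

Clinical aEEG additionally uses a semi-logarithmic amplitude axis, which makes the normal/abnormal discrimination described above easier to read at the bedside.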

 

18:30 Posted in Research tools | Permalink | Comments (0) | Tags: research tools

Artificial hippocampus to help Alzheimer's patients

Via Medgadget

According to journalist Jennifer Matthews (News14Carolina), neuroscientist Theodore Berger has developed the "first artificial hippocampus", which should help people suffering from Alzheimer's disease to form new memories.

01___30-tech.jpg

 
"There's no reason why we can't think in terms of artificial brain parts in the same way we can think in terms of artificial eyes and artificial ears," said Theodore Berger, who does research at the University of Southern California.

Berger believes this new technology will help not only Alzheimer's disease patients but also individuals suffering from other CNS conditions such as epilepsy, Parkinson's disease, or stroke. With this technology, a computer chip would reroute information around the damaged area(s) of the hippocampus, assisting the brain's information processing.

Exceptional counting ability induced by temporarily switching off brain region

Via KurzweilAI.net

Applying transcranial magnetic stimulation (TMS) to the left anterior temporal lobe can temporarily induce exceptional counting and calculating abilities similar to those of autistic savants, according to Allan Snyder of the Australian National University. ...

 

Rheo Knee

Via KurzweilAI.net

 

MIT's Media Lab researchers have developed a prosthetic "Rheo Knee" that uses AI to replicate the workings of a biological human joint, and "bio-hybrids," surgical implants that allow an amputee to control an artificial leg by thinking.

 

Bio-Sensor Server

Via Information Aesthetics

bodydaemon.jpg

 

BodyDaemon is a bio-responsive web server created by media artist Carlos Castellanos; it uses biofeedback sensors to change its configuration in real time according to the participant's psychophysiological state.

From the project's website:

BodyDaemon is a bio-responsive Internet server. Readings taken from a participant's physical states, as measured by custom biofeedback sensors, are used to power and configure a fully-functional Internet server. For example, more or fewer socket connections are made available based on heart rate, changes in galvanic skin response (GSR) can abruptly close sockets, and muscle movements (EMG) can send data to the client. Other features such as logging can be turned on or off depending on a combination of factors. BodyDaemon also includes a client application that makes requests to the BodyDaemon server. The client requests and server responses are sent over a "persistent" or open socket. The client can thus use the data to continuously visualize, sonify or otherwise render the live bio-data. This project is part of larger investigations focusing on the development of protocols for the transfer of live physiological and biological information across the Internet.

BodyDaemon represents the early stages of investigations into the viability of systems that alter their states based off of a person's changing physiological states and intentions - with the ultimate goal of accommodating the development of emergent states of mutual influence between human and machine in a networked ecosystem.
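The mapping the project describes (heart rate opens sockets, an abrupt GSR change closes them) can be sketched as small pure functions. The names, thresholds, and scaling below are invented for illustration and are not taken from the actual BodyDaemon code:

```python
# Hypothetical physiological-state -> server-configuration mapping,
# in the spirit of BodyDaemon (all names and constants invented).

def max_sockets(heart_rate_bpm, base=2, per_10_bpm=1):
    """More socket connections become available as heart rate rises
    above a resting baseline of 60 bpm."""
    extra = max(0, (heart_rate_bpm - 60) // 10) * per_10_bpm
    return base + extra

def should_close_sockets(gsr_now, gsr_prev, threshold=0.3):
    """An abrupt galvanic-skin-response change closes open sockets."""
    return abs(gsr_now - gsr_prev) > threshold

assert max_sockets(60) == 2              # resting: baseline slots
assert max_sockets(90) == 5              # elevated: three more slots
assert should_close_sockets(1.6, 1.2)    # abrupt GSR jump -> close
assert not should_close_sockets(1.25, 1.2)
```

A real server loop would poll the sensors and re-apply these functions each tick, adjusting its listen backlog and open connections accordingly.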
 



Special issue of Cognition on Neurogenomics

Cognition has a special issue reviewing the state of the art of neurogenomics, a discipline that seeks to understand the role played by the genome in the development of the brain.
 
medium_C00100277.gif
 
The issue is still in press, but it is already possible to read some of the papers on the journal's webpage. Contributors include Simon Fisher, Evan Balaban, Karin Stromswold, Bruce Pennington, James Blair, and Gary Marcus.

Jun 13, 2006

Robot with the human touch feels just like us


From: Times (UK)

A touch sensor developed to match the sensitivity of the human finger is set to herald the age of the robotic doctor.

Until now robots have been severely handicapped by their inability to feel objects with anything like the accuracy of their human creators. The very best are unable to beat the dexterity of the average six-year-old at tying a shoelace or building a house of cards.

But all that could change with the development by nanotechnologists of a device that can “feel” the shape of a coin down to the detail of the letters stamped on it. The ability to feel with at least the same degree of sensitivity as a human finger is crucial to the development of robots that can take on complicated tasks such as open heart surgery.


 

Read the full article

00:45 Posted in AI & robotics | Permalink | Comments (0) | Tags: robotics

Jun 06, 2006

Blushing Light

Re-blogged from Mocoloco

blushinglight.jpg

The Blushing Light, designed by Nadine Jarvis and Jayne Potter, blushes in response to the emotional pitch of a mobile phone. During a conversation, the lamp is activated by the electromagnetic field (EMF) emitted by a mobile phone, and it continues blushing for five minutes after the call has ended, prolonging the memory of the otherwise transient conversation.
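Functionally, the lamp is a small timed state machine: an EMF reading above some threshold starts the blush, which then persists for five minutes after the last strong reading. A toy model (class, threshold, and units all invented here, not from the designers):

```python
BLUSH_HOLD_S = 5 * 60  # keeps blushing 5 minutes after the call ends

class BlushingLight:
    """Toy model of the Blushing Light's behaviour."""

    def __init__(self):
        self.last_emf_time = None  # time of last strong EMF reading

    def sense(self, emf_level, now, threshold=0.5):
        """Record a strong EMF reading (a phone call nearby)."""
        if emf_level > threshold:
            self.last_emf_time = now

    def is_blushing(self, now):
        if self.last_emf_time is None:
            return False
        return now - self.last_emf_time <= BLUSH_HOLD_S

lamp = BlushingLight()
lamp.sense(emf_level=0.9, now=0)      # call in progress at t=0 s
assert lamp.is_blushing(now=120)      # 2 min later: still blushing
assert not lamp.is_blushing(now=400)  # >5 min later: faded
```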

3D Topicscape

Via Information Aesthetics

medium_topicscape.2.jpg

3D Topicscape is an information-visualization application that allows organizing different types of computer files (documents, images, websites, etc.) in a 3D landscape. Its mind-map-like approach helps users discover hidden details and relationships in their data.

from the website:

3D Topicscape is computer software that works with you to organize and find information held in your computer. It is also a strong and flexible way for you to plan your approach on a new project even before you have collected any files or information, using an approach similar to concept or mind maps, but in 3D.

Jun 05, 2006

HUMANOIDS 2006

medium_chess-whatto04.jpg

HUMANOIDS 2006 - Humanoid Companions

2006 IEEE-RAS International Conference on Humanoid Robots December 4-6, 2006, University of Genova, Genova, Italy.

From the conference website

The 2006 IEEE-RAS International Conference on Humanoid Robots will be held on December 4 to 6, 2006 in Genova, Italy. The conference series started in Boston in the year 2000, traveled through Tokyo (2001), Karlsruhe/Munich (2003), Santa Monica (2004), and Tsukuba (2005) and will dock in Genoa in 2006.

The conference theme, Humanoid Companions, addresses specifically aspects of human-humanoid mutual understanding and co-development.

Papers as well as suggestions for tutorials and workshops from academic and industrial communities and government agencies are solicited in all areas of humanoid robots. Topics of interest include, but are not limited to:


* Design and control of full-body humanoids
* Anthropomorphism in robotics (theories, materials, structure, behaviors)
* Interaction between life-science and robotics
* Human - humanoid interaction, collaboration and cohabitation
* Advanced components for humanoids (materials, actuators, portable energy storage, etc)
* New materials for safe interaction and physical growth
* Tools, components and platforms for collaborative research
* Perceptual and motor learning
* Humanoid platforms for robot applications (civil, industrial, clinical)
* Cognition, learning and development in humanoid systems
* Software and hardware architectures for humanoid implementation

Important Dates
* June 1st, 2006 - Proposals for Tutorials/Workshops
* June 15th , 2006 - Submission of full-length papers
* Sept. 1st , 2006 - Notification of Paper Acceptance
* October 15th, 2006 - Submission of final camera-ready papers
* November 1st 2006 - Deadline for advance registration

Paper Submission
Submitted papers MUST BE in Portable Document Format (PDF). NO OTHER FORMATS WILL BE ACCEPTED. Papers must be written in English. Six (6) camera-ready pages, including figures and references, are allowed for each paper. Up to two (2) additional pages are allowed for a charge of 80 euros for each additional page.
Papers over 8 pages will NOT be reviewed/accepted.

Detailed instructions for paper submissions and format can be found here

Exhibitions
There will be an exhibition site at the conference, and promoters are encouraged to display state-of-the-art products and services in all areas of robotics and automation. Reservations for space and further information may be obtained from the Exhibits Chair and on the conference web site.

Video Submissions
Video submissions should present a documentary-like report on a piece of valuable work, relevant to the humanoids community as a whole.
Video submissions should be in .avi or MPEG-4 format and should not exceed 5 MB.

INQUIRIES:
Please contact the General Co-Chairs and the Program Co-Chairs at humanoids06@listes.epfl.ch

ORGANIZATION:

General Co-Chairs:
Giulio Sandini, (U. Genoa, Italy)
Aude Billard, (EPFL, Switzerland)

Program Co-Chairs:
Jun-Ho Oh (KAIST, Korea)
Giorgio Metta (University of Genoa, Italy)
Stefan Schaal (University of Southern California, USA)
Atsuo Takanishi (Waseda University, Japan)

Tutorials/Workshops Co-Chairs:
Rudiger Dillman (University of Karlsruhe, Germany)
Alois Knoll (TUM, Germany)

Exhibition Co-Chairs:
Cecilia Laschi (Scuola Superiore S. Anna, Pisa, Italy)
Matteo Brunnettini (U. Genoa, Italy)

Honorary Chairs:
George Bekey (USC, USA)
Hirochika Inoue (JSPS, Japan)
Friedrich Pfeiffer (TU Munich, Germany)

Local Arrangements Co-Chairs:
Giorgio Cannata, (U. Genoa, Italy)
Rezia Molfino, (U. Genoa and SIRI, Italy)


3rd Annual Colloquium on Online Simulations, Role-Playing, and Virtual Worlds

medium_low.jpg

3rd Annual Colloquium on Online Simulations, Role-Playing, and Virtual Worlds

October 30 - November 3, 2006, Appalachian State University, Boone, North Carolina USA

From the conference website


About the League of Worlds

The League of Worlds (LoW) annual colloquium brings together people engaged in the creation of virtual worlds and real-time simulations for educational and training purposes. Our mission is (1) to stimulate and disseminate research and analysis regarding theoretical, technical, and curricular developments in the field; and (2) to contribute to the development of coherent frameworks for the advancement, application, and assessment of educational and social uses of role-playing, simulations, and virtual worlds.


Our primary areas of interest include:

  a. theoretical analysis
  b. the development of practical applications
  c. the documentation of framework projects and case studies

About the Colloquium

The League of Worlds colloquium is not an ordinary conference. This year's theme is "Exploring Issues in and Asking Questions about Virtual Environments." Participants are expected to challenge one another to take a fresh look at the questions that arise when people meet in virtual territories to play, to learn, and to share. Participation is purposely limited, and there will be no concurrent sessions.
Instead, participants will engage in an ongoing dialogue about virtual environments, integrating their own perspectives and expertise into the conversation. The outcome of the colloquium will be a published text comprising a scholarly narrative of the dialogue around the themes and research discussed throughout the colloquium. All LoW participants will be cited as contributors to this published work.

PROPOSAL CATEGORIES

The League of Worlds colloquium is designed to support sharing and meaningful reflection. Participants should allow one another the opportunity to share experiences, to demonstrate technologies, and to think critically. To facilitate these activities, the colloquium review committee is interested in submissions on the following topics:

  • Technologies used to create and manage virtual environments (tools, hardware, software)
  • Vision for what virtual environments could be (architecture, metaphors)
  • Teaching and Learning in virtual environments
  • Role playing and simulations
  • Social constructivism
  • Communication and collaboration
  • Serendipitous interactions and learning
  • Community formation in virtual environments (interaction, presentation of self, presence)
  • Culture (development of, artifacts)
  • Administrative/Technical support issues in virtual environments
  • Change (Advocacy for, dissemination and sharing of research, how teaching and learning takes place)
  • Resources (to create and/or support any of above themes)
  • Research (on virtual environments in general or in support of any of above themes)


CFP available at: http://www.leagueofworlds.com/news.php

 

 

Jun 04, 2006

Motor Imagery. A Backdoor to the Motor System After Stroke?

Stroke. 2006 Jun 1;

Authors: Sharma N, Pomeroy VM, Baron JC

BACKGROUND AND PURPOSE: Understanding brain plasticity after stroke is important in developing rehabilitation strategies. Active movement therapies show considerable promise but depend on motor performance, excluding many otherwise eligible patients. Motor imagery is widely used in sport to improve performance, which raises the possibility of applying it both as a rehabilitation method and to access the motor network independently of recovery. Specifically, whether the primary motor cortex (M1), considered a prime target of poststroke rehabilitation, is involved in motor imagery is unresolved. SUMMARY OF REVIEW: We review methodological considerations when applying motor imagery to healthy subjects and to patients with stroke, in whom the motor imagery network may be disrupted. We then review, first, the motor imagery training literature focusing on upper-limb recovery and, second, the functional imaging literature in healthy subjects and in patients with stroke. CONCLUSIONS: The review highlights the difficulty in addressing cognitive screening and compliance in motor imagery studies, particularly with regard to patients with stroke. Despite this, the literature suggests an encouraging effect of motor imagery training on motor recovery after stroke. Based on the available literature in healthy volunteers, robust activation of the nonprimary motor structures, but only weak and inconsistent activation of M1, occurs during motor imagery. In patients with stroke, the cortical activation patterns are essentially unexplored, as is the underlying mechanism of motor imagery training. Provided appropriate methodology is implemented, motor imagery may provide a valuable tool to access the motor network and improve outcome after stroke.

EMOSIVE: A mobile service for the emotionally triggered

Re-blogged from Prototype/Interaction Design Cluster 

emosive prototype screenshot

emosive (formerly e:sense) is a new service for mobile devices that allows capturing, storing, and sharing fleeting emotional experiences. It is based on cognitive priming theory: as we become more immersed in digital media through our mobile devices, our personal media inventories constantly act as memory aids, "priming" us to better recollect associative, personal (episodic) memories when facing an external stimulus. Being mobile and in a dynamic environment, these recollections move away from us, both emotionally and quickly. Counting on the fact that near-future personal media inventories will be accessed from mobile devices and shared with a close collective, emosive bundles text, sound, and image animation to allow capturing these fleeting emotional experiences, then sharing and reliving them with cared-for others. Playfully stemming from the thin technical jargon of the mobile world (SMS, MMS), emosive proposes a new, light format of instant messages, dubbed "IFM" (Instant Feeling Messages).
 

Click to go directly to an emosive demo

Digital Chameleons

Digital Chameleons: Automatic Assimilation of Nonverbal Gestures in Immersive Virtual Environments

Jeremy N. Bailenson and Nick Yee

Psychological Science, 16 (10)

Previous research demonstrated social influence resulting from mimicry (the chameleon effect); a confederate who mimicked participants was more highly regarded than a confederate who did not, despite the fact that participants did not explicitly notice the mimicry. In the current study, participants interacted with an embodied artificial intelligence agent in immersive virtual reality. The agent either mimicked a participant's head movements at a 4-s delay or utilized prerecorded movements of another participant as it verbally presented an argument. Mimicking agents were more persuasive and received more positive trait ratings than nonmimickers, despite participants' inability to explicitly detect the mimicry. These data are uniquely powerful because they demonstrate the ability to use automatic, indiscriminate mimicking (i.e., a computer algorithm blindly applied to all movements) to gain social influence. Furthermore, this is the first study to demonstrate social influence effects with a nonhuman, nonverbal mimicker.

Download the full paper here 
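The mimicking agent is essentially a delay line over the participant's head-pose stream: record each pose with its timestamp, and replay it four seconds later. A sketch of that buffering logic (not the authors' code; the pose format and method names are invented):

```python
from collections import deque

class DelayedMimic:
    """Replays head-pose samples after a fixed delay, as in the
    digital-chameleon agent (an illustrative sketch)."""

    def __init__(self, delay_s=4.0):
        self.delay_s = delay_s
        self.buffer = deque()  # (timestamp, pose) pairs, oldest first

    def observe(self, t, pose):
        """Record the participant's pose (e.g. yaw/pitch/roll) at time t."""
        self.buffer.append((t, pose))

    def agent_pose(self, t, default=(0.0, 0.0, 0.0)):
        """Return the participant's pose from delay_s seconds ago,
        consuming older samples; neutral until enough time has passed."""
        pose = default
        while self.buffer and self.buffer[0][0] <= t - self.delay_s:
            _, pose = self.buffer.popleft()
        return pose

mimic = DelayedMimic(delay_s=4.0)
mimic.observe(0.0, (10.0, 0.0, 0.0))  # participant turns head (yaw 10)
mimic.observe(2.0, (20.0, 0.0, 0.0))
assert mimic.agent_pose(3.0) == (0.0, 0.0, 0.0)   # too early: neutral
assert mimic.agent_pose(4.0) == (10.0, 0.0, 0.0)  # 4 s later: mimics
assert mimic.agent_pose(6.0) == (20.0, 0.0, 0.0)
```

The "blindly applied to all movements" point in the abstract corresponds to the fact that this buffer replays every sample indiscriminately, with no gesture recognition at all.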

Interface and Society

Re-blogged from Networked Performance 

williams-interface_small.jpg


Interface and Society: Deadline call for works: July 1 - see call; Public Private Interface workshop: June 10-13; Mobile troops workshop: September 13-16; Conference: November 10-11 2006; Exhibition opening and performance: November 10, 2006.

In our everyday life we constantly have to cope, more or less successfully, with interfaces. We use the mobile phone, the mp3 player, and our laptop in order to gain access to the digital part of our life. In recent years this situation has led to the creation of new interdisciplinary subjects like "Interaction Design" or "Physical Computing".

We live between two worlds, our physical environment and the digital space. Technology and its digital space are our second nature and the interfaces are our points of access to this technosphere.

Since artists started working with technology they have been developing interfaces and modes of interaction. The interface itself became an artistic thematic.

The project INTERFACE and SOCIETY investigates how artists deal with the transformation of our everyday life through technical interfaces. With the rapid pace of technological development, a thorough critique of the interface's role in society is necessary.

The role of the artist is thereby crucial. S/he has the freedom to deal with technologies and interfaces beyond functionality and usability. The project INTERFACE and SOCIETY is looking at this development with a special focus on the artistic contribution.

INTERFACE and SOCIETY is an umbrella for a range of activities throughout 2006 at Atelier Nord in Oslo.

Jun 01, 2006

Ringxiety

Re-blogged from Smart Mobs 

Following the New York Times story on "audio illusion, phantom phone rings or ringxiety and fauxcellarm" (described as the new reason for people to either bemoan the techno-saturation of modern life or question their sanity), News.com.au, via Engadget, now claims that the phenomenon of falsely believing you hear your mobile phone ringing or vibrating is so widespread it has an official name, "ringxiety," and that it is really the subconscious calculating how popular we are.

David Laramie, of California's School of Professional Psychology, coined the term "ringxiety" and says he himself is a sufferer.

More on phantom vibrations and phantom rings at Ringtonia.

The future of computer vision

Via Smart Mobs

Will computers see as we do? MIT researchers are developing new methods to train computers to recognize people and objects in still images and in videos with 95 to 98 percent accuracy.

This research could soon be used in surveillance cameras.

Links: Primidi

MEDGADGET: Neurotechnology Provides Hope for the Paralyzed

 Via Medgadget

neurotech.jpg

 

Cyberkinetics Neurotechnology Systems Inc. is currently focused on the commercialization of two proprietary platforms for neural stimulation, neural sensing in the brain and real-time neural signal decoding technology. These unique and powerful platforms can restore sensation, communication, limb movement as well as other bodily functions.

  

pic_braingate.jpg

 

The BrainGate™ Neural Interface System is currently the subject of a pilot clinical trial being conducted under an Investigational Device Exemption (IDE) from the FDA. The system is designed to restore functionality for a limited, immobile group of severely motor-impaired individuals. It is expected that people using the BrainGate™ System will employ a personal computer as the gateway to a range of self-directed activities. These activities may extend beyond typical computer functions (e.g., communication) to include the control of objects in the environment such as a telephone, a television and lights...

 

The NeuroPort™ System is an FDA cleared medical device intended for temporary (< 30 days) recording and monitoring of brain electrical activity.

 

The NeuroPort™ System is based on Cyberkinetics' BrainGate™ technology and consists of two parts: the NeuroPort™ Cortical Microelectrode Array (NeuroPort™ Array) and the NeuroPort™ Neural Signal Processor (NeuroPort™ NSP). The NeuroPort™ Array senses action potentials from individual neurons in the brain. The NeuroPort™ NSP records these high-resolution signals and provides a physician with the tools to analyze them...
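Neural signal decoding starts with detecting action potentials in the raw microelectrode signal. A naive threshold-crossing detector illustrates this first stage of the kind of processing a neural signal processor performs; it is a textbook-style sketch, not Cyberkinetics' algorithm, and all parameters are assumptions:

```python
import numpy as np

def detect_spikes(signal_uv, fs, k=5.0, refractory_ms=1.0):
    """Naive negative-threshold spike detector.

    Threshold = -k * sigma, where sigma is the robust noise
    estimate median(|x|)/0.6745 commonly used in spike sorting.
    A short refractory period prevents double-counting one spike.
    """
    sigma = np.median(np.abs(signal_uv)) / 0.6745
    threshold = -k * sigma
    refractory = int(refractory_ms * fs / 1000)   # samples to skip
    spikes, last = [], -refractory
    for i, v in enumerate(signal_uv):
        if v < threshold and i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes

fs = 30_000                          # typical microelectrode rate (Hz)
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 10.0, fs)      # 1 s of ~10 uV background noise
sig[5000] = sig[20000] = -120.0      # two injected action potentials
spikes = detect_spikes(sig, fs)
assert 5000 in spikes and 20000 in spikes
```

In a BrainGate-style pipeline, per-channel spike times like these would then be binned into firing rates and fed to a decoder that maps them to cursor or limb commands.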