Jul 18, 2006
Workshop on Emotion in HCI - London, UK
From Usability News
The topic of emotion in Human-Computer Interaction is of increasing interest to the HCI community. Since Rosalind Picard's foundational publications on affective computing, research in this field has gained significant momentum.
Emotion research is largely grounded in psychology yet spans numerous other disciplines. The challenge for such an interdisciplinary research area is to develop the common vocabulary and research framework that a mature discipline requires. Advanced, serious work in this field increasingly needs to be placed on a rigorous footing: developing the theoretical fundamentals of HCI-related emotion research, understanding the function of emotions in HCI, addressing ethical and legal issues, and working out the practical implications and consequences for the HCI community.
The first workshop on emotion in HCI held in Edinburgh last year brought an interdisciplinary group of practitioners and researchers together for a lively exchange of ideas, discussion of common problems, and identification of domains to explore.
This year's workshop will build on that success. The focus will be on discussion and joint work on selected topics. Participants will further develop the themes of the first workshop across as wide an application spectrum as possible, including internet applications, ambient intelligence, office work, control rooms, mobile computing, virtual reality, presence, and home applications.
You are cordially invited to become part of this interdisciplinary forum. This will be a very practical workshop with the participants working together to find new insights, views, ideas and solutions. We therefore invite contributions which will enrich the discussions by their innovative content, fundamental nature, or new perspective. We also encourage demos of products or prototypes related to the topic.
Topics addressed by the workshop are:
- How do applications currently make use of emotions, and how could this be improved?
- What makes applications that support affective interactions successful?
- How do we know if affective interactions are successful, and how can we measure this success?
- What value might affective applications, affective systems, and affective interaction have?
- What requirements does HCI place on sensing technologies?
- What technology is currently available for sensing affective states?
- How reliable is sensing technology?
- Are there reliable and replicable processes to include emotion in HCI design projects?
- What opportunities and risks are there in designing affective applications?
- What are the relationships between emotion, affect, personality, and engagement, and what do they mean for interactive systems design?
To become part of this discussion please submit an extended abstract of your ideas or demo description. Case studies describing current applications or prototypes are strongly encouraged, as well as presentations of products or prototypes that you have developed.
The abstract should be limited to about 800 words. Accepted contributions will be published on the workshop's homepage, with the option of extending them into short papers of four pages. A special journal issue on the results of the workshop is also planned.
Please note that registration for the HCI conference (at least for the day of the workshop) is required in order to take part in the workshop. The early bird registration deadline is 21st July.
Dates:
27 June - position paper deadline
11 July - notification of acceptance
21 July - early registration deadline
12 September - workshop
01:14 Posted in Emotional computing | Permalink | Comments (0) | Tags: emotional computing
MODIE: Modelling and Designing User Assistance in Intelligent Environments
Themes & Topics
We are interested in models, principles, and methodologies that guide the designer of an intelligent environment in the early stages of the development process, such as task and requirements analysis and conceptual design. We are looking for contributions that will help the designer of user assistance systems to address the following questions:
- Which user activities and tasks require assistance?
- How can an activity model be mapped onto interactions with artifacts?
- How should the designer choose the best sensing and interaction technologies for a scenario?
- Which mobile or wearable personal devices can be employed?
- How should multiple users with concurrent activities be supported?
- How should the current state of the user assistance system be represented, especially when dealing with multiple tasks?
The intention of the workshop is to share experiences and perspectives on user assistance in intelligent environments from the different viewpoints of developers, designers, ethnographers, and cognitive scientists. Each participant will give a short presentation about their contribution. The second half of the workshop will focus on the discussion of key topics:
- How to unify the complementary concepts of public and personal devices
- How to model user activity (terminology, structure, notation)
- How to establish a common terminology for Intelligent Environments
- How tools can support the modelling and design of user assistance
- What the problems of applying traditional software engineering methodologies are
- Whether there are principles that can be generalized for the design of IEs
Intended Participants
We encourage researchers from the following disciplines to contribute position papers (2-6 pages) and knowledge to the discussion:
- Computer Scientists (in the fields of Mobile HCI and Intelligent Environments): contribute experiences with working prototypes, discuss technical issues.
- Designers: contribute new paradigms and concepts; discuss existing environments and current solutions for presenting information to the user, and what the future might look like.
- Ethnographers: contribute an analysis of user activities and problems in current environments, discuss application areas for Intelligent Environments.
- Cognitive Scientists: contribute design principles for Intelligent Environments based on their knowledge of the limited resources of the human processor; discuss the pros and cons of interaction paradigms, concepts, and technologies.
Workshop Format
MODIE will be a full-day workshop. Each participant will give a short presentation on their position and experience with one or several of the workshop topics. Participants are assumed to be familiar with the position papers in advance; these will be available as online proceedings prior to the workshop. In the afternoon, participants will split into small groups to discuss selected research topics; afterwards, each group will present its results for general discussion.
Important Dates
---------------
Submission Deadline July 10, 2006
Acceptance Notification July 13, 2006
Workshop Date September 12, 2006
01:12 Posted in Call for papers | Permalink | Comments (0) | Tags: ambient intelligence
PhD studentship on pervasive tech - Open University, Milton Keynes, UK
Applications are welcome for a University-funded PhD studentship in the Computing Department at the Open University, working with Prof. Yvonne Rogers's new research group (she joins this summer) at the cutting edge of HCI and pervasive technologies. Research topics include exploring the benefits of tangibles, physical computing, and shared displays for collaborative activities such as learning and problem-solving. Candidates must have a background in HCI, cognitive science/psychology, or computing.
Starting Date: October 2006.
01:10 Posted in Research institutions & funding opportunities | Permalink | Comments (0) | Tags: pervasive computing
A high-performance brain-computer interface
A high-performance brain-computer interface.
Nature. 2006 Jul 13;442(7099):195-8
Authors: Santhanam G, Ryu SI, Yu BM, Afshar A, Shenoy KV
Recent studies have demonstrated that monkeys and humans can use signals from the brain to guide computer cursors. Brain-computer interfaces (BCIs) may one day assist patients suffering from neurological injury or disease, but relatively low system performance remains a major obstacle. In fact, the speed and accuracy with which keys can be selected using BCIs is still far lower than for systems relying on eye movements. This is true whether BCIs use recordings from populations of individual neurons using invasive electrode techniques or electroencephalogram recordings using less- or non-invasive techniques. Here we present the design and demonstration, using electrode arrays implanted in monkey dorsal premotor cortex, of a manyfold higher performance BCI than previously reported. These results indicate that a fast and accurate key selection system, capable of operating with a range of keyboard sizes, is possible (up to 6.5 bits per second, or approximately 15 words per minute, with 96 electrodes). The highest information throughput is achieved with unprecedentedly brief neural recordings, even as recording quality degrades over time. These performance results and their implications for system design should substantially increase the clinical viability of BCIs in humans.
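As a rough illustration of how an information transfer rate translates into typing speed, the sketch below converts bits per second into approximate words per minute. It assumes equally likely key selections and five characters per word; both are simplifying assumptions, and the paper's own figure of roughly 15 words per minute rests on its specific task design.

```python
import math

def itr_to_wpm(bits_per_second, n_keys=26, chars_per_word=5):
    """Convert an information transfer rate (bits/s) into an approximate
    typing speed, assuming each selection picks one of n_keys equally
    likely keys (log2(n_keys) bits per selection)."""
    bits_per_char = math.log2(n_keys)
    chars_per_second = bits_per_second / bits_per_char
    return chars_per_second * 60 / chars_per_word

# The 6.5 bits/s reported in the paper comes out near the quoted
# ~15 words per minute under these simplified assumptions.
rate = itr_to_wpm(6.5)
```

Note that a larger keyboard packs more bits into each selection, so at a fixed bit rate the estimated words per minute goes down as `n_keys` grows.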
01:02 Posted in Brain-computer interface | Permalink | Comments (0) | Tags: brain-computer interface
TMS can improve subitizing ability
Re-blogged from Omnibrain
A joint venture of the Australian National University and the University of Sydney investigated whether repetitive transcranial magnetic stimulation (TMS) can improve a healthy person's ability to accurately estimate the number of elements in a scene, the London Telegraph reported.
00:31 Posted in Brain training & cognitive enhancement | Permalink | Comments (0) | Tags: cognitive prosthetics
"TV for the brain" patented by Sony
00:21 Posted in Future interfaces | Permalink | Comments (0) | Tags: future interfaces
Definition of neuroinformatics
The website Pharmabiz provides a good definition of Neuroinformatics:
In this discipline, work focuses on the integration of neuroscientific information from the level of the genome to the level of human behavior. A major goal of this new discipline is to produce digital capabilities for web-based information management systems in the form of databases and associated data management tools. These databases and software tools are designed for the benefit of neuroscientists, behavioral scientists, clinicians, and educators, in an effort to better understand brain structure, function, and development. Databases developed in neuroinformatics include the Surface Management System (SuMS), the fMRIDC, BrainMap, BrainInfo, X-Anat, the Brain Architecture Management System (BAMS), the Ligand Gated Ion Channel database (LGICdb), ModelDB, and probabilistic atlas and reference systems for the human brain. Most of these databases are freely available and can be accessed over the Internet; they gather detailed information in one place and support neuroscience research. Commonly used neuroinformatics software tools include GENESIS, NEURON, Catacomb, Channelab, HHsim, NEOSIM, NANS, and SNNAP. Data sharing is not the only application of neuroinformatics: computational modeling of ion channels, parts of neurons, whole neurons, and even neural networks helps us understand the complex nervous system and its workings. This type of modeling greatly overlaps with systems biology and also benefits from bioinformatics databases. In India, neuroinformatics research is currently carried out mainly at the National Brain Research Centre, Gurgaon, under the Department of Biotechnology, Government of India. Computational modeling of processes related to the neurosciences helps in understanding brain function in both normal and disordered states, and several efforts in this direction are in progress.
00:14 Posted in Neurotechnology & neuroinformatics | Permalink | Comments (0)
BrainMap
BrainMap is an online database of published functional neuroimaging experiments with coordinate-based (Talairach) activation locations. The goal of BrainMap is to provide a vehicle to share methods and results of brain functional imaging studies. It is a tool to rapidly retrieve and understand studies in specific research domains, such as language, memory, attention, reasoning, emotion, and perception, and to perform meta-analyses of like studies.
BrainMap was created and developed by Peter T. Fox and Jack L. Lancaster of the Research Imaging Center of the University of Texas Health Science Center San Antonio.
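The coordinate-based retrieval that underlies this kind of meta-analysis can be sketched as a radius search over Talairach foci, optionally filtered by research domain. The schema and function names below are illustrative, not BrainMap's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Focus:
    """One reported activation location (hypothetical record layout)."""
    study: str
    domain: str              # e.g. "memory", "language"
    x: float                 # Talairach coordinates, in mm
    y: float
    z: float

def foci_near(db, point, radius_mm, domain=None):
    """Return activation foci within radius_mm of a Talairach point,
    optionally restricted to one research domain -- the kind of query
    that supports coordinate-based meta-analysis of like studies."""
    px, py, pz = point
    hits = []
    for f in db:
        if domain is not None and f.domain != domain:
            continue
        d2 = (f.x - px) ** 2 + (f.y - py) ** 2 + (f.z - pz) ** 2
        if d2 <= radius_mm ** 2:
            hits.append(f)
    return hits

# Toy database of three foci; two fall within 10 mm of the query point.
db = [
    Focus("study1", "memory",   -42, 22,  8),
    Focus("study2", "language", -44, 20, 10),
    Focus("study3", "memory",    30, -60, 40),
]
nearby = foci_near(db, (-43, 21, 9), radius_mm=10)
```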
00:11 Posted in Neurotechnology & neuroinformatics, Research tools | Permalink | Comments (0) | Tags: neuroinformatics
Second Geoethical Nanotechnology workshop
Re-blogged from KurzweilAI.net
The Terasem Movement announced today that its Second Geoethical Nanotechnology workshop will be held July 20, 2006 in Lincoln, Vermont. The public is invited to participate via conference call. The workshop will explore the ethics of neuronanotechnology and future mind-machine interfaces, including the preservation of consciousness, the implications of a future in which human and digital species merge, and the dispersion of consciousness to the cosmos, featuring leading scientists and other experts in these areas.
The workshop proceedings are open to the public via real-time conference call and will be archived online for free public access. The public is invited to call a toll-free conference-call dial-in line from 9:00 a.m. - 6:00 p.m. ET. Callers from the continental US and Canada can dial 1-800-967-7135; other countries: (00+1) 719-457-2626.
Each workshop presentation is designed for a 15-20 minute delivery, followed by a 20 minute formal question and answer period, during which time questions from the worldwide audience will be invited. Presentations will also be available on the workshop's website.
00:05 Posted in Brain training & cognitive enhancement, Brain-computer interface, Neurotechnology & neuroinformatics | Permalink | Comments (0) | Tags: neurotechnology
Novel BCI device will allow people to search through images faster
Via KurzweilAI.net
Researchers at Columbia University are combining the processing power of the human brain with computer vision to develop a novel device that will allow people to search through images ten times faster than they can on their own.
The "cortically coupled computer vision system," known as C3 Vision, is the brainchild of professor Paul Sajda, director of the Laboratory for Intelligent Imaging and Neural Computing at Columbia University. He received a one-year, $758,000 grant from Darpa for the project in late 2005.
The brain emits a signal as soon as it sees something interesting, and that "aha" signal can be detected by an electroencephalogram, or EEG cap. While users sift through streaming images or video footage, the technology tags the images that elicit a signal, and ranks them in order of the strength of the neural signatures. Afterwards, the user can examine only the information that their brains identified as important, instead of wading through thousands of images.
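The triage step described above amounts to ranking images by the strength of a per-image neural detection score and surfacing only the top candidates for human review. A minimal sketch follows; the score source is hypothetical, standing in for the output of C3 Vision's actual EEG detector.

```python
def triage_images(images, eeg_scores, top_k=10):
    """Rank images by the strength of the EEG response they elicited
    and return the top candidates for closer human inspection.
    eeg_scores is a hypothetical per-image detector output (e.g. the
    amplitude of a P300-like 'aha' component); higher means the brain
    flagged the image as more interesting."""
    ranked = sorted(zip(images, eeg_scores),
                    key=lambda pair: pair[1], reverse=True)
    return [img for img, _score in ranked[:top_k]]

# The user then examines only the top-ranked frames instead of
# wading through the whole stream.
shortlist = triage_images(["img_a", "img_b", "img_c"],
                          [0.1, 0.9, 0.5], top_k=2)
```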
Read the full story on Wired
00:00 Posted in Brain training & cognitive enhancement, Brain-computer interface | Permalink | Comments (0) | Tags: brain-computer interface
Jul 17, 2006
Cellphones could soon have a tactile display
Via New Scientist
According to New Scientist, haptic devices (i.e. devices that stimulate our sense of touch) will add a new dimension to communications, entertainment and computer control for everybody, and for people with visual impairment they promise to transform everyday life. One proposed device consists of a headband that imprints the shape of objects in front of it onto the wearer's forehead, something that visually impaired people could find a great help when navigating through a cluttered environment. Moreover, cellphones could soon have a tactile "display", for example, and portable gadgets containing a GPS device will be able to nudge you towards your desired destination.
Read the full article
23:55 Posted in Wearable & mobile | Permalink | Comments (0) | Tags: virtual reality
Computers learn common sense
Via The Engineer, July 11, 2006
BBN Technologies has been awarded $5.5 million in funding from the Defense Advanced Research Projects Agency (DARPA) for the first phase of "Integrated Learner," which will learn plans or processes after being shown a single example.
The goal is to combine specialised domain knowledge with common sense knowledge to create a reasoning system that learns as well as a person and can be applied to a variety of complex tasks. Such a system will significantly expand the kinds of tasks that a computer can learn.
Read the full article
23:49 Posted in AI & robotics | Permalink | Comments (0) | Tags: artificial intelligence
Video games can improve performance in vision tasks
Three years ago, C. Shawn Green and Daphne Bavelier of the University of Rochester conducted a study in which they found that avid video game players were better at several different visual tasks compared to non-gamers ("Action Video Game Modifies Visual Attention," Nature, 2003). In particular, the study showed that video game players had increased visual attention capacity on a flanker distractor task, as well as improved ability to subitize (subitizing is the ability to enumerate a small array of objects without overtly counting each item).
The same authors have now completed a follow-up study, published in the current issue of Cognition. The new experiment's findings suggest that the data previously interpreted as supporting an increase in subitizing may actually reflect the deployment of a serial counting strategy on the part of the video game players.
23:35 Posted in Brain training & cognitive enhancement | Permalink | Comments (0)
BrainGate
In a study published in the journal Nature this week, researchers from Boston-based Cyberkinetics Neurotechnology Systems describe how two paralyzed patients with a surgically implanted neural device successfully controlled a computer and, in one case, a robotic arm, using only their thoughts.
These findings include the ability to voluntarily generate signals in the dorsal pre-motor cortex, the area of the brain responsible for the planning, selection and execution of movement. While accuracy levels have been previously published, the current study reveals unprecedented speed in retrieving and interpreting the neural signals that can be applied to the operation of external devices that require fast, accurate selections, such as typing.
The brain-computer interface used in the study consists of an internal sensor to detect brain cell activity and external processors that convert these brain signals into a computer-mediated output under the person's own control.
According to John Donoghue, Chief Scientific Officer of Cyberkinetics, and a co-inventor of the BrainGate technology, "The results achieved from this study demonstrate the utility and versatility of Cyberkinetics' neural sensing technology to achieve very rapid, accurate decoding - about as fast as humans ordinarily make decisions to move when asked. The contributions of complementary research with our electrode and data acquisition technology should enhance our development of the BrainGate System in its ability to, one day, enable those with severe paralysis or other neurological conditions to lead more independent lives."
See video here
23:04 Posted in Brain-computer interface | Permalink | Comments (0)
Jul 11, 2006
AIIM: call for papers on Wearable Systems for Healthcare Applications
Source: Artificial Intelligence in Medicine Journal
Advances in body-worn sensors, mobile computing, and ubiquitous networking have led to a wide range of new applications in areas related to healthcare. These include intelligent health monitoring, assisted living systems, novel intelligent information delivery devices for medical personnel, and new asset and process management methods for hospitals. As diverse as the above applications are, most of them have one thing in common: reliance on a degree of system intelligence. Such intelligence is needed to adapt the system functionality to the specific situation the user is in, simplify the user interface, allow relevant data to be extracted from physiological sensors despite motion artifacts and the use of simple sensors, or provide altogether new types of functionality related to the user's environment. While work on wearable systems mostly takes place outside the classical AI community, it strongly relies on methods from AI such as pattern recognition, Bayesian modeling, and time series analysis. The aim of this special issue is to bring this new field to the attention of the medical AI community through a collection of outstanding research articles. Relevant topics will include, but not be limited to:
1. Novel body worn sensors and sensor systems enabling intelligent health care applications
2. Novel signal processing methods relevant to intelligent wearable applications in healthcare
3. Activity and context recognition methods relevant to healthcare applications
4. Applications of intelligent wearable systems in health care related areas.
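As a toy illustration of the kind of system intelligence the call describes, the sketch below extracts per-window mean and standard deviation features from a motion signal and applies a simple threshold rule. The window size, threshold, and labels are illustrative assumptions, not taken from any published system.

```python
import math

def window_features(samples, window=50):
    """Split a 1-D sensor stream (e.g. accelerometer magnitude) into
    non-overlapping windows and compute (mean, std) per window -- the
    basic feature-extraction step in many wearable activity-recognition
    pipelines."""
    feats = []
    for i in range(0, len(samples) - window + 1, window):
        w = samples[i:i + window]
        mean = sum(w) / window
        var = sum((s - mean) ** 2 for s in w) / window
        feats.append((mean, math.sqrt(var)))
    return feats

def classify(feat, rest_std=0.05):
    """Toy rule: low variability suggests the wearer is at rest,
    otherwise active. The threshold is purely illustrative."""
    _mean, std = feat
    return "rest" if std < rest_std else "active"

# A flat signal classifies as rest; an oscillating one as active.
labels = [classify(f) for f in window_features([1.0] * 50 + [0.0, 2.0] * 25)]
```

Real systems replace the threshold rule with trained classifiers (the pattern recognition and Bayesian modeling methods the call mentions), but the windowing-then-features structure is the common skeleton.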
The focus of the issue is on high-quality, previously unpublished research. However, outstanding overview articles will also be considered. All submissions will undergo a strict peer review process. In general, the acceptance rate of AIIM is around 30%.
Submission and Relevant Dates:
Authors are invited to submit their contributions of about 20 pages (1.5 line spacing) in PDF format to paul.lukowicz@umit.at. The relevant dates for the special issue are:
1. Oct 31st 2006: submission deadline
2. Feb 1st 2007: notification of acceptance
3. Apr 1st 2007: Final versions due
Guest Editor:
Paul Lukowicz, Chair for Embedded Systems, University of Passau, Germany
17:40 Posted in Call for papers | Permalink | Comments (0)
Jul 10, 2006
ECCE 13 - Zurich, Switzerland
KEYNOTE SPEAKERS
Marc Bourgois, EUROCONTROL Experimental Centre
Stefana Broadbent, Swisscom Innovations
TECHNICAL CHAIR
Erik Hollnagel
CONFERENCE CO-CHAIRS
Antonio Rizzo, Gudela Grote, William Wong
INTERNATIONAL ORGANISING COMMITTEE
Gudela Grote, Antonio Rizzo, William Wong, Peter Wright, Willem-Paul Brinkman, Tjerk Van Der Schaaf, Erik Hollnagel, Jose Canas, Sebastiano Bagnara, Vincent Grosjean, Victor Kaptelinin, Clive Warren.
21:13 Posted in Positive Technology events | Permalink | Comments (0) | Tags: human-computer interaction
Convivio 2006
Via Usability News
Convivio 2006 is an Interaction Design summer school sponsored by the Convivio Network, an international and interdisciplinary consortium of designers and researchers that provides an infrastructure supporting the development of "convivial technologies" - ICT products, systems and services that enhance the quality of life and human interaction.
The focus of the upcoming (August 14 - 25, 2006) summer school session in Edinburgh, Scotland is "Visions, Boundaries and Transformations in Extending or Replacing Human Capacities."
The intensive 2-week summer school combines lectures, user research, atelier work, interaction design methods and social activities. This session follows previous summers in Ivrea, Italy; Rome, Italy; Split, Croatia; and Timisoara, Romania.
Masters and PhD students as well as new professionals are encouraged to apply here
Applications are due June 10, 2006.
To learn more about the summer school go here
21:11 Posted in Positive Technology events | Permalink | Comments (0) | Tags: augmented cognition
Jul 08, 2006
Emotionally aware computer
According to The Herald, Cambridge professor Peter Robinson has developed a prototype of an “emotionally aware computer” that uses a camera to capture images of the user’s face, then determines facial expressions, and infers the user’s mood.
From the report:
"Imagine a computer that could pick the right emotional moment to sell you something," says Peter Robinson, of Cambridge University. "Imagine a future where websites and mobile phones could read our mind and react to our moods."
It sounds like Orwellian fiction but this week, Robinson, a professor of computer technology, unveiled a prototype for just such a "mind-reading" machine. The first emotionally aware computer is on trial at the Royal Society Festival of Science in London… Once the software is perfected, Robinson believes it will revolutionise marketing. Cameras will be on computer monitors in internet cafes and behind telescreens in bars and waiting rooms. Computers will process our image and respond with adverts that connect to how we're feeling.
20:43 Posted in Emotional computing | Permalink | Comments (0) | Tags: emotional computing
Cognitive Computing symposium at IBM
Via Neurodudes
Recently, the Almaden Research Center, part of IBM Research, invited some provocative speakers for a discussion on the topic of "Cognitive Computing".
Powerpoint presentations and videos of the event are available online.
From the synopsis:
The 2006 Almaden Institute will focus on the theme of "Cognitive Computing" and will examine scientific and technological issues around the quest to understand how the human brain works. We will examine approaches to understanding cognition that unify neurological, biological, psychological, mathematical, computational, and information-theoretic insights. We focus on the search for global, top-down theories of cognition that are consistent with known bottom-up, neurobiological facts and serve to explain a broad range of observed cognitive phenomena. The ultimate goal is to understand how and when we can mechanize cognition.
Confirmed speakers include Toby Berger (Cornell), Gerald Edelman (The Neurosciences Institute), Joaquin Fuster (UCLA), Jeff Hawkins (Palm/Numenta), Robert Hecht-Nielsen (UCSD), Christof Koch (CalTech), Henry Markram (EPFL/BlueBrain), V. S. Ramachandran (UCSD), John Searle (UC Berkeley) and Leslie Valiant (Harvard). Confirmed panelists include: James Albus (NIST), Theodore Berger (USC), Kwabena Boahen (Stanford), Ralph Linsker (IBM), and Jerry Swartz (The Swartz Foundation).
20:38 Posted in Brain training & cognitive enhancement | Permalink | Comments (0) | Tags: cognitive computing
Jul 06, 2006
Biofeedback for neuromotor rehabilitation
Recent developments of biofeedback for neuromotor rehabilitation.
J Neuroengineering Rehabil. 2006 Jun 21;3(1):11
Authors: Huang H, Wolf SL, He J
ABSTRACT: The original use of biofeedback to train single muscle activity in static positions or movement unrelated to function did not correlate well to motor function improvements in patients with central nervous system injuries. The concept of task-oriented repetitive training suggests that biofeedback therapy should be delivered during functionally related dynamic movement to optimize motor function improvement. Current, advanced technologies facilitate the design of novel biofeedback systems that possess diverse parameters, advanced cue display, and sophisticated control systems for use in task-oriented biofeedback. In light of these advancements, this article: (1) reviews early biofeedback studies and their conclusions; (2) presents recent developments in biofeedback technologies and their applications to task-oriented biofeedback interventions; and (3) discusses considerations regarding the therapeutic system design and the clinical application of task-oriented biofeedback therapy. This review should provide a framework to further broaden the application of task-oriented biofeedback therapy in neuromotor rehabilitation.
00:20 Posted in Biofeedback & neurofeedback | Permalink | Comments (0) | Tags: biofeedback, neurofeedback