
Jul 19, 2006

BACS project


BACS (Bayesian Approach to Cognitive Systems) is an Integrated Project under the European Commission's 6th Framework Programme that has been allocated EUR 7.5 million in funding.
 
The BACS project brings together researchers and commercial companies working on artificial perception systems potentially capable of dealing with complex tasks in everyday settings.
 
From the project's website:
 
Contemporary robots and other cognitive artifacts are not yet ready to autonomously operate in complex real world environments. One of the major reasons for this failure in creating cognitive situated systems is the difficulty in the handling of incomplete knowledge and uncertainty.
 

 

By taking up inspiration from the brains of mammals, including humans, the BACS project will investigate and apply Bayesian models and approaches in order to develop artificial cognitive systems that can carry out complex tasks in real world environments. The Bayesian approach will be used to model different levels of brain function within a coherent framework, from neural functions up to complex behaviors. The Bayesian models will be validated and adapted as necessary according to neuro-physiological data from rats and humans and through psychophysical experiments on humans. The Bayesian approach will also be used to develop four artificial cognitive systems concerned with (i) autonomous navigation, (ii) multi-modal perception and reconstruction of the environment, (iii) semantic facial motion tracking, and (iv) human body motion recognition and behavior analysis. The conducted research shall result in a consistent Bayesian framework offering enhanced tools for probabilistic reasoning in complex real world situations. The performance will be demonstrated through its applications to driver assistant systems and 3D mapping, both very complex real world tasks.
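The core operation behind a Bayesian approach of this kind is the update of a belief distribution with new, uncertain evidence. As a purely illustrative aside (not taken from the BACS project; the corridor map and sensor error rates below are made up), here is a minimal Python sketch of a discrete Bayes filter localising a robot from a noisy door sensor:

def bayes_update(prior, likelihood):
    """Posterior is proportional to likelihood * prior, normalised to sum to 1."""
    unnormalised = [l * p for l, p in zip(likelihood, prior)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

# Corridor with 5 cells; doors at cells 0, 2 and 4 (hypothetical map).
doors = [1, 0, 1, 0, 1]

# Uniform prior: the robot has no idea where it is.
prior = [0.2] * 5

# The sensor reports "door": the likelihood is high at door cells and low
# elsewhere (the 0.8 hit rate and 0.1 false-positive rate are made-up numbers).
likelihood = [0.8 if d else 0.1 for d in doors]

posterior = bayes_update(prior, likelihood)
print(posterior)  # belief concentrates on the three door cells

Repeating this update with further sensor readings, with a motion step in between, is the kind of probabilistic reasoning the project aims to scale up to autonomous navigation and multi-modal perception.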

 

Brain box

BBC News, July 17, 2006

BBC reports that researchers from the University of Manchester are developing a new biologically-inspired computer, which mimics the complex interactions between brain neurons.

The computer will be designed with the aim of modelling large numbers of neurons in real time and tracking patterns of neural spikes as they occur in the brain. It will be built from large numbers of simple microprocessors designed to interact like the networks of neurons found in the brain. The aim will be to place dozens of microprocessors on a single silicon chip, reducing the cost and power consumption of the computer.
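As a rough illustration of the kind of computation such a chip would run many times over in parallel, here is a minimal Python sketch of a single leaky integrate-and-fire neuron; all constants are illustrative and none are taken from the Manchester design:

dt       = 0.1    # time step (ms)
tau_m    = 10.0   # membrane time constant (ms)
v_rest   = -65.0  # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset  = -65.0  # reset potential after a spike (mV)
r_m      = 10.0   # membrane resistance (arbitrary units)
i_input  = 2.0    # constant input current (arbitrary units)

v = v_rest
spike_times = []
for step in range(int(200 / dt)):           # simulate 200 ms
    dv = (-(v - v_rest) + r_m * i_input) / tau_m
    v += dv * dt
    if v >= v_thresh:                       # threshold crossed: emit a spike
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 200 ms")

A neuromorphic machine of the kind described would simulate very large numbers of such units, exchanging spikes between processors rather than sharing a single memory.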

 

Read the original article

Jul 18, 2006

The Significance of Sigma Neurofeedback Training on Sleep Spindles and Aspects of Declarative Memory


Appl Psychophysiol Biofeedback. 2006 Jul 15;

Authors: Berner I, Schabus M, Wienerroither T, Klimesch W

The functional significance of sleep spindles for overnight memory consolidation and general learning aptitude as well as the effect of four 10-minute sessions of spindle frequency (11.6-16 Hz, sigma) neurofeedback training on subsequent sleep spindle activity and overnight performance change was investigated. Before sleep, subjects were trained on a paired-associate word list task after having received either neurofeedback training (NFT) or pseudofeedback training (PFT). Although NFT had no significant impact on subsequent spindle activity and behavioral outcomes, there was a trend for enhanced sigma band-power during NREM (stage 2 to 4) sleep after NFT as compared to PFT. Furthermore, a significant positive correlation between spindle activity during slow wave sleep (in the first night half) and overall memory performance was revealed. The results support the view that the considerable inter-individual variance in sleep spindle activity can at least be partly explained by differences in the ability to acquire new declarative information. We conclude that the short NFT before sleep was not sufficient to efficiently enhance phasic spindle activity and/or to influence memory processing. NFT was, however, successful in increasing sigma power, presumably because sigma NFT effects become more easily evident in actually trained frequency bands than in associated phasic spindle activity.
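For illustration only, the snippet below sketches how power in the sigma band (11.6-16 Hz) can be estimated from a single EEG channel using Welch's method. It is not the authors' analysis pipeline; the synthetic signal and the sampling rate are placeholders:

import numpy as np
from scipy.signal import welch

fs = 256                                    # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)                # 30 s of data
# Synthetic EEG: a 13 Hz "spindle-like" oscillation buried in noise.
eeg = 2.0 * np.sin(2 * np.pi * 13 * t) + np.random.randn(t.size)

f, psd = welch(eeg, fs=fs, nperseg=4 * fs)  # power spectral density
sigma = (f >= 11.6) & (f <= 16.0)
sigma_power = np.trapz(psd[sigma], f[sigma])  # integrate the PSD over the band

print(f"sigma band power: {sigma_power:.2f} (arbitrary units squared)")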

Workshop on Emotion in HCI - London, UK

From Usability News 

The topic of emotion in Human-Computer Interaction is of increasing interest to the HCI community. Since Rosalind Picard's fundamental publications on affective computing, research in this field has gained significant momentum.

Emotion research is largely grounded in psychology yet spans numerous other disciplines. The challenge for such an interdisciplinary research area is to develop the common vocabulary and research framework that a mature discipline requires. Advanced and serious work in this field increasingly needs to be placed on a rigorous footing, which includes developing the theoretical fundamentals of HCI-related emotion research, understanding the function of emotions in HCI, addressing ethical and legal issues, and working out the practical implications and consequences for the HCI community.

The first workshop on emotion in HCI held in Edinburgh last year brought an interdisciplinary group of practitioners and researchers together for a lively exchange of ideas, discussion of common problems, and identification of domains to explore.

This year's workshop will build on the success of last year. Focus will be on discussion and joint work on selected topics. Participants will engage in developing further the themes from the first workshop in as wide an application spectrum as possible, such as internet applications, ambient intelligence, office work, control rooms, mobile computing, virtual reality, presence, and home applications.

You are cordially invited to become part of this interdisciplinary forum. This will be a very practical workshop with the participants working together to find new insights, views, ideas and solutions. We therefore invite contributions which will enrich the discussions by their innovative content, fundamental nature, or new perspective. We also encourage demos of products or prototypes related to the topic.
Topics addressed by the workshop are:
- How do applications currently make use of emotions, and how could this use be improved?
- What makes applications that support affective interactions successful?
- How do we know if affective interactions are successful, and how can we measure this success?
- What value might affective applications, affective systems, and affective interaction have?
- What requirements on sensing technologies are there in HCI?
- What technology is currently available for sensing affective states?
- How reliable is sensing technology?
- Are there reliable and replicable processes to include emotion in HCI design projects?
- What opportunities and risks are there in designing affective applications?
- What are the relationships between emotion, affect, personality, and engagement, and what do they mean for interactive systems design?

To become part of this discussion please submit an extended abstract of your ideas or demo description. Case studies describing current applications or prototypes are strongly encouraged, as well as presentations of products or prototypes that you have developed.


The abstract should be limited to about 800 words. Accepted contributions will be published on the workshop's homepage, with the possibility of extending them into short papers of 4 pages. It is also planned to produce a special issue of a journal on the results of the workshop.

Please note that registration to the HCI conference is required in order to take part in the workshop (at least for the day of the workshop). Early bird registration deadline is 21st July.

Dates:


27 June - position paper deadline
11 July - notification of acceptance
21 July - early registration deadline
12 September - workshop

MODIE: Modelling and Designing User Assistance in Intelligent Environments

Re-blogged from Usability News
 
Ubicomp research continually develops novel interaction techniques, sensing technologies, and new ways of presenting personalized information to the user. Gradually, companies operating in environments such as airports, museums or even shopping malls are becoming aware of the potential benefits in letting such technologies assist their users and customers. Intelligent environments are predicted to aid their users in pursuing their activities, such as wayfinding or shopping, through the situated presentation of personalized information. However, due to the large design space that ranges from wearable computing to public displays, the conceptual and technological choices pose new challenges to the designer of such user-assistance systems.

Themes & Topics

We are interested in models, principles and methodologies that guide the designer of an intelligent environment in the early stages of the development process, such as task and requirements analysis and conceptual design. We are looking for contributions that will help the designer of user assistance systems address the following questions:

- Which user activities and tasks require assistance?
- How to map an activity model into interactions with artifacts?
- How should the designer choose the best sensing and interaction technologies for a scenario?
- Which mobile or wearable personal devices can be employed?
- How should multiple users with concurrent activities be supported?
- How should the current state of the user assistance system be represented, especially when dealing with multiple tasks?

The intention of the workshop is to share experiences and perspectives on user assistance in intelligent environments from the different view-points of developers, designers, ethnographers and cognitive scientists. Each participant will give a short presentation about their contribution. The second half of the workshop will be focused on the discussion of key topics:

- How to unify the complementary concepts of public and personal devices
- How to model user activity (terminology, structure, notation)
- Suggest a terminology for Intelligent Environments
- How tools can support the modelling and designing of user assistance
- What the problems of applying traditional software engineering methodologies are
- Are there principles that can be generalized for the design of IEs?

Intended Participants

We encourage researchers from the following disciplines to contribute position papers (2-6 pages) and knowledge to the discussion:

  • Computer Scientists (in the fields of Mobile HCI and Intelligent Environments): contribute experiences with working prototypes, discuss technical issues.
  • Designers: contribute new paradigms and concepts, discuss existing environments and current solutions for presenting information to the user; what might the future look like?
  • Ethnographers: contribute an analysis of user activities and problems in current environments, discuss application areas for Intelligent Environments.
  • Cognitive Scientists: contribute design principles for Intelligent Environments based on their knowledge of the limited resources of the human processor, discuss pros and cons of interaction paradigms, concepts and technologies.

Workshop Format

MODIE will be a full-day workshop. Each participant will give a short presentation on their position and experience in dealing with one or several of the workshop topics. It is assumed that the participants are already familiar with the position papers, which will be available as online proceedings prior to the workshop. In the afternoon, the participants will split into small groups to discuss interesting research topics. Afterwards, each group will present and discuss its results.


Important Dates
---------------
Submission Deadline July 10, 2006
Acceptance Notification July 13, 2006
Workshop Date September 12, 2006

PhD studentship on pervasive tech - Open University, Milton Keynes, UK

 
Deadline: 1 September 2006

Applications are welcome for a University-funded PhD studentship in the Computing Department at the Open University, to work with Prof. Yvonne Rogers's new research group (she joins the department this summer) at the cutting edge of HCI and pervasive technologies. Topics for research include exploring the benefits of tangibles, physical computing, and shared displays for collaborative activities such as learning and problem-solving. Candidates must have a background in HCI, cognitive science/psychology or computing.

Starting Date: October 2006.
 
Please send CV and informal inquiries to yrogers@indiana.edu

A high-performance brain-computer interface


Nature. 2006 Jul 13;442(7099):195-8

Authors: Santhanam G, Ryu SI, Yu BM, Afshar A, Shenoy KV

Recent studies have demonstrated that monkeys and humans can use signals from the brain to guide computer cursors. Brain-computer interfaces (BCIs) may one day assist patients suffering from neurological injury or disease, but relatively low system performance remains a major obstacle. In fact, the speed and accuracy with which keys can be selected using BCIs is still far lower than for systems relying on eye movements. This is true whether BCIs use recordings from populations of individual neurons using invasive electrode techniques or electroencephalogram recordings using less- or non-invasive techniques. Here we present the design and demonstration, using electrode arrays implanted in monkey dorsal premotor cortex, of a manyfold higher performance BCI than previously reported. These results indicate that a fast and accurate key selection system, capable of operating with a range of keyboard sizes, is possible (up to 6.5 bits per second, or approximately 15 words per minute, with 96 electrodes). The highest information throughput is achieved with unprecedentedly brief neural recordings, even as recording quality degrades over time. These performance results and their implications for system design should substantially increase the clinical viability of BCIs in humans.
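As a rough back-of-the-envelope check of the throughput quoted above, the sketch below converts a bit rate of 6.5 bits per second into words per minute. The 32-key keyboard and the 5-characters-per-word convention are assumptions used only for illustration, not details taken from the paper:

import math

bit_rate = 6.5                      # bits per second, as reported
n_keys = 32                         # hypothetical keyboard size
bits_per_key = math.log2(n_keys)    # 5 bits if every selection were correct

keys_per_second = bit_rate / bits_per_key
chars_per_word = 5                  # common typing convention
wpm = keys_per_second * 60 / chars_per_word

print(f"{wpm:.1f} words per minute")  # roughly 15-16 wpm, in line with the abstract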

TMS can improve subitizing ability

Re-blogged from Omnibrain 

A joint study by the Australian National University and the University of Sydney investigated whether repetitive transcranial magnetic stimulation (TMS) can improve a healthy person's ability to accurately estimate the number of elements in a scene, the London Telegraph reported.


The study was published this month in the journal Perception.



"TV for the brain" patented by Sony

Via Omnibrain 
 
Two recent Sony patents (#6,536,440 and #6,729,337) titled "Method and System for Generating Sensory Data onto the Human Neural Cortex" and a patent application (#20040267118) titled "Scanning Method for Applying Ultrasonic Acoustic Data to the Human Neural Cortex" describe a noninvasive way to create sensory perceptions across the neural cortex. For example, "imagery captured from a video camera is converted into neural timing difference data" and scanned across the brain as "pulsed ultrasonic signals that modify the firing rate of the neural tissue." In this manner, "sensory experiences arise from the differences in neural firing times."

Definition of neuroinformatics

The website Pharmabiz provides a good definition of Neuroinformatics:

Work in this discipline focuses on the integration of neuroscientific information from the level of the genome to the level of human behavior. A major goal of this new discipline is to produce digital capabilities for a web-based information management system in the form of databases and associated data management tools. These databases and software tools are designed for the benefit of neuroscientists, behavioral scientists, clinicians and educators, in an effort to better understand brain structure, function, and development. Some of the databases developed in neuroinformatics are the Surface Management System (SuMS), the fMRIDC, BrainMap, BrainInfo, X-Anat, the Brain Architecture Management System (BAMS), the Ligand Gated Ion Channel database (LGICdb), ModelDB, and the Probabilistic atlas and reference system for the human brain. Most of these databases are freely available over the internet; they gather detailed information in one place and support neuroscience research. Some commonly used neuroinformatics software tools include GENESIS, NEURON, Catacomb, Channelab, HHsim, NEOSIM, NANS, SNNAP, etc.

Data sharing is not the only application of neuroinformatics. Computational modeling of ion channels, parts of neurons, whole neurons and even neural networks helps in understanding the complex nervous system and how it works; this type of modeling overlaps considerably with systems biology and also benefits from bioinformatics databases. In India, neuroinformatics research is currently carried out mainly at the National Brain Research Centre, Gurgaon, under the Department of Biotechnology, Government of India. Computational modeling of the various processes related to neuroscience helps in understanding brain function in both normal and disordered states, and several efforts in this direction are in progress.
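To make the phrase "computational modeling of ion channels" more concrete, here is a minimal Python sketch of the steady-state activation curve of the Hodgkin-Huxley sodium-channel gating variable m, computed from its standard rate functions; simulators such as NEURON and GENESIS wrap far more complete versions of this kind of calculation:

import numpy as np

def alpha_m(v):
    """Opening rate of the m gate as a function of membrane potential (mV)."""
    return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))

def beta_m(v):
    """Closing rate of the m gate as a function of membrane potential (mV)."""
    return 4.0 * np.exp(-(v + 65.0) / 18.0)

# Sample voltages chosen to avoid the removable singularity at -40 mV.
voltages = np.arange(-75.0, 25.0, 10.0)
m_inf = alpha_m(voltages) / (alpha_m(voltages) + beta_m(voltages))

for v, m in zip(voltages, m_inf):
    print(f"V = {v:6.1f} mV  ->  m_inf = {m:.3f}")  # channels open as V rises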

BrainMap

BrainMap is an online database of published functional neuroimaging experiments with coordinate-based (Talairach) activation locations. The goal of BrainMap is to provide a vehicle to share methods and results of brain functional imaging studies. It is a tool to rapidly retrieve and understand studies in specific research domains, such as language, memory, attention, reasoning, emotion, and perception, and to perform meta-analyses of like studies. 

BrainMap was created and developed by Peter T. Fox and Jack L. Lancaster of the Research Imaging Center of the University of Texas Health Science Center San Antonio.

Second Geoethical Nanotechnology workshop

Re-blogged from KurzweilAI.net

The Terasem Movement announced today that its Second Geoethical Nanotechnology workshop will be held July 20, 2006 in Lincoln, Vermont. The public is invited to participate via conference call.

The workshop will explore the ethics of neuronanotechnology and future mind-machine interfaces, including preservation of consciousness, implications for a future in which human and digital species merge, and dispersion of consciousness to the cosmos, featuring leading scientists and other experts in these areas.

The workshop proceedings are open to the public via real-time conference call and will be archived online for free public access. The public is invited to call a toll-free conference-call dial-in line from 9:00 a.m. - 6:00 p.m. ET. Callers from the continental US and Canada can dial 1-800-967-7135; other countries: (00+1) 719-457-2626.

Each workshop presentation is designed for a 15-20 minute delivery, followed by a 20-minute formal question and answer period, during which questions from the worldwide audience will be invited. Presentations will also be available on the workshop's website.

Novel BCI device will allow people to search through images faster

Via KurzweilAI.net 

Researchers at Columbia University are combining the processing power of the human brain with computer vision to develop a novel device that will allow people to search through images ten times faster than they can on their own. 

The "cortically coupled computer vision system," known as C3 Vision, is the brainchild of professor Paul Sajda, director of the Laboratory for Intelligent Imaging and Neural Computing at Columbia University. He received a one-year, $758,000 grant from Darpa for the project in late 2005.

The brain emits a signal as soon as it sees something interesting, and that "aha" signal can be detected by an electroencephalogram, or EEG cap. While users sift through streaming images or video footage, the technology tags the images that elicit a signal, and ranks them in order of the strength of the neural signatures. Afterwards, the user can examine only the information that their brains identified as important, instead of wading through thousands of images.
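The ranking step can be pictured with a small sketch: give each presented image a score derived from the EEG response it evoked, then review the images in descending score order. The scoring function below (mean amplitude in a 300-500 ms post-stimulus window) is only a placeholder, not the classifier used in C3 Vision:

import numpy as np

def interest_score(epoch, fs=256, window=(0.3, 0.5)):
    """Mean EEG amplitude 300-500 ms after image onset (placeholder feature)."""
    start, stop = int(window[0] * fs), int(window[1] * fs)
    return float(np.mean(epoch[start:stop]))

# Fake data: one 1-second, single-channel EEG epoch per presented image.
rng = np.random.default_rng(0)
epochs = {f"image_{i:03d}.jpg": rng.standard_normal(256) for i in range(10)}

ranked = sorted(epochs, key=lambda name: interest_score(epochs[name]),
                reverse=True)
print(ranked[:3])  # the three images the EEG flagged most strongly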

Read the full story on Wired 

Jul 17, 2006

Cellphones could soon have a tactile display

Via New Scientist 

According to New Scientist, haptic devices (i.e. devices that stimulate our sense of touch) will add a new dimension to communications, entertainment and computer control for everybody, and for people with visual impairment they promise to transform everyday life. One proposed device consists of a headband that imprints the shape of objects in front of it onto the wearer's forehead, something that visually impaired people could find a great help when navigating through a cluttered environment. Moreover, cellphones could soon have a tactile "display", and portable gadgets containing a GPS device will be able to nudge you towards your desired destination.

Read the full article 

Computers learn common sense

Via The Engineer, July 11, 2006

BBN Technologies has been awarded $5.5 million in funding from the Defense Advanced Research Projects Agency (DARPA) for the first phase of "Integrated Learner," which will learn plans or processes after being shown a single example.

The goal is to combine specialised domain knowledge with common sense knowledge to create a reasoning system that learns as well as a person and can be applied to a variety of complex tasks. Such a system will significantly expand the kinds of tasks that a computer can learn.

Read the full article 

Video games can improve performance in vision tasks

Via Developing Intelligence 

Three years ago, C. Shawn Green and Daphne Bavelier of the University of Rochester conducted a study in which they found that avid video game players were better at several different visual tasks compared to non-gamers ("Action Video Game Modifies Visual Attention," Nature, 2003). In particular, the study showed that video game players had increased visual attention capacity on a flanker distractor task, as well as improved ability to subitize (subitizing is the ability to enumerate a small array of objects without overtly counting each item).

The same authors have now completed a follow-up study, released in the current issue of Cognition. The new experiment's findings suggest that the data previously interpreted as supporting an increase in subitizing may actually reflect the deployment of a serial counting strategy on the part of the video-game players.

BrainGate

In a study published in the journal Nature this week, researchers from Boston-based Cyberkinetics Neurotechnology Systems describe how two paralyzed patients with a surgically implanted neural device successfully controlled a computer and, in one case, a robotic arm, using only their thoughts. 

These findings include the ability to voluntarily generate signals in the dorsal pre-motor cortex, the area of the brain responsible for the planning, selection and execution of movement. While accuracy levels have been previously published, the current study reveals unprecedented speed in retrieving and interpreting the neural signals that can be applied to the operation of external devices that require fast, accurate selections, such as typing.

The brain-computer interface used in the study consists of an internal sensor to detect brain cell activity and external processors that convert these brain signals into a computer-mediated output under the person's own control.
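As a toy example of that conversion step (not Cyberkinetics' decoder), the sketch below applies a linear mapping from a vector of electrode firing rates to a 2-D cursor velocity. In a real system the weights would be fitted to calibration data recorded while the user imagines movements; here they are random placeholders:

import numpy as np

rng = np.random.default_rng(1)
n_electrodes = 96                    # one firing-rate estimate per electrode
W = rng.standard_normal((2, n_electrodes)) * 0.01   # decoding weights (made up)
baseline = 10.0                                      # baseline firing rate (Hz)

def decode_velocity(firing_rates):
    """Map firing rates (spikes/s) to a cursor velocity vector (dx, dy)."""
    return W @ (np.asarray(firing_rates) - baseline)

rates = rng.poisson(lam=12.0, size=n_electrodes)     # one fake rate sample
print(decode_velocity(rates))                        # a 2-D velocity [dx, dy]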


 

According to John Donoghue, Chief Scientific Officer of Cyberkinetics, and a co-inventor of the BrainGate technology, "The results achieved from this study demonstrate the utility and versatility of Cyberkinetics' neural sensing technology to achieve very rapid, accurate decoding - about as fast as humans ordinarily make decisions to move when asked. The contributions of complementary research with our electrode and data acquisition technology should enhance our development of the BrainGate System in its ability to, one day, enable those with severe paralysis or other neurological conditions to lead more independent lives."

See video here 

Jul 11, 2006

AIIM: call for papers on Wearable Systems for Healthcare Applications

Source: Artificial Intelligence in Medicine Journal

Advances in body-worn sensors, mobile computing, and ubiquitous networking have led to a wide range of new applications in areas related to healthcare. These include intelligent health monitoring, assisted-living systems, novel intelligent information delivery devices for medical personnel, and new asset and process management methods for hospitals. As diverse as the above applications are, most of them have one thing in common: reliance on a degree of system intelligence. Such intelligence is needed to adapt the system functionality to the specific situation that the user is in, simplify the user interface, allow relevant data to be extracted from physiological sensors despite motion artifacts and the use of simple sensors, or provide altogether new types of functionality related to the user's environment. While the work on wearable systems mostly takes place outside the classical AI community, it strongly relies on methods from AI such as pattern recognition, Bayesian modeling and time series analysis. The aim of this special issue is to bring this new field to the attention of the medical AI community through a collection of outstanding research articles. Relevant topics will include but are not limited to:

1. Novel body worn sensors and sensor systems enabling intelligent health care applications
2. Novel signal processing methods relevant to intelligent wearable applications in healthcare
3. Activity and context recognition methods relevant to healthcare applications
4. Applications of intelligent wearable systems in health care related areas.

The focus of the issue is on high-quality, not yet published research work. However, outstanding overview articles will also be considered. All submissions will undergo a strict peer review process. In general, the acceptance rate of AIIM is around 30%.

Submission and Relevant Dates:

Authors are invited to submit their contributions of about 20 pages (1.5 line spacing) in PDF format to paul.lukowicz@umit.at. The relevant dates for the special issue are:
1. Oct 31st 2006: submission deadline
2. Feb 1st 2007: notification of acceptance
3. Apr 1st 2007: Final versions due

Guest Editor:
Paul Lukowicz, Chair for Embedded Systems, University of Passau, Germany


Jul 10, 2006

ECCE 13 - Zurich, Switzerland

 
Event Date: 20 September 2006 to 22 September 2006
Early-bird registration for ECCE-13 is open until 26 August 2006.
 
The main theme of ECCE-13 is trust and control in complex socio-technical systems, including, of course, single human-computer systems within larger systems. The horizons of cognitive ergonomics are expanding. With distributed and highly interconnected systems, the control of these systems becomes ever more demanding. Can the controllability of systems still be secured in technology design? Does trust have to (partially) replace control, and what consequences does that have for the distribution of responsibility for the correct and safe functioning of socio-technical systems? How can the coordination requirements and the management of uncertainty in systems with multiple human and artificial actors be better supported? The conference seeks to encourage dialogue among the diverse disciplines contributing to studies of the psychological, social and cultural aspects of technology use or technology design.

KEYNOTE SPEAKERS
Marc Bourgois, EUROCONTROL Experimental Centre
Stefana Broadbent, Swisscom Innovations

TECHNICAL CHAIR
Erik Hollnagel

CONFERENCE CO-CHAIRS
Antonio Rizzo, Gudela Grote, William Wong

INTERNATIONAL ORGANISING COMMITTEE
Gudela Grote, Antonio Rizzo, William Wong, Peter Wright, Willem-Paul Brinkman, Tjerk Van Der Schaaf, Erik Hollnagel, Jose Canas, Sebastiano Bagnara, Vincent Grosjean, Victor Kaptelinin, Clive Warren.

Convivio 2006

Via Usability News 


 

Convivio 2006 is an Interaction Design summer school sponsored by the Convivio Network, an international and interdisciplinary consortium of designers and researchers that provides an infrastructure supporting the development of "convivial technologies" - ICT products, systems and services that enhance the quality of life and human interaction.

The focus of the upcoming (August 14-25, 2006) summer school session in Edinburgh, Scotland, is "Visions, Boundaries and Transformations in Extending or Replacing Human Capacities."

The intensive 2-week summer school combines lectures, user research, atelier work, interaction design methods and social activities. This session follows previous summers in Ivrea, Italy; Rome, Italy; Split, Croatia; and Timisoara, Romania.

Master's and PhD students, as well as new professionals, are encouraged to apply here

Applications are due June 10, 2006.

To learn more about the summer school go here