Nov 21, 2006
Self-regulation of slow cortical potentials: a new treatment for children with attention-deficit/hyperactivity disorder
Pediatrics. 2006 Nov;118(5):e1530-40
Authors: Strehl U, Leins U, Goth G, Klinger C, Hinterberger T, Birbaumer N
OBJECTIVE: We investigated the effects of self-regulation of slow cortical potentials for children with attention-deficit/hyperactivity disorder. Slow cortical potentials are slow event-related direct-current shifts of the electroencephalogram. Slow cortical potential shifts in the electrical negative direction reflect the depolarization of large cortical cell assemblies, reducing their excitation threshold. This training aims at regulation of cortical excitation thresholds considered to be impaired in children with attention-deficit/hyperactivity disorder. Electroencephalographic data from the training and the 6-month follow-up are reported, as are changes in behavior and cognition. METHOD: Twenty-three children with attention-deficit/hyperactivity disorder aged between 8 and 13 years received 30 sessions of self-regulation training of slow cortical potentials in 3 phases of 10 sessions each. Increasing and decreasing slow cortical potentials at central brain regions was fed back visually and auditorily. Transfer trials without feedback were intermixed with feedback trials to allow generalization to everyday-life situations. In addition to the neurofeedback sessions, children exercised during the third training phase to apply the self-regulation strategy while doing their homework. RESULTS: For the first time, electroencephalographic data during the course of slow cortical potential neurofeedback are reported. Measurement before and after the trials showed that children with attention-deficit/hyperactivity disorder learn to regulate negative slow cortical potentials. After training, significant improvement in behavior, attention, and IQ score was observed. The behavior ratings included Diagnostic and Statistical Manual of Mental Disorders criteria, number of problems, and social behavior at school and were conducted by parents and teachers. 
The cognitive variables were assessed with the Wechsler Intelligence Scale for Children and with a computerized test battery that measures several components of attention. All changes proved to be stable at 6 months' follow-up after the end of training. Clinical outcome was predicted by the ability to produce negative potential shifts in transfer sessions without feedback. CONCLUSIONS: According to the guidelines for evaluating the efficacy of treatments, the evidence for the efficacy of slow cortical potential feedback found in this study reaches level 2: "possibly efficacious." In the absence of a control group, no causal relationship between observed improvements and the ability to regulate brain activity can be established. However, it could be shown for the first time that good performance in self-regulation predicts clinical outcome. "Good performance" was defined as the ability to produce negative potential shifts in trials without feedback, because it is known that the ability to self-regulate without feedback is impaired in children and adults with attention problems. Additional research should focus on the control of unspecific effects, medication, and subtypes to confirm the assumption that slow cortical potential feedback is a viable treatment option for attention-deficit/hyperactivity disorder. Regulation of slow cortical potentials may involve neurobiological pathways similar to those of medical treatment. It is suggested that regulation of frontocentral negative slow cortical potentials affects the cholinergic-dopaminergic balance and allows children to adapt to task requirements more flexibly.
17:00 Posted in Biofeedback & neurofeedback | Permalink | Comments (0) | Tags: neurofeedback
ENACTIVE Network of Excellence
From the ENACTIVE project website:
The general objective of the ENACTIVE Network is the creation of a multidisciplinary research community with the aim of structuring the research on a new generation of human-computer interfaces called Enactive Interfaces.
Enactive Interfaces are related to a fundamental “interaction” concept which is not exploited by most existing human-computer interface technologies. As the cognitive psychologist Jerome Bruner observed, traditional interaction with computer-mediated information is based mostly on symbolic or iconic knowledge, not on enactive knowledge. In the symbolic mode of learning, knowledge is stored as words, mathematical symbols, or other symbol systems; in the iconic mode, knowledge is stored in the form of visual images, such as the diagrams and illustrations that can accompany verbal information. Enactive knowledge, by contrast, is a form of knowledge based on the active use of the hand for apprehension tasks.
Enactive knowledge is not simply multisensory mediated knowledge, but knowledge stored in the form of motor responses and acquired by the act of "doing". Typical examples of enactive knowledge are the competences required by tasks such as typing, driving a car, dancing, playing a musical instrument, or modelling objects from clay, which would be difficult to describe in an iconic or symbolic form. This type of knowledge transmission can be considered the most direct, in the sense that it is natural and intuitive, since it is based on experience and on the perceptual responses to motor acts.
16:49 Posted in Enactive interfaces | Permalink | Comments (0) | Tags: enactive interfaces
Enactive interfaces
From Wikipedia
Enactive Interfaces are new types of Human-Computer Interface that allow users to express and transmit enactive knowledge by integrating different sensory aspects.
The driving concept of Enactive Interfaces is the fundamental role of motor action in storing and acquiring knowledge (action-driven interfaces). Enactive Interfaces are thus capable of conveying and understanding the gestures of the user, in order to provide an adequate response in perceptual terms. They can be considered a new step in the development of human-computer interaction because they are characterized by a closed loop between the natural gestures of the user (the efferent component of the system) and the perceptual modalities activated (the afferent component). Enactive Interfaces can be designed to exploit this direct loop and the capability of recognising complex gestures.
The development of such interfaces requires a common vision across different research areas such as computer vision, haptics, and sound processing, with greater attention to the motor-action aspect of interaction. A prototypical example of systems that implement Enactive Interfaces is reactive robots: robots that remain in contact with the human hand and are capable of interpreting human movements and guiding the human through the completion of a manipulation task.
16:44 Posted in Enactive interfaces | Permalink | Comments (0) | Tags: enactive interfaces
Three-year residential Ph.D programme in Perceptual Robotics, Telepresence and Virtual Environments
The Sant'Anna School of Advanced and University Studies in Pisa offers the opportunity to undertake in-depth research on Presence and Advanced Robotics technologies for interaction in Virtual Environments and Teleoperation.
The proposed research applications deal with the cognitive aspects of human multimodal interaction based on the act of doing, and are thus closely related to the ongoing activities of the ENACTIVE Network of Excellence.
For more information go here
16:37 Posted in Research institutions & funding opportunities | Permalink | Comments (0) | Tags: funding opportunities
Postdoctoral Research Position on Human Computer Interaction
Multimedia Interaction and Smart Environments group (MISE), CREATE-NET research institute, Trento, ITALY.
Position Description
A full-time, two-year post-doctoral research position in Human Computer Interaction is available starting in January 2007. The research and development activity will be conducted within the IST European project SAMBA on Interactive Digital Television (iDTV), in collaboration with both European and Brazilian partners.
SAMBA is a two-year International Cooperation project led by CREATE-NET, with the objective of creating a framework that allows local communities and citizens (including the low-income population) to access community-oriented content and services by means of iDTV channels. SAMBA will also explore the potential uses of iDTV within mobile virtual communities and its possible impact on the creation of future services and business models for the fixed and mobile iDTV market. Throughout its duration, SAMBA will pay particular attention to human factors and usability issues, and will set up real field experiments to evaluate users' experiences and their social inclusion.
The post-doctoral researcher is expected to be involved in, or responsible for, the following:
- Research activities related to user centered design and user interface prototyping.
- Usability evaluation and testing
- Presentation of research results in the form of journal publications, conference presentations and project reports etc.
- International collaboration with the other European and Brazilian research groups involved in the project.
Qualifications
Successful candidates should have a Ph.D. in Human Computer Interaction or a related field, with an interest in iDTV. In particular, research experience in user-centered design, user requirements definition, user acceptance analysis, and/or usability testing is required. Applicants must also be able to work independently and interact with other members of the team. Strong interpersonal skills and good written and oral communication skills are also required.
Please send your resume and publication list electronically to:
Oscar Mayora: oscar.mayora@create-net.org
16:34 Posted in Research institutions & funding opportunities | Permalink | Comments (0) | Tags: funding opportunities
€9 billion injection to boost European ICT research
I write this post from cold (well, not as cold as I expected) Helsinki, where I am attending the “Information Society Technology 2006” conference.
The exhibition contains quite a lot of interesting stuff this year, from portable brain-computer interfaces (project PRESENCCIA) to 3D television.
Beyond applications, I am learning a lot about the funding opportunities offered by the 7th Framework Programme (FP7), the EU's chief instrument for funding scientific research and technological development over the period 2007 to 2013.
The good news is that the EU plans to invest over €9 billion in research on information and communications technologies. This is, by far, the largest single budget item in the FP7 programme.
If you want to discover more about the ICT research to be funded under the Seventh Framework Programme you may visit the dedicated website just launched on the CORDIS platform.
A further interesting initiative is Living Labs, which aims to set up a new European innovation infrastructure where users play an active role in innovation. The initiative was launched yesterday in Espoo, Finland.
From the infosociety website:
(20/11/2006) Living labs move research out of laboratories into real-life contexts to stimulate innovation. This allows citizens to influence research, design and product development. Users are encouraged to co-operate closely with researchers, developers and designers to test ideas and prototypes.
Functioning as Public-Private Partnerships, especially at regional and local level, living labs provide some advantages over "closed labs": They stimulate new ideas, provide concrete research challenges and allow for continuous validation of research results. At a pan-European level, a large-scale network of living labs could become a strong tool for making the innovation process of industry more efficient and dynamic by stimulating the involvement of citizens of differing cultures and societal backgrounds who can provide rich feedback in context on the use and impact of the technologies being researched.
The European Network of Living Labs is launched just as a large group of experts gathers in Finland for the IST Event 2006. Several conference sessions will explore in detail the living labs approach, and offer researchers across Europe the opportunity to become involved.
The concept has already been embraced by industry and other stakeholder organisations. Concrete examples of living labs already operating include the Helsinki Living Lab (in Arabianranta), Mobile City Bremen in Germany, the Botnia Living Lab in Sweden and Freeband in the Netherlands.
15:25 Posted in Research institutions & funding opportunities | Permalink | Comments (0) | Tags: funding opportunities
Nov 11, 2006
Computer- and robot-aided head surgery
Acta Neurochir Suppl. 2006;98:51-61
Authors: Wörn H
In this paper, new methods and devices for computer- and robot-based head surgery are presented. A computer-based planning system for CMF surgery allows the surgeon to plan complex trajectories on the head of the patient for operations in which bone segments are cut out and shifted. Different registration methods have been developed and tested. A surgical robot system for bone cutting on the head has been developed and evaluated on patients in the operating theatre. In the future, laser cutting of bone with a robot is expected to become a powerful new method for robot-based surgery. A 3D augmented reality system will also assist the surgeon by overlaying virtual anatomical structures onto the situs.
16:18 Posted in AI & robotics | Permalink | Comments (0) | Tags: robotics, cybertherapy
BCI as a tool to induce neuroplasticity
Brain-computer interface technology as a tool to augment plasticity and outcomes for neurological rehabilitation.
J Physiol. 2006 Nov 9
Authors: Dobkin BH
Brain-computer interfaces are a rehabilitation tool for tetraplegic patients that aim to improve quality of life by augmenting communication, control of the environment, and self-care. The neurobiology of both rehabilitation and BCI control depends upon learning to modify the efficacy of spared neural ensembles that represent movement, sensation, and cognition through progressive practice with feedback and reward. To serve patients, BCI systems must become safe, reliable, cosmetically acceptable, quickly mastered with minimal ongoing technical support, and highly accurate even in the face of mental distractions and the uncontrolled environment beyond a laboratory. BCI technologies may raise ethical concerns if their availability affects the decisions of patients who become locked-in with brain stem stroke or amyotrophic lateral sclerosis to be sustained with ventilator support. If BCI technology becomes flexible and affordable, volitional control of cortical signals could be employed for the rehabilitation of motor and cognitive impairments in hemiplegic or paraplegic patients by offering on-line feedback about cortical activity associated with mental practice, motor intention, and other neural recruitment strategies during progressive task-oriented practice. Clinical trials with measures of quality of life will be necessary to demonstrate the value of near-term and future BCI applications.
16:03 Posted in Brain-computer interface, Mental practice & mental simulation | Permalink | Comments (0) | Tags: brain-computer interface
Artificial gut
Via Frontal Cortex
The New York Times reports that British scientists have built an apparatus that simulates human digestion.
From the article:
Constructed from sophisticated plastics and metals able to withstand the corrosive acids and enzymes found in the human gut, the device may ultimately help in the development of super-nutrients, such as obesity-fighting foods that could fool the stomach into thinking it is full.
"There have been lots of jam-jar models of digestion before," said Dr. Martin Wickham of Norwich's Institute of Food Research, the artificial gut's chief designer, referring to the beakers of enzymes typically used to approximate the chemical reactions in the stomach.
Wickham's patented artificial gut is a two-part model that is slightly larger than a desktop computer. The top half consists of a funnel in which food, stomach acids and digestive enzymes are mixed. Once this hydration process is finished, the food gets ground down in a silver metal tube encased in a dark, transparent box.
Software sets the parameters of the artificial gut - how long food remains in a particular part of the stomach, predicted hormone responses at various stages, and whether it is an infant or adult gut.
(...)
With a capacity about half the size of an actual stomach, the artificial gut can "eat" roughly 24 ounces of food. To date, the most substantial meal it's enjoyed is vegetable soup.
"It's so realistic that it can even vomit," adds Wickham.
Read the full story here
14:15 Posted in Research tools | Permalink | Comments (0) | Tags: research tools
Nov 10, 2006
State of the Blogosphere
Re-blogged from Smart Mobs
Technorati has posted the State of the Blogosphere, October 2006. [via Joi Ito]
As of October 2006, about 100,000 new weblogs were created each day, which means that on average there was a slight quarter-over-quarter decrease in the number of new blogs created daily.
...The total posting volume of the blogosphere has leveled off somewhat, showing about 1.3 million postings per day, which is a little lower than what we were seeing last quarter but still about double the volume of this time last year. ...
20:16 Posted in Research tools | Permalink | Comments (0) | Tags: research tools
Get paid to blog with ReviewME!
The following is a paid review
ReviewMe is a service that pays bloggers to write about advertisers' products. It's similar to PayPerPost (an automated system that allows users to promote their web site, product, service, or company through the PayPerPost network of bloggers), but with better payouts and a focus on reviews. Notably, bloggers must disclose that the review is a paid advertisement (as I have done at the beginning of this post).
To determine the importance of a blog, ReviewMe uses an algorithm based on Alexa and Technorati data, and charges a different fee for each blog based on that calculation. Blogger payments range from $30 to $1,000 per post. The factors considered include the blog's theme, estimated traffic, link popularity, and estimated RSS subscribers.
Also, advertisers can purchase posts, but they cannot require that a post be positive: the blogger can write an honest opinion without fear of not being paid. The only requirement is that the review must be a minimum of 200 words.
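ReviewMe has not published its exact formula, but the kind of popularity-weighted pricing it describes can be sketched as follows. Every weight, scale factor, and function name below is invented purely for illustration; only the general shape (combine popularity metrics, scale to dollars, clamp to the advertised $30–$1,000 range) comes from the post above.

```python
# Purely illustrative sketch of a blog-pricing heuristic like the one
# ReviewMe describes. Weights and scaling are hypothetical; the real
# algorithm (based on Alexa and Technorati data) is not public.

def review_fee(daily_visits, inbound_links, rss_subscribers,
               min_fee=30, max_fee=1000):
    """Estimate a per-post review fee from rough popularity metrics."""
    # Combine the factors into a single popularity score.
    score = (0.5 * daily_visits / 1000
             + 0.3 * inbound_links / 100
             + 0.2 * rss_subscribers / 100)
    # Scale the score into dollars and clamp to the advertised range.
    fee = min_fee + score * 100
    return max(min_fee, min(max_fee, round(fee)))

# A mid-sized blog lands between the two extremes.
print(review_fee(daily_visits=2000, inbound_links=150, rss_subscribers=300))
```

The clamp is the only part the post states directly: no blog is priced below the floor or above the ceiling, however extreme its metrics.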
Reviewme is giving away $25,000 today to pay bloggers to write about the service.
Here are the four simple steps you have to follow in order to get paid to review products and services on your site:
- Submit your site for inclusion into our ReviewMe publisher network.
- If approved, your site will enter our ReviewMe marketplace and clients will purchase reviews from you.
- You decide to accept the review or not.
- You will be paid $20.00 to $200.00 for each completed review that you post on your site.
The company is backed by TechCrunch-sponsor
18:15 | Permalink | Comments (0)
BrainWaves
A great catch by the always-interesting NeuroFuture:
BrainWaves is a musical performance by cultured cortical cells interfacing with multielectrode arrays. Eight electrodes recorded neural patterns that were filtered to eight speakers after being sonified by robotic and human interpretation. Sound patterns followed neural spikes and waveforms, and also extended to video, with live visualizations of the music and neural patterns in front of a mesmerized audience. See a two minute video here. Teams from two research labs designed and engineered the project; read more from collaborator Gil Weinberg.
17:05 Posted in Cyberart, Neurotechnology & neuroinformatics | Permalink | Comments (0) | Tags: neuroinformatics, cyberart
Neuroscientist uses his understanding of the human brain to advance on a popular quiz show
Via Mind Hacks
Ogi Ogas, a doctoral student in the Department of Cognitive and Neural Systems at Boston University, has applied techniques from cognitive psychology to win $500,000 on the show 'Who Wants to Be a Millionaire?'.
These techniques take advantage of well-studied psychological processes such as priming and the structure of associations in memory.
Read how he describes his method in an article that appeared in Seed Magazine:
The first technique I drew upon was priming. The priming of a memory occurs because of the peculiar "connectionist" neural dynamics of our cortex, where memories are distributed across many regions and neurons. If we can recall any fragment of a pattern, our brains tend to automatically fill in the rest....
I used priming on my $16,000 question: "This past spring, which country first published inflammatory cartoons of the prophet Mohammed?" I did not know the answer. But I did know I had a long conversation with my friend Gena about the cartoons. So I chatted with [quiz show host] Meredith about Gena. I tried to remember where we discussed the cartoons and the way Gena flutters his hands. As I pictured how he rolls his eyes to express disdain, Gena's remark popped into my mind: "What else would you expect from Denmark?"
16:56 Posted in Research tools | Permalink | Comments (0) | Tags: research tool
Nov 09, 2006
DNART
via LiveScience (thanks to Johnatan Loroni, bioinformatics researcher)
Paul Rothemund, a researcher at Caltech, has developed a new technique that makes it possible to weave DNA strands into any desired two-dimensional shape or figure, which he calls "DNA origami." According to Rothemund, the technology could one day be used to construct tiny chemical factories or molecular electronics by attaching proteins and inorganic components to DNA circuit boards.
From the press release:
"The construction of custom DNA origami is so simple that the method should make it much easier for scientists from diverse fields to create and study the complex nanostructures they might want," Rothemund explains.
"A physicist, for example, might attach nano-sized semiconductor 'quantum dots' in a pattern that creates a quantum computer. A biologist might use DNA origami to take proteins which normally occur separately in nature, and organize them into a multi-enzyme factory that hands a chemical product from one enzyme machine to the next in the manner of an assembly line."
Reporting in the March 16th issue of Nature, Rothemund describes how long single strands of DNA can be folded back and forth, tracing a mazelike path, to form a scaffold that fills up the outline of any desired shape. To hold the scaffold in place, 200 or more DNA strands are designed to bind the scaffold and staple it together.
Each of the short DNA strands can act something like a pixel in a computer image, resulting in a shape that can bear a complex pattern, such as words or images. The resulting shapes and patterns are each about 100 nanometers in diameter, or about a thousand times smaller than the diameter of a human hair. The dots themselves are six nanometers in diameter. While the folding of DNA into shapes that have nothing to do with the molecule's genetic information is not a new idea, Rothemund's efforts provide a general way to quickly and easily create any shape. In the last year, Rothemund has created half a dozen shapes, including a square, a triangle, a five-pointed star, and a smiley face, each one several times more complex than any previously constructed DNA objects. "At this point, high-school students could use the design program to create whatever shape they desired," he says.
Once a shape has been created, adding a pattern to it is particularly easy, taking just a couple of hours for any desired pattern. As a demonstration, Rothemund has spelled out the letters "DNA," and has drawn a rough picture of a double helix, as well as a map of the western hemisphere in which one nanometer represents 200 kilometers.
Link to Live Science report on DNA art
16:34 Posted in Cyberart | Permalink | Comments (0) | Tags: cyberart
Nov 08, 2006
fMRI lie detection test raises ethical issues
Via Mind Hacks
A recent article published in The Washington Post focuses on the socio-ethical implications of the emerging neuroscience of lying. The article reports on a company called No Lie MRI Ltd that claims to offer "the first and only direct measure of truth verification and lie detection in human history".
From the article:
No Lie MRI's Web site has proclaimed that the company hopes to revolutionize truth telling in America, offering "objective, scientific, mental evidence, similar to the role in which DNA biological identification is used," to everyone from the FBI, CIA and NSA to the Department of Homeland Security. No Lie is not alone. Its Massachusetts competitor, Cephos Corp., has licensed competing fMRI lie detection technology from the Medical University of South Carolina.
The boundless desire for a way to dig through deception is why political consultant John Zogby, president of Zogby International, expects the new brain scanning devices to be in widespread use in the 2008 presidential election. He can clearly see a demand to discover what voters really think of candidates - and their commercials.
According to its proponents, brain-scan lie detection is now reliable enough that it may soon be admissible in court.
23:35 Posted in Research tools | Permalink | Comments (0) | Tags: research tools
Ars Virtua Artist-in-Residence (AVAIR)
Re-blogged from Networked performance
Ars Virtua Artist-in-Residence (AVAIR): Call for Proposals: Deadline November 21, 2006: Ars Virtua Gallery and New Media Center in Second Life is soliciting proposals for its artist-in-residence program. The deadline for submissions is November 21, 2006. Established and emerging artists will work within the 3d rendered environment of Second Life. Each 11-week residency will culminate in an exhibition and a community-based event. Residents will also receive a $400 stipend, training and mentorship.
Ars Virtua Artist-in-Residence (AVAIR) is an extended performance that examines what it means to reside in a place that has no physical location.
Ars Virtua presents artists with a radical alternative to "real life" galleries: 1) Since it does not physically exist, artists are not limited by physics, material budgets, building codes or landlords. Their only constraints are social conventions and (malleable, extensible) software. 2) The gallery is accessible 24 hours a day to a potentially infinite number of people in every part of the world simultaneously. 3) Because of the ever-evolving, flexible nature of Second Life, the "audience" is a far less predictable variable than one might find in a Real Life gallery. Residents will be encouraged to explore, experiment with and challenge traditional conventions of art making and distribution, value and the art market, artist and audience, space and place.
Application Process: Artists are encouraged to log in to Second Life and create an avatar BEFORE applying. Download the application requirements here: http://arsvirtua.com/residence. Finalists will be contacted for an interview. Interviews will take place from November 28-30.
23:13 Posted in Cyberart, Virtual worlds | Permalink | Comments (0) | Tags: cyberart, virtual worlds
Body as musical instrument
Dance and music go together. Intuitively, we know they have common elements, and while we cannot even begin to understand what they are or how they so perfectly complement one another, it is clear that they are both an expression of something deep and fundamental within all human beings. Both express things that words cannot; beyond intellect, they are perhaps two of the fundamental building blocks of human expression, common to the souls of all people. That is why, when we saw this machine which links the two, we knew there was something special brewing.
The GypsyMIDI is a unique instrument for motion-capture MIDI control: a machine that enables a human being to become a musical instrument (well, a musical instrument controller, to be exact), or a bunch of other things, depending on your imagination.
Read the full post on NP
23:05 Posted in Cyberart | Permalink | Comments (0) | Tags: creativity and computers
Driver fatigue alert wrist device
Via Medgadget
A future concept designed for the AA, this flexible rubber device uses motion combined with reaction time to determine whether or not you are suffering from driver fatigue. The device communicates with an RFID tag positioned in your car, and only starts to detect whether you are tired when you are in the car. The device can be bent to fit your wrist and has shape memory to stay in position, ensuring it will not fall off.
Designer: Daniel Ruffle
22:55 Posted in Wearable & mobile | Permalink | Comments (0) | Tags: wearable, mobile
Nov 07, 2006
SHOJI: Symbiotic Hosting Online Jog Instrument
From Pink Tentacle
Symbiotic Hosting Online Jog Instrument (SHOJI) is a system that monitors the feelings and behavior of the people in a room and relays the mood data to remote terminals, where it is displayed as full-color LED light.
In addition to constantly measuring the room’s environmental conditions, SHOJI terminals can detect the presence and movement of people, body temperature, and the nature of the activity in the room.
Read the full post on Pink Tentacle
23:21 Posted in Emotional computing | Permalink | Comments (0) | Tags: emotional computing
[meme.garden]
re-blogged from Networked Performance
[meme.garden], by Mary Flanagan, Daniel Howe, Chris Egert, Junming Mei, and Kay Chang, is an Internet service that blends software art and search tool to visualize participants' interests in prevalent streams of information, encouraging browsing and interaction between users in real time, through time. Utilizing the WordNet lexical reference system from Princeton University, [meme.garden] introduces concepts of temporality, space, and empathy into a network-oriented search tool. Participants search for words, which expand contextually through the use of a lexical database. English nouns, verbs, adjectives and adverbs are organized into floating synonym "seeds," each representing one underlying lexical concept. When participants "plant" their interests, each becomes a tree that "grows" over time. Each organism's leaves are linked to related streaming RSS feeds, and by interacting with their own and other participants' trees, participants create a contextual timescape in which interests can be seen growing and changing within an environment that endures.
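The WordNet-style expansion at the heart of [meme.garden] can be illustrated with a toy sketch: a search word is mapped to the "synonym seeds" (synsets) it belongs to, and each seed groups the words sharing one underlying concept. A real implementation would query WordNet itself; the tiny hand-made lexicon and the function names below are stand-ins invented for illustration.

```python
# Toy stand-in for a WordNet-style lexicon: each key names one underlying
# concept (a "synset"), each value is the set of words expressing it.
TOY_SYNSETS = {
    "tree.n.01": {"tree"},                       # woody plant
    "tree.n.02": {"tree", "hierarchy"},          # branching structure
    "grow.v.01": {"grow", "develop", "expand"},  # increase over time
}

def synonym_seeds(word):
    """Return each concept (synset id) whose member words include `word`."""
    return sorted(sid for sid, members in TOY_SYNSETS.items()
                  if word in members)

def expand(word):
    """Expand a search word into all other words sharing a concept with it."""
    related = set()
    for sid in synonym_seeds(word):
        related |= TOY_SYNSETS[sid]
    return sorted(related - {word})

print(synonym_seeds("tree"))  # a word can belong to several seeds
print(expand("grow"))         # contextual expansion of one search word
```

The key design point mirrored from the project description is that a single word may "plant" several seeds at once, since each seed represents one lexical concept rather than one word.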
23:10 Posted in Cyberart | Permalink | Comments (0) | Tags: cyberart