Jul 08, 2006
According to The Herald, Cambridge professor Peter Robinson has developed a prototype of an “emotionally aware computer” that uses a camera to capture images of the user’s face, then determines facial expressions and infers the user’s mood.
From the report:
“Imagine a computer that could pick the right emotional moment to sell you something,” says Peter Robinson, of Cambridge University. “Imagine a future where websites and mobile phones could read our minds and react to our moods.”
It sounds like Orwellian fiction but this week, Robinson, a professor of computer technology, unveiled a prototype for just such a “mind-reading” machine. The first emotionally aware computer is on trial at the Royal Society Festival of Science in London…
Once the software is perfected, Robinson believes it will revolutionise marketing. Cameras will be on computer monitors in internet cafes and behind telescreens in bars and waiting rooms. Computers will process our image and respond with adverts that connect to how we’re feeling.
Jul 05, 2006
Re-blogged from Networked Performance
The device is a substitute, personal container for the emotions of users who are unable, or unwilling, to experience life through their own emotional perceptions. It looks like a Walkman-type device with headphones, except that it plays sound outside, rather than inside, the head. The device facilitates the user's emotional communication with the surrounding world: it is not designed for personal use only, but is equipped with a tiny amplifier.
The memory of the SD Audio Player chip card holds a large collection of recordings of authentic human emotions. For instance, if the user finds himself in a situation where he has to argue with someone, yet does not want to get into a confrontation and waste his own emotions, he locates a password on his SD Audio Player representing an appropriate emotional response, which he then applies accordingly.
The SD Audio Player can also record and thus appropriate other people's emotions: sniveling, peevishness, sobbing, moaning, crying, gradual emotional collapse, breakdown, yelling by a beaten person, the state of mind between laughter and crying, the hysterical family argument from Fellini's film Amarcord, pubescent giggling, comforting and fondling of a baby, a feeling of well-being, enthusiastic effusions, wearing somebody out, cuddling, soothing, etc. Such recordings, including those from movies, can be further edited and modified on a computer. In this way, the user can appropriate the emotions that are conveyed by celebrities and other prominent individuals.
Jul 03, 2006
the "wish project bar" is a tool which measures long-term goals or whishes, as it creates an emotional link to the passing of time. the progress bar runs for 18 years with each band represents 1 year, serving as a gentle reminder & requiring very little attention. Users can make a wish, set the time by turning the top cap & then leave to do its thing.
Jun 06, 2006
Re-blogged from Mocoloco
The Blushing Light, designed by Nadine Jarvis and Jayne Potter, blushes in response to the emotional pitch of a mobile phone conversation. During a call, the lamp is activated by the electromagnetic field (EMF) emitted by the phone, and it continues blushing for 5 minutes after the call has ended, prolonging the memory of the otherwise transient conversation.
Jun 04, 2006
Re-blogged from Prototype/Interaction Design Cluster
emosive (formerly e:sense) is a new service for mobile devices which allows capturing, storing and sharing fleeting emotional experiences. It is based on the Cognitive Priming theory: as we become more immersed in digital media through our mobile devices, our personal media inventories constantly act as memory aids, “priming” us to better recollect associative, personal (episodic) memories when facing an external stimulus. Because we are mobile and in a dynamic environment, these recollections quickly move away from us, emotionally as well as in time. Counting on the fact that the personal media inventories of the near future will be accessed from mobile devices and shared with a close collective, emosive bundles text, sound and image animation to allow capturing these fleeting emotional experiences, then sharing and reliving them with cared-for others. Playfully stemming from the thin, technical jargon of the mobile world (SMS, MMS), emosive proposes a new, light format of instant messages, dubbed “IFM” – Instant Feeling Messages.
May 15, 2006
In Aesthetic Computing, key scholars and practitioners from art, design, computer science, and mathematics lay the foundations for a discipline that applies the theory and practice of art to computing. Aesthetic computing explores the way art and aesthetics can play a role in different areas of computer science. One of its goals is to modify computer science by the application of the wide range of definitions and categories normally associated with making art. For example, structures in computing might be represented using the style of Gaudi or the Bauhaus school. This goes beyond the usual definition of aesthetics in computing, which most often refers to the formal, abstract qualities of such structures--a beautiful proof, or an elegant diagram. The contributors to this book discuss the broader spectrum of aesthetics--from abstract qualities of symmetry and form to ideas of creative expression and pleasure--in the context of computer science. The assumption behind aesthetic computing is that the field of computing will be enriched if it embraces all of aesthetics. Human-computer interaction will benefit--"usability," for example, could refer to improving a user's emotional state--and new models of learning will emerge.
Aesthetic Computing approaches its subject from a variety of perspectives. After defining the field and placing it in its historical context, the book looks at art and design, mathematics and computing, and interface and interaction. Contributions range from essays on the art of visualization and "the poesy of programming" to discussions of the aesthetics of mathematics throughout history and transparency and reflectivity in interface design.
May 13, 2006
From the project's web site:
MoBeeline is a compound word - Mobile + Beeline.
Our mobile service makes a straight line between two or more places.
This project is an emotional mobile service based on wearable technology. Its basic focus is to stimulate people's emotions through an interaction between mobile and wearable technology, and to develop a social network service between friends. The main goal is to create a wearable Bluetooth accessory that can receive data from a mobile phone. For example, assume there are two mobile phone users. One user can send operative directions to the other's clothes. Without having to meet, the two users can share their feelings and emotions by sending signals to each other's clothes. Using the service, they can change the colors of each other's garments, display certain patterns, or send emoticons to LEDs on the garment.
Watch the Video
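The garment-side behavior described above is easy to picture as a lookup from a friend's signal to an LED instruction. A minimal sketch, assuming invented message codes and patterns (nothing here is from the actual MoBeeline protocol):

```python
# Hypothetical sketch of the MoBeeline idea: a code received over Bluetooth
# from a friend's phone is mapped to a color or emoticon pattern on the
# garment's LEDs. The codes and patterns are invented for illustration.
PATTERNS = {
    "hug": ("pulse", (255, 105, 180)),    # soft pink pulse
    "smile": ("emoticon", ":)"),          # emoticon shown on the LED matrix
    "miss_you": ("fade", (120, 0, 255)),  # slow violet fade
}

def render_signal(code):
    """Translate a friend's signal code into an LED instruction."""
    return PATTERNS.get(code, ("steady", (255, 255, 255)))  # white = unknown
```

A real implementation would sit behind a Bluetooth serial link on the garment's microcontroller; the lookup itself is the whole "shared language" the project describes.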
May 10, 2006
From We Feel Fine project's website
Since August 2005, We Feel Fine, An Exploration of Human Emotion, in Six Movements, has been harvesting human feelings from a large number of weblogs. Every few minutes, the system searches the world's newly posted blog entries for occurrences of the phrases "I feel" and "I am feeling". When it finds such a phrase, it records the full sentence, up to the period, and identifies the "feeling" expressed in that sentence (e.g. sad, happy, depressed, etc.). Because blogs are structured in largely standard ways, the age, gender, and geographical location of the author can often be extracted and saved along with the sentence, as can the local weather conditions at the time the sentence was written. All of this information is saved.
The result is a database of several million human feelings, increasing by 15,000 - 20,000 new feelings per day. At its core, We Feel Fine - by Jonathan Harris & Sepandar Kamvar - is an artwork authored by everyone. It will grow and change as we grow and change, reflecting what's on our blogs, what's in our hearts, what's in our minds. We hope it makes the world seem a little smaller, and we hope it helps people see beauty in the everyday ups and downs of life.
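The harvesting step the project describes, scanning new blog entries for "I feel" / "I am feeling" and recording the full sentence plus the feeling word, can be sketched in a few lines. This is my own rough reconstruction, not the project's actual code; the phrase pattern and the feeling list are assumptions:

```python
import re

# Capture a whole sentence (up to the period) containing "I feel",
# "I'm feeling", "I am feeling", etc.
FEELING_PATTERN = re.compile(
    r"([^.!?]*\b[Ii](?: am|'m)? feel(?:ing)?\b[^.!?]*[.!?])"
)

# A tiny stand-in for the project's list of recognized feelings.
KNOWN_FEELINGS = {"sad", "happy", "depressed", "fine", "lonely", "excited"}

def harvest_feelings(post_text):
    """Return (sentence, feeling) pairs found in a blog post."""
    results = []
    for sentence in FEELING_PATTERN.findall(post_text):
        words = {w.strip(".,!?").lower() for w in sentence.split()}
        feeling = next((f for f in KNOWN_FEELINGS if f in words), None)
        if feeling:
            results.append((sentence.strip(), feeling))
    return results
```

The real system additionally scrapes age, gender, location and weather from the blog's standard page structure, which is metadata extraction rather than sentence matching.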
Apr 21, 2006
Via Smart Mobs (New Scientist)
New Scientist has an article about a piece of software called MoodViews that tracks mood swings across the 'blogosphere' and pinpoints the events behind them. MoodViews was created by Gilad Mishne and colleagues at the University of Amsterdam, The Netherlands. At present, MoodViews consists of three components, each offering a different view of global mood levels, i.e. the aggregate of the various moods across all postings:
- Moodgrapher tracks the global mood levels,
- Moodteller predicts them, and
- Moodsignals helps in understanding the underlying reasons for mood changes.
From the MoodViews website:
"Check out the impact of global events on global moods. Find out whether it is true that people drink more during the weekend. Observe states-of-mind with a cyclic nature; e.g., people feel energetic in the mornings and relaxed in the evening"
Apr 16, 2006
Via the Observer
30-year-old artist Christian Nold has created an "emotion mapping" device that allows people to compare their moods with their surroundings. It measures not just the major reactions that tend to stick in the memory, but also the degrees of stimulation caused by speaking to a stranger, crossing the road or listening to birdsong. Emotion mapping combines two existing technologies: a galvanic skin response sensor, which records the changing sweat levels on the skin as a measure of mental arousal, and the Global Positioning System (GPS). By calling up data from the finger cuffs, emotion mapping displays the user's fluctuating level of arousal, expressed as peaks and troughs along the route. So a walk down a country lane might produce only a mild curve, but dashing across a busy road or being confronted by a mugger might show up as a sudden spike.
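The core data fusion is simple to picture: every arousal sample from the skin sensor gets stamped with a GPS fix, and the map highlights the samples that stand out. A minimal sketch (the threshold and the normalized arousal scale are my assumptions, not Nold's actual processing):

```python
# Pair galvanic skin response (arousal) samples with GPS fixes, then flag
# the spikes that would show up as peaks along the walked route.
def find_arousal_spikes(track, threshold=0.7):
    """track: list of (lat, lon, arousal in 0..1) -> spike coordinates."""
    return [(lat, lon) for lat, lon, arousal in track if arousal >= threshold]

walk = [
    (51.52, -0.10, 0.2),  # quiet country-lane stretch: mild curve
    (51.53, -0.11, 0.9),  # dashing across a busy road: sudden spike
    (51.54, -0.12, 0.3),
]
spikes = find_arousal_spikes(walk)
```

Plotting the full (lat, lon, arousal) track as a height field is essentially what the Google Earth version of the emotion map does.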
And here is the tool and what Christian did with it. It's actually worth seeing, especially the "Google Earth" version of the emotion map:
Apr 10, 2006
Via Linux devices
The "emotional lamp," is a WiFi-connected device that can be programmed to respond to real-world events by emanating sequences of gentle color.
Unlike a telephone or television, the lamp presents information without making intrusive or extensive demands on the user's time. Messages and information are diffused subtly into the ambient environment, communicated through "color changes and their rate/rhythm of posting."
Customizable, built-in functions include multi-day weather forecasts, stock market monitoring, traffic conditions on a daily commute route, receipt of a large number of emails or email from an important person, or Web site updates containing specified key words. Additional built-in functions are planned.
Personalization features enable the creation of "bouquets" of friends authorized to interact with the Dal lamp through email, SMS, a Dal lamp of their own, or a telephone gateway service maintained by Violet. From the Violet Website: "The messages are colored animations that can be created for each type of emotion you want to show. A personal language and grammar can be created between two persons: only they know what the lamp is expressing."
The Dal lamp has been exhibited at some of the world's most prestigious museums, including the Centre Pompidou in Paris and The City of Science and Industry in Seoul, Korea. It received the "Star of the Observeur de design, 2004," a design award from the French Agency for the Promotion of Industrial Creations.
Mar 30, 2006
The "emotional social intelligence prosthetic" device, which El Kaliouby is constructing along with MIT colleagues Rosalind Picard and Alea Teeters, consists of a camera small enough to be pinned to the side of a pair of glasses, connected to a hand-held computer running image recognition software plus software that can read the emotions these images show. If the wearer seems to be failing to engage his or her listener, the software makes the hand-held computer vibrate.
Mar 20, 2006
emosive is a service for mobile devices which allows capturing, storing and sharing fleeting emotional experiences. It is based on the Cognitive Priming theory: as we become more immersed in digital media through our mobile devices, our personal media inventories constantly act as memory aids, "priming" us to better recollect associative, personal (episodic) memories when facing an external stimulus. Because we are mobile and in a dynamic environment, these recollections quickly move away from us, emotionally as well as in time. emosive bundles text, sound and image animation to allow capturing these fleeting emotional experiences, then sharing and reliving them with cared-for others. emosive proposes a new format of instant messages, dubbed IFM – Instant Feeling Messages.
While walking in the park and listening to a verse from his and his girlfriend Tina’s favorite tune – Madonna’s Little Star (“Never forget how to dream, Butterfly”), Jake sees a butterfly on a flower. Primed by the romantic musical immersion, Jake notices the colors of the butterfly and immediately loads a memory of Tina’s same-colored summer dress. Jake quickly clicks the emosive shortcut key sequence on his device. He snaps a photo of the butterfly and tags the image as "Butterfly". As Jake walks around the city, he captures other fleeting moments, making sure they are tagged to correspond with lyric words. He even adds some tagged images from his Flickr account. He then "wraps" everything as an IFM, previews it and sends it to Tina. When Tina accepts the IFM, it will stream to her phone and synchronize the tune and the images based on the tagged lyric words. The stored IFM can also be viewed effectively as an emosive experience from any web-enabled browser.
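As a thought experiment, the IFM Jake "wraps" could be represented as a tune plus a set of image cues keyed to lyric words; the receiving phone then shows each image when the matching word comes up in the song. All the field names and the structure below are invented for illustration (the actual IFM format was never published here):

```python
# A guess at an IFM bundle as plain data: the tune, plus images keyed to
# the lyric words they were tagged with.
def build_ifm(tune, tagged_images):
    """tagged_images: {lyric_word: image_file} -> a shareable IFM dict."""
    return {
        "format": "IFM",
        "tune": tune,
        "cues": [
            {"lyric_word": word, "image": image}
            for word, image in sorted(tagged_images.items())
        ],
    }

ifm = build_ifm("Little Star", {"Butterfly": "butterfly.jpg"})
```

Synchronized playback on Tina's phone would then just be a lookup from the currently sung lyric word into `cues`.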
Mar 16, 2006
Via Pink Tentacle
KOTOHANA is a flower-shaped terminal that communicates human emotions remotely using LED light. Its LEDs change color according to the emotions felt by the remote person. The remote person's emotional state is inferred by analysing affective correlates of the voice; the results of the analysis are sent via wireless LAN to the other terminal, where they are expressed as LED light. KOTOHANA is a joint project of NEC, NEC Design and SGI Japan.
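The last step of that pipeline, turning an inferred emotion label into an LED color on the remote flower, is a straightforward mapping. The palette and emotion labels below are my own invention, not NEC's actual color scheme:

```python
# Illustrative mapping from an inferred voice emotion to an (R, G, B)
# color for the remote terminal's LEDs.
EMOTION_COLORS = {
    "joy": (255, 200, 0),        # warm yellow
    "sadness": (0, 80, 255),     # blue
    "calm": (0, 255, 120),       # green
    "excitement": (255, 0, 60),  # red
}

def led_color(emotion):
    """Return the LED color for an emotion label, white if unrecognized."""
    return EMOTION_COLORS.get(emotion, (255, 255, 255))
```

The interesting engineering lives upstream, in the voice analysis; once an emotion label exists, sending it over wireless LAN and applying a table like this is all the receiving flower needs to do.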
Feb 11, 2006
Jan 27, 2006
Jan 20, 2006
Jan 16, 2006
Via smart mobs
Researchers at the Fraunhofer Institute are working on a system that should be capable of estimating human emotions. Taking advantage of the latest developments in image analysis, sensors and psychophysiology, their ultimate goal is to train computers to interpret users' emotions and to respond accordingly.
Read the full press release from the Institute website
Jan 09, 2006
A paper by Andor Dornbush, Kevin Fisher, Kyle McKay, Alex Prikhodko, and Zary Segall describes a mobile MP3 player, the XPod, which is able to automatically select the song best suited to the emotive situation of the user.
Here is an excerpt from the article (I am quoting it from Nicholas' blog Pasta and Vinegar).
…the notion of collecting human emotion and activity information from the user, and explore how this information could be used to improve the user experience with mobile music players.
…a mobile MP3 player, XPod, which is able to automate the process of selecting the song best suited to the emotion and the current activity of the user. The XPod concept is based on the idea of automating much of the interaction between the music player and its user.
After an initial training period, the XPod is able to use its internal algorithms to make an educated selection of the song that would best fit its user’s emotion and situation. We use the data gathered from a streaming version of the BodyMedia SenseWear to detect different levels of user activity and emotion. After determining the state of the user, the neural network engine compares the user’s current state, time, and activity levels to past user song preferences matching the existing set of conditions and makes a musical selection.

The XPod system was trained to play different music based on the user’s activity level. A simple pattern was used so the state-dependent customization could be verified. XPod successfully learned the pattern of listening behavior exhibited by the test user. As the training proceeded, the XPod learned the desired behavior and chose music to match the preferences of the test user. XPod automates the process of choosing music best suited for a user’s current activity. The success of the initial implementation of XPod concepts provides the basis for further exploration of human- and emotion-aware mobile music players.
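The selection step the excerpt describes, matching the user's current state against past listening, can be caricatured without the neural network: just pick the song whose remembered activity level is closest to the current one. The training data and songs below are invented, and the real XPod learned from richer BodyMedia SenseWear features than a single activity number:

```python
# Toy stand-in for XPod's selection step: nearest-neighbor match on a
# single activity-level feature instead of the paper's neural network.
def pick_song(preferences, activity_level):
    """preferences: list of (activity_level, song) from past listening.

    Returns the song whose remembered activity level is closest to the
    user's current one.
    """
    return min(preferences, key=lambda p: abs(p[0] - activity_level))[1]

history = [(0.1, "ambient"), (0.5, "indie pop"), (0.9, "drum and bass")]
song = pick_song(history, activity_level=0.85)
```

The point of the sketch is the shape of the loop: sense the user's state, compare it to past (state, song) pairs, and play the best match, which is exactly what the paper automates.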
Dec 23, 2005
Psychotherapists have begun to observe the emergence of “new addictions.” These new addictions, which are directly connected to our advancing technological environment, have taken root over the past few years, and patients experiencing the negative consequences are increasingly presenting for treatment in psychotherapy practices.
Learn more about cyberaddiction here