Jun 21, 2016
New book on Human Computer Confluence - FREE PDF!
Two pieces of good news for Positive Technology followers.
1) Our new book on Human Computer Confluence is out!
2) It can be downloaded for free here
Human-computer confluence refers to an invisible, implicit, embodied or even implanted interaction between humans and system components. New classes of user interfaces are emerging that make use of several sensors and are able to adapt their physical properties to the current situational context of users.
A key aspect of human-computer confluence is its potential for transforming human experience in the sense of bending, breaking and blending the barriers between the real, the virtual and the augmented, to allow users to experience their body and their world in new ways. Research on Presence, Embodiment and Brain-Computer Interface is already exploring these boundaries and asking questions such as: Can we seamlessly move between the virtual and the real? Can we assimilate fundamentally new senses through confluence?
The aim of this book is to explore the boundaries and intersections of the multidisciplinary field of HCC and discuss its potential applications in different domains, including healthcare, education, training and even arts.
DOWNLOAD THE FULL BOOK HERE AS OPEN ACCESS
Please cite as follows:
Andrea Gaggioli, Alois Ferscha, Giuseppe Riva, Stephen Dunne, Isabelle Viaud-Delmon (2016). Human computer confluence: transforming human experience through symbiotic technologies. Warsaw: De Gruyter. ISBN 9783110471120.
09:53 Posted in AI & robotics, Augmented/mixed reality, Biofeedback & neurofeedback, Blue sky, Brain training & cognitive enhancement, Brain-computer interface, Cognitive Informatics, Cyberart, Cybertherapy, Emotional computing, Enactive interfaces, Future interfaces, ICT and complexity, Neurotechnology & neuroinformatics, Positive Technology events, Research tools, Self-Tracking, Serious games, Technology & spirituality, Telepresence & virtual presence, Virtual worlds, Wearable & mobile | Permalink
May 26, 2016
From User Experience (UX) to Transformative User Experience (T-UX)
In 1999, Joseph Pine and James Gilmore wrote a seminal book titled “The Experience Economy” (Harvard Business School Press, Boston, MA) that theorized the shift from a service-based economy to an experience-based economy.
According to these authors, in the new experience economy the goal of a purchase is no longer to own a product (be it a good or a service), but to use it in order to enjoy a compelling experience. An experience, then, is a whole new type of offering: in contrast to commodities, goods and services, it is designed to be as personal and memorable as possible. Just as in a theatrical performance, companies stage meaningful events that engage customers in a memorable, personal way through activities that are rewarding in themselves.
Indeed, looking back over the past ten years, the concept of experience has become central to several fields, including tourism, architecture and – perhaps most relevant for this column – human-computer interaction, with the rise of “User Experience” (UX).
The concept of UX was introduced by Donald Norman in a 1995 article published in the CHI proceedings (D. Norman, J. Miller, A. Henderson: What You See, Some of What's in the Future, And How We Go About Doing It: HI at Apple Computer. Proceedings of CHI 1995, Denver, Colorado, USA). Norman argued that focusing exclusively on usability attributes (i.e. ease of use, efficiency, effectiveness) when designing an interactive product is not enough; one should take into account the whole experience of the user with the system, including the user’s emotional and contextual needs. Since then, the UX concept has assumed increasing importance in HCI. As McCarthy and Wright emphasized in their book “Technology as Experience” (MIT Press, 2004):
“In order to do justice to the wide range of influences that technology has in our lives, we should try to interpret the relationship between people and technology in terms of the felt life and the felt or emotional quality of action and interaction.” (p. 12).
However, according to Pine and Gilmore, experience may not be the last step in what they call the “Progression of Economic Value”. Speculating further into the future, they identify the “Transformation Economy” as the likely next phase. In their view, while experiences are essentially memorable events that stimulate the senses and the emotions, transformations go much further: they are the result of a series of experiences staged by companies to guide customers in learning, taking action and eventually achieving their aspirations and goals.
In Pine and Gilmore's terms, an aspirant is the individual who seeks guidance for personal change (e.g. a better figure, a new career, and so forth), while the provider of this change (a dietitian, a university) is an elicitor. The elicitor guides the aspirant through a series of experiences designed with a specific purpose and goals in mind. According to Pine and Gilmore, the main difference between an experience and a transformation is that the latter occurs when an experience is customized:
“When you customize an experience to make it just right for an individual - providing exactly what he needs right now - you cannot help changing that individual. When you customize an experience, you automatically turn it into a transformation, which companies create on top of experiences (recall that phrase: “a life-transforming experience”), just as they create experiences on top of services and so forth” (p. 244).
A further key difference between experiences and transformations concerns their effects: because an experience is inherently personal, no two people can have the same one. Likewise, no individual can undergo the same transformation twice: the second time it’s attempted, the individual would no longer be the same person (pp. 254-255).
But what will be the impact of this upcoming “transformation economy” on how people relate to technology? If the buzzword of the experience economy is “User Experience”, the buzzword of the next stage might be “User Transformation”.
Indeed, we can already see some initial signs of this shift. For example, Fitbit and similar self-tracking gadgets are starting to offer personalized advice to foster enduring changes in users’ lifestyles; another example comes from the fields of ambient intelligence and domotics, where there is an increasing focus on designing systems that learn from the user’s behaviour (e.g. by tracking the movements of an elderly person at home) in order to provide context-aware adaptive services (e.g. sending an alert when the user is at risk of falling).
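To make the idea of a context-aware adaptive service more concrete, here is a minimal Python sketch of the kind of rule such a system might apply. Everything in it – the ContextSample fields, the thresholds, the ten-minute window – is a hypothetical illustration, not the logic of any actual product.

```python
from dataclasses import dataclass

@dataclass
class ContextSample:
    """One reading from a hypothetical home-monitoring sensor (one per minute)."""
    room: str
    motion_level: float   # 0.0 (completely still) to 1.0 (very active)
    hour: int             # hour of day, 0-23

def fall_risk_alert(samples: list[ContextSample], still_threshold: float = 0.05) -> bool:
    """Flag prolonged stillness in a risky context (bathroom, or late at night).

    Toy rule only: a real system would learn thresholds from the user's own history.
    """
    risky = [s for s in samples
             if s.motion_level < still_threshold
             and (s.room == "bathroom" or s.hour >= 23 or s.hour <= 5)]
    return len(risky) >= 10  # e.g. ten minutes of stillness in a risky context

# Usage: feed the last N minutes of readings and notify a caregiver if needed.
readings = [ContextSample("bathroom", 0.01, 2) for _ in range(12)]
if fall_risk_alert(readings):
    print("ALERT: possible fall detected, notifying caregiver")
```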
The most important ICT step towards the transformation economy, however, could take place with the introduction of next-generation immersive virtual reality systems. Since these new systems are based on mobile devices (an example is the recent partnership between Oculus and Samsung), they are able to deliver VR experiences that incorporate information about the user's external and internal context (e.g. time, location, temperature, mood) by using the sensors embedded in the mobile phone.
By personalizing the immersive experience with context-based information, it might be possible to induce higher levels of involvement and presence in the virtual environment. In the case of cyber-therapeutic applications, this could translate into the development of more effective, transformative virtual healing experiences.
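As a purely illustrative sketch, the snippet below shows how a handful of context signals could be mapped onto scene parameters. The parameter names and rules are invented for the sake of the example and do not correspond to any particular VR engine.

```python
def personalize_vr_scene(hour: int, temperature_c: float, mood: str) -> dict:
    """Map a few context signals onto (hypothetical) VR scene parameters.

    Returns a plain dict; a real system would hand these values to the
    engine's lighting, audio and pacing controllers.
    """
    return {
        "lighting": "warm_sunset" if 18 <= hour <= 23 else "daylight",
        "ambient_sound": "rain" if temperature_c < 10 else "birdsong",
        "pacing": "slow" if mood in ("anxious", "stressed") else "normal",
    }

# Example: an evening relaxation session for a stressed user on a cold day.
print(personalize_vr_scene(hour=21, temperature_c=7.5, mood="stressed"))
# {'lighting': 'warm_sunset', 'ambient_sound': 'rain', 'pacing': 'slow'}
```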
Furthermore, the emergence of "symbiotic technologies", such as neuroprosthetic devices and neuro-biofeedback, is enabling a direct connection between the computer and the brain. Increasingly, these neural interfaces are moving from the biomedical domain to become consumer products. But unlike existing digital experiential products, symbiotic technologies have the potential to transform basic human experiences far more radically.
Brain-computer interfaces, immersive virtual reality and augmented reality and their various combinations will allow users to create “personalized alterations” of experience. Just as nowadays we can download and install a number of “plug-ins”, i.e. apps to personalize our experience with hardware and software products, so very soon we may download and install new “extensions of the self”, or “experiential plug-ins” which will provide us with a number of options for altering/replacing/simulating our sensorial, emotional and cognitive processes.
Such mediated recombinations of human experience will result from the application of existing neuro-technologies in completely new domains. Although virtual reality and brain-computer interfaces were originally developed for specific domains (e.g. military simulation, neurorehabilitation), their use has since been extended to other fields of application, ranging from entertainment to education.
In the field of biology, Stephen Jay Gould and Elisabeth Vrba (Paleobiology, 8, 4-15, 1982) defined “exaptation” as the process by which a feature acquires a function other than the one it was originally selected for. Likewise, the exaptation of neurotechnologies to the digital consumer market may lead to the rise of a novel “neuro-experience economy”, in which technology-mediated transformation of experience is the main product.
Just as a Genetically-Modified Organism (GMO) is an organism whose genetic material has been altered using genetic-engineering techniques, so we could define a Technologically-Modified Experience (TME) as a re-engineered experience resulting from the artificial manipulation of the neurobiological bases of sensorial, affective and cognitive processes.
Clearly, the emergence of the transformative neuro-experience economy will not happen in weeks or months, but rather in years. It will take some time before people find brain-computer devices on the shelves of electronics stores: most of these tools are still in the pre-commercial phase at best, and some exist only in laboratories.
Nevertheless, the mere possibility that such a scenario will sooner or later come to pass raises important questions that should be addressed before symbiotic technologies enter our lives: does the technological alteration of human experience threaten the autonomy of individuals, or the authenticity of their lives? How can we help individuals decide which transformations are good or bad for them?
Addressing these important questions will require the collaboration of many disciplines, including philosophy, computer ethics and, of course, cyberpsychology.
May 24, 2016
Virtual reality painting tool
12:07 Posted in Blue sky, Future interfaces, Virtual worlds | Permalink | Comments (0)
SignAloud: Gloves that Transliterate Sign Language into Text and Speech
10:25 Posted in Future interfaces, Wearable & mobile | Permalink | Comments (0)
Apr 27, 2016
Predictive Technologies: Can Smart Tools Augment the Brain's Predictive Abilities?
Aug 31, 2014
Information Entropy
Information – Entropy by Oliver Reichenstein
Will information technology affect our minds in the same way that our analogue technology affected the environment? Designers hold a key position in dealing with ever-increasing data pollution. We are mostly focused on speeding things up, on making sharing easier, faster, more accessible. But speed, usability and accessibility are no longer the main issues. The main issues are not technological; they are structural and processual. What we lack is clarity, correctness, depth, time. Are there counter-techniques we can employ to turn data into information, information into knowledge, and knowledge into wisdom?
Oliver Reichenstein — Information Entropy (SmashingConf NYC 2014) from Smashing Magazine on Vimeo.
Jun 30, 2014
Never do a Tango with an Eskimo
Apr 15, 2014
Avegant - Glyph Kickstarter - Wearable Retinal Display
Via Mashable
Move over Google Glass and Oculus Rift, there's a new kid on the block: Glyph, a mobile, personal theater.
Glyph looks like a normal headset and operates like one, too. That is, until you move the headband down over your eyes and it becomes a fully-functional visual visor that displays movies, television shows, video games or any other media connected via the attached HDMI cable.
Using Virtual Retinal Display (VRD), a technology that mimics the way we see light, the Glyph projects images directly onto your retina using one million micromirrors in each eyepiece. These micromirrors reflect the images onto the retina, producing reportedly crisp and vivid image quality.
22:56 Posted in Future interfaces, Telepresence & virtual presence, Virtual worlds, Wearable & mobile | Permalink | Comments (0)
Mar 03, 2014
By licking these electric ice cream cones, you can make music
From Wired
Ice cream can be the reward after a successful little league game, a consolation after a bad breakup, or, in the hands of gourmet geeks, a sweet musical instrument. Designers Carla Diana and Emilie Baltz recently whipped up a musical performance where a quartet of players jammed using just a quart of vanilla ice cream and some high-tech cones.
00:07 Posted in Creativity and computers, Cyberart, Future interfaces | Permalink | Comments (0)
Jan 23, 2014
Transparent display @MIT
The innovative system is described in a paper published in the journal Nature Communications, co-authored by MIT professors Marin Soljačić and John Joannopoulos, graduate student Chia Wei Hsu, and four others.
Abstract of Nature Communications paper:
The ability to display graphics and texts on a transparent screen can enable many useful applications. Here we create a transparent display by projecting monochromatic images onto a transparent medium embedded with nanoparticles that selectively scatter light at the projected wavelength. We describe the optimal design of such nanoparticles, and experimentally demonstrate this concept with a blue-color transparent display made of silver nanoparticles in a polymer matrix. This approach has attractive features including simplicity, wide viewing angle, scalability to large sizes and low cost.
21:11 Posted in Future interfaces | Permalink | Comments (0)
Jan 20, 2014
The Future of Gesture Control - Introducing Myo
Thalmic Labs at TEDxToronto
23:17 Posted in Enactive interfaces, Future interfaces, Wearable & mobile | Permalink | Comments (0)
Jan 12, 2014
Wearable Pregnancy Ultrasound
Melody Shiue, an industrial design graduate from the University of New South Wales, is proposing a wearable fetal ultrasound system to enhance maternal-fetal bonding and serve as a reassurance window. It is an e-textile-based apparatus that uses 4D ultrasound. The latest stretchable-display technology is also employed over the abdominal region, allowing other members of the family, especially the father, to connect with the foetus. PreVue not only gives you the opportunity to observe and interact with the baby's physical growth, but also an early sense of its personality as you see it yawning, rolling, smiling and so on, bringing you closer until the day it finally rests in your arms.
More information at Tuvie
16:25 Posted in Future interfaces, Wearable & mobile | Permalink | Comments (0)
Dec 24, 2013
NeuroOn mask improves sleep and helps manage jet lag
Via Medgadget
A group of Polish engineers is working on a smart sleeping mask that they hope will allow people to get more out of their resting time, as well as support unusual sleeping schedules that would particularly benefit those who are often on call. The NeuroOn mask will have an embedded EEG for brain-wave monitoring, EMG for detecting muscle movement on the face, and sensors that track eye movements to tell whether the wearer is in REM sleep. The team is currently raising money on Kickstarter, where you can pre-order your own NeuroOn once it is developed into a final product.
Dec 08, 2013
iMirror
Take back your mornings with the iMirror – the interactive mirror for your home. Watch the video for a live demo!
23:26 Posted in Augmented/mixed reality, Future interfaces | Permalink | Comments (0)
Nov 20, 2013
inFORM
inFORM is a Dynamic Shape Display developed by MIT Tangible Media Group that can render 3D content physically, so users can interact with digital information in a tangible way.
inFORM can also interact with the physical world around it, for example moving objects on the table’s surface.
Remote participants in a video conference can be displayed physically, allowing for a strong sense of presence and the ability to interact physically at a distance.
Nov 16, 2013
Phonebloks
Phonebloks is a modular smartphone concept created by Dutch designer Dave Hakkens to reduce electronic waste. By attaching individual third-party components (called "bloks") to a main board, a user would create a personalized smartphone. These bloks can be replaced at will if they break or the user wishes to upgrade.
15:24 Posted in Future interfaces, Wearable & mobile | Permalink | Comments (0)
Aug 07, 2013
What Color Is Your Night Light? It May Affect Your Mood
When it comes to some of the health hazards of light at night, a new study suggests that the color of the light can make a big difference.
Read full story on Science Daily
13:46 Posted in Emotional computing, Future interfaces, Research tools | Permalink | Comments (0)
Mar 03, 2013
Brain-to-brain communication between rats achieved
From Duke Medicine News and Communications
Researchers at Duke University Medical Center in the US report in the February 28, 2013 issue of Scientific Reports the successful wiring together of sensory areas in the brains of two rats. The result of the experiment is that one rat will respond to the experiences to which the other is exposed.
The results of these projects suggest the future potential for linking multiple brains to form what the research team is calling an "organic computer," which could allow sharing of motor and sensory information among groups of animals.
"Our previous studies with brain-machine interfaces had convinced us that the rat brain was much more plastic than we had previously thought," said Miguel Nicolelis, M.D., PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine. "In those experiments, the rat brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we asked was, ‘if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?’"
To test this hypothesis, the researchers first trained pairs of rats to solve a simple problem: to press the correct lever when an indicator light above the lever switched on, which rewarded the rats with a sip of water. They next connected the two animals' brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.
One of the two rodents was designated as the "encoder" animal. This animal received a visual cue that showed it which lever to press in exchange for a water reward. Once this “encoder” rat pressed the right lever, a sample of its brain activity that coded its behavioral decision was translated into a pattern of electrical stimulation that was delivered directly into the brain of the second rat, known as the "decoder" animal.
The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward. Therefore, to press the correct lever and receive the reward it craved, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain interface.
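The Python sketch below is only a schematic illustration of the encoder-to-decoder mapping described above: decode the encoder's cortical activity into a binary choice, then translate that choice into a stimulation pattern for the decoder. The firing-rate model, thresholds and pulse counts are invented for the example and do not reproduce the actual analysis used in the study.

```python
import random

def encode_decision(firing_rates: list[float], threshold: float = 20.0) -> str:
    """Classify the encoder's motor-cortex activity into a binary choice.

    Toy rule: mean firing rate above a threshold means 'left lever'.
    """
    mean_rate = sum(firing_rates) / len(firing_rates)
    return "left" if mean_rate > threshold else "right"

def stimulation_pattern(choice: str) -> list[int]:
    """Translate the decoded choice into a train of microstimulation pulses.

    Here a burst of pulses signals 'left'; few or no pulses signals 'right'.
    """
    return [1] * 40 if choice == "left" else [1] * 2

# Simulated trial: the encoder presses 'left'; the decoder receives the pulse
# train and applies a simple pulse-count rule to pick its own lever.
encoder_rates = [random.gauss(25, 3) for _ in range(32)]   # spikes/s per electrode
choice = encode_decision(encoder_rates)
pulses = stimulation_pattern(choice)
decoder_choice = "left" if sum(pulses) > 10 else "right"
print(f"encoder chose {choice!r}, decoder inferred {decoder_choice!r}")
```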
The researchers then conducted trials to determine how well the decoder animal could decipher the brain input from the encoder rat to choose the correct lever. The decoder rat ultimately achieved a maximum success rate of about 70 percent, only slightly below the possible maximum success rate of 78 percent that the researchers had theorized was achievable based on success rates of sending signals directly to the decoder rat’s brain.
Importantly, the communication provided by this brain-to-brain interface was two-way. For instance, the encoder rat did not receive a full reward if the decoder rat made a wrong choice. The result of this peculiar contingency, said Nicolelis, led to the establishment of a "behavioral collaboration" between the pair of rats.
"We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behavior to make it easier for its partner to get it right," Nicolelis said. "The encoder improved the signal-to-noise ratio of its brain activity that represented the decision, so the signal became cleaner and easier to detect. And it made a quicker, cleaner decision to choose the correct lever to press. Invariably, when the encoder made those adaptations, the decoder got the right decision more often, so they both got a better reward."
In a second set of experiments, the researchers trained pairs of rats to distinguish between a narrow and a wide opening using their whiskers. If the opening was narrow, they were taught to nose-poke a water port on the left side of the chamber to receive a reward; for a wide opening, they had to poke a port on the right side.
The researchers then divided the rats into encoders and decoders. The decoders were trained to associate stimulation pulses with the left reward poke as the correct choice, and an absence of pulses with the right reward poke as correct. During trials in which the encoder detected the opening width and transmitted the choice to the decoder, the decoder had a success rate of about 65 percent, significantly above chance.
To test the transmission limits of the brain-to-brain communication, the researchers placed an encoder rat in Brazil, at the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN), and transmitted its brain signals over the Internet to a decoder rat in Durham, N.C. They found that the two rats could still work together on the tactile discrimination task.
"So, even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate," said Miguel Pais-Vieira, PhD, a postdoctoral fellow and first author of the study. "This tells us that it could be possible to create a workable, network of animal brains distributed in many different locations."
Nicolelis added, "These experiments demonstrated the ability to establish a sophisticated, direct communication linkage between rat brains, and that the decoder brain is working as a pattern-recognition device. So basically, we are creating an organic computer that solves a puzzle."
"But in this case, we are not inputting instructions, but rather only a signal that represents a decision made by the encoder, which is transmitted to the decoder’s brain which has to figure out how to solve the puzzle. So, we are creating a single central nervous system made up of two rat brains,” said Nicolelis. He pointed out that, in theory, such a system is not limited to a pair of brains, but instead could include a network of brains, or “brain-net.” Researchers at Duke and at the ELS-IINN are now working on experiments to link multiple animals cooperatively to solve more complex behavioral tasks.
"We cannot predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves," continued Nicolelis. Such a connection might even mean that one animal would incorporate another's sense of "self," he said.
"In fact, our studies of the sensory cortex of the decoder rats in these experiments showed that the decoder's brain began to represent in its tactile cortex not only its own whiskers, but the encoder rat's whiskers, too. We detected cortical neurons that responded to both sets of whiskers, which means that the rat created a second representation of a second body on top of its own." Basic studies of such adaptations could lead to a new field that Nicolelis calls the "neurophysiology of social interaction."
Such complex experiments will be enabled by the laboratory's ability to record brain signals from almost 2,000 brain cells at once. Within the next five years, the researchers hope to record the electrical activity produced simultaneously by 10,000-30,000 cortical neurons.
Such massive brain recordings will enable more precise control of motor neuroprostheses—such as those being developed by the Walk Again Project—to restore motor control to paralyzed people, Nicolelis said.
More to explore:
Sci. Rep. 3, 1319 (2013). PUBMED
14:20 Posted in Brain-computer interface, Future interfaces | Permalink | Comments (0)
2nd Summer School on Human Computer Confluence
Date: 17th, 18th and 19th July 2013
Venue: IRCAM
Location: Paris, France
Website: http://www.ircam.fr/
The 2nd HCC summer school aims to share scientific knowledge and experience among participants, enhance and stimulate interdisciplinary dialogue as well as provide further opportunities for co-operation within the study domains of Human Computer Confluence.
The topics of the summer school will be framed around the following issues:
• re-experience yourself,
• experience being others,
• experience being together in more powerful ways,
• experience other environments,
• experience new senses,
• experience abstract data spaces.
The 2nd HCC summer school will make the most of the research interests and special facilities of the IRCAM institute, a place dedicated to the coupling of art with the sciences of sound and media. Special attention will be given to the following thematic categories:
• Musical interfaces
• Interactive sound design
• Sensorimotor learning and gesture-sound interactive systems
• Crowdsourcing and human computation approaches in artistic applications
The three-day summer school will include invited lectures by experts in the field, a round-table and practical workshops. During the workshops, participants will engage in hands-on HCC group projects that they will present at the end of the summer school.
Program committee
• Isabelle Viaud-Delmon, Acoustic and cognitive spaces team, CNRS - IRCAM, France.
• Andrea Gaggioli, Department of Psychology, UCSC, Milan, Italy.
• Stephen Dunne, Neuroscience Department, STARLAB, Barcelona, Spain.
• Alois Ferscha, Pervasive computing lab, Johannes Kepler Universitat Linz, Austria.
• Fivos Maniatakos, Acoustic and Cognitive Spaces Group, IRCAM, France.
Organisation committee
• Isabelle Viaud-Delmon, IRCAM
• Hugues Vinet, IRCAM
• Marine Taffou, IRCAM
• Sylvie Benoit, IRCAM
• Fivos Maniatakos, IRCAM
14:00 Posted in Future interfaces, Positive Technology events | Permalink | Comments (0)
Aug 04, 2012
The Age of ‘Wearatronics’
Medgadget has an interesting article on the rise of ‘Wearatronics’, a new trend in which new materials and interconnects have made circuit assemblies flexible and, as a result, embeddable. Such flexible electronic arrays can be embedded into textiles in order to, for example, measure the wearer’s vital signs or even generate and store power.
According to GigaOm's research report The wearable-computing market: a global analysis, the wearatronics market for health and fitness products alone is estimated to reach 170 million devices within the next five years.
In this video, Bloomberg's Sheila Dharmarajan reports on the outlook for wearable electronics on Bloomberg Television's "Bloomberg West." (Source: Bloomberg)
Also interesting is this TED talk by David Icke, who creates breathable, implantable microcomputers that conform to the human body and can be used for a variety of medical applications.
18:10 Posted in Future interfaces, Pervasive computing, Wearable & mobile | Permalink | Comments (0)