Jun 21, 2016
New book on Human Computer Confluence - FREE PDF!
Two pieces of good news for Positive Technology followers.
1) Our new book on Human Computer Confluence is out!
2) It can be downloaded for free here
Human-computer confluence refers to an invisible, implicit, embodied or even implanted interaction between humans and system components. New classes of user interfaces are emerging that make use of several sensors and are able to adapt their physical properties to the current situational context of users.
A key aspect of human-computer confluence is its potential for transforming human experience in the sense of bending, breaking and blending the barriers between the real, the virtual and the augmented, to allow users to experience their body and their world in new ways. Research on Presence, Embodiment and Brain-Computer Interface is already exploring these boundaries and asking questions such as: Can we seamlessly move between the virtual and the real? Can we assimilate fundamentally new senses through confluence?
The aim of this book is to explore the boundaries and intersections of the multidisciplinary field of HCC and discuss its potential applications in different domains, including healthcare, education, training and even arts.
DOWNLOAD THE FULL BOOK HERE AS OPEN ACCESS
Please cite as follows:
Andrea Gaggioli, Alois Ferscha, Giuseppe Riva, Stephen Dunne, Isabell Viaud-Delmon (2016). Human computer confluence: transforming human experience through symbiotic technologies. Warsaw: De Gruyter. ISBN 9783110471120.
May 26, 2016
From User Experience (UX) to Transformative User Experience (T-UX)
In 1999, Joseph Pine and James Gilmore wrote a seminal book titled “The Experience Economy” (Harvard Business School Press, Boston, MA) that theorized the shift from a service-based economy to an experience-based economy.
According to these authors, in the new experience economy the goal of a purchase is no longer to own a product (be it a good or a service), but to use it in order to enjoy a compelling experience. An experience, thus, is a whole new type of offering: in contrast to commodities, goods and services, it is designed to be as personal and memorable as possible. Just as in a theatrical performance, companies stage meaningful events to engage customers in a memorable and personal way, offering activities that are engaging and rewarding.
Indeed, looking back over the past ten years, the concept of experience has become central to several fields, including tourism, architecture and – perhaps most relevant for this column – human-computer interaction, with the rise of “User Experience” (UX).
The concept of UX was introduced by Donald Norman in a 1995 article published in the CHI proceedings (D. Norman, J. Miller, A. Henderson: What You See, Some of What's in the Future, And How We Go About Doing It: HI at Apple Computer. Proceedings of CHI 1995, Denver, Colorado, USA). Norman argued that focusing exclusively on usability attributes (i.e., ease of use, efficacy, effectiveness) when designing an interactive product is not enough; one should take into account the whole experience of the user with the system, including the user's emotional and contextual needs. Since then, the UX concept has assumed increasing importance in HCI. As McCarthy and Wright emphasized in their book “Technology as Experience” (MIT Press, 2004):
“In order to do justice to the wide range of influences that technology has in our lives, we should try to interpret the relationship between people and technology in terms of the felt life and the felt or emotional quality of action and interaction.” (p. 12).
However, according to Pine and Gilmore, experience may not be the last step of what they call the “Progression of Economic Value”. Speculating further into the future, they identified the “Transformation Economy” as the likely next phase. In their view, while experiences are essentially memorable events that stimulate the senses and emotions, transformations go much further: they are the result of a series of experiences staged by companies to guide customers in learning, taking action and eventually achieving their aspirations and goals.
In Pine and Gilmore's terms, an aspirant is the individual who seeks advice for personal change (e.g., a better figure, a new career, and so forth), while the provider of this change (a dietitian, a university) is an elicitor. The elicitor guides the aspirant through a series of experiences designed with specific purposes and goals in mind. According to Pine and Gilmore, the main difference between an experience and a transformation is that the latter occurs when an experience is customized:
“When you customize an experience to make it just right for an individual - providing exactly what he needs right now - you cannot help changing that individual. When you customize an experience, you automatically turn it into a transformation, which companies create on top of experiences (recall that phrase: “a life-transforming experience”), just as they create experiences on top of services and so forth” (p. 244).
A further key difference between experiences and transformations concerns their effects: because an experience is inherently personal, no two people can have the same one. Likewise, no individual can undergo the same transformation twice: the second time it is attempted, the individual would no longer be the same person (pp. 254-255).
But what will be the impact of this upcoming “transformation economy” on how people relate to technology? If in the experience economy the buzzword is “User Experience”, in the next stage the new buzzword might be “User Transformation”.
Indeed, we can see some initial signs of this shift. For example, Fitbit and similar self-tracking gadgets are starting to offer personalized advice to foster enduring changes in users' lifestyles. Another example comes from the fields of ambient intelligence and domotics, where there is an increasing focus on designing systems that learn from the user's behaviour (e.g., by tracking the movements of an elderly person in their home) to provide context-aware adaptive services (e.g., sending an alert when the user is at risk of falling).
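To make the idea concrete, here is a minimal, deliberately naive sketch of such a context-aware rule in Python; the sensor model and the threshold are invented for illustration and are not taken from any actual product.

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    """One reading from a hypothetical in-home motion sensor."""
    timestamp: float       # seconds since midnight
    activity_level: float  # 0.0 (motionless) to 1.0 (very active)

def fall_risk_alert(samples: list[MotionSample], max_still_s: float = 600.0) -> bool:
    """Flag a possible fall: the resident has been nearly motionless
    for longer than max_still_s (a deliberately simplistic rule)."""
    still = [s.timestamp for s in samples if s.activity_level < 0.05]
    return bool(still) and (max(still) - min(still)) > max_still_s

# Example: twenty minutes of near-total stillness triggers an alert.
readings = [MotionSample(t, 0.01) for t in range(36000, 37200, 60)]
print(fall_risk_alert(readings))  # True
```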
But the most important ICT step towards the transformation economy will likely come with the introduction of next-generation immersive virtual reality systems. Since these new systems are based on mobile devices (an example is the recent partnership between Oculus and Samsung), they are able to deliver VR experiences that incorporate information on the external/internal context of the user (e.g., time, location, temperature, mood, etc.) by using the sensors embedded in the mobile phone.
By personalizing the immersive experience with context-based information, it might be possible to induce higher levels of involvement and presence in the virtual environment. In the case of cyber-therapeutic applications, this could translate into the development of more effective, transformative virtual healing experiences.
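A toy sketch of what such context-based personalization might look like in code; the sensor inputs and scene parameters below are hypothetical placeholders, not the API of any real VR system.

```python
def personalize_scene(hour: int, ambient_temp_c: float, mood: str) -> dict:
    """Map the user's external/internal context onto VR scene parameters."""
    return {
        "lighting": "warm_sunset" if 17 <= hour <= 21 else "daylight",
        "weather": "snowfall" if ambient_temp_c < 0 else "clear_sky",
        "soundtrack": {"calm": "slow_ambient",
                       "anxious": "guided_breathing"}.get(mood, "neutral"),
    }

# Example: an evening session for a calm user on a mild day.
print(personalize_scene(hour=19, ambient_temp_c=12.0, mood="calm"))
```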
Furthermore, the emergence of "symbiotic technologies", such as neuroprosthetic devices and neuro-biofeedback, is enabling a direct connection between the computer and the brain. Increasingly, these neural interfaces are moving out of the biomedical domain to become consumer products. But unlike existing digital experiential products, symbiotic technologies have the potential to transform basic human experiences far more radically.
Brain-computer interfaces, immersive virtual reality and augmented reality and their various combinations will allow users to create “personalized alterations” of experience. Just as nowadays we can download and install a number of “plug-ins”, i.e. apps to personalize our experience with hardware and software products, so very soon we may download and install new “extensions of the self”, or “experiential plug-ins” which will provide us with a number of options for altering/replacing/simulating our sensorial, emotional and cognitive processes.
Such mediated recombinations of human experience will result from the application of existing neuro-technologies in completely new domains. Although virtual reality and brain-computer interfaces were originally developed for specific domains (e.g., military simulations, neurorehabilitation, etc.), today the use of these technologies has been extended to other fields of application, ranging from entertainment to education.
In the field of biology, Stephen Jay Gould and Elizabeth Vrba (Paleobiology, 8, 4-15, 1982) defined “exaptation” as the process by which a feature acquires a function other than the one it was originally selected for. Likewise, the exaptation of neurotechnologies to the digital consumer market may lead to the rise of a novel “neuro-experience economy”, in which technology-mediated transformation of experience is the main product.
Just as a Genetically-Modified Organism (GMO) is an organism whose genetic material has been altered using genetic-engineering techniques, so we could define a Technologically-Modified Experience (TME) as a re-engineered experience resulting from the artificial manipulation of the neurobiological bases of sensorial, affective, and cognitive processes.
Clearly, the emergence of the transformative neuro-experience economy will happen not in weeks or months but over years. It will take some time before people find brain-computer devices on the shelves of electronics stores: most of these tools are still in the pre-commercial phase at best, and some exist only in laboratories.
Nevertheless, the mere possibility that such a scenario will sooner or later come to pass raises important questions that should be addressed before symbiotic technologies enter our lives: does the technological alteration of human experience threaten the autonomy of individuals, or the authenticity of their lives? How can we help individuals decide which transformations are good or bad for them?
Addressing these important issues will require the collaboration of many disciplines, including philosophy, computer ethics and, of course, cyberpsychology.
Apr 27, 2016
Predictive Technologies: Can Smart Tools Augment the Brain's Predictive Abilities?
Oct 06, 2014
Google Glass can now display captions for hard-of-hearing users
Georgia Institute of Technology researchers have created a speech-to-text Android app for Google Glass that displays captions for hard-of-hearing persons when someone is talking to them in person.
“This system allows wearers like me to focus on the speaker’s lips and facial gestures,” said School of Interactive Computing Professor Jim Foley.
“If hard-of-hearing people understand the speech, the conversation can continue immediately without waiting for the caption. However, if I miss a word, I can glance at the transcription, get the word or two I need and get back into the conversation.”
Captioning on Glass displays captions for the hard-of-hearing (credit: Georgia Tech)
The “Captioning on Glass” app is now available to install from MyGlass. More information here.
Foley and the students are working with the Association of Late Deafened Adults in Atlanta to improve the program. An iPhone app is planned.
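The underlying capture-transcribe-display loop is easy to sketch. The snippet below uses the third-party Python package SpeechRecognition (plus PyAudio for microphone access) purely as an illustration of the pipeline; it is not the Georgia Tech code, which runs on Android and pairs Glass with a smartphone.

```python
import speech_recognition as sr  # pip install SpeechRecognition pyaudio

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate once
    while True:
        # Grab up to five seconds of speech at a time.
        audio = recognizer.listen(source, phrase_time_limit=5)
        try:
            caption = recognizer.recognize_google(audio)  # cloud transcription
            print(caption)  # on Glass, this would go to the head-up display
        except sr.UnknownValueError:
            continue  # nothing intelligible; keep listening
```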
Jun 30, 2014
Never do a Tango with an Eskimo
Apr 06, 2014
The effects of augmented visual feedback during balance training in Parkinson's disease - trial protocol
The effects of augmented visual feedback during balance training in Parkinson's disease: study design of a randomized clinical trial.
BMC Neurol. 2013;13:137
Authors: van den Heuvel MR, van Wegen EE, de Goede CJ, Burgers-Bots IA, Beek PJ, Daffertshofer A, Kwakkel G
Abstract. BACKGROUND: Patients with Parkinson's disease often suffer from reduced mobility due to impaired postural control. Balance exercises form an integral part of rehabilitative therapy but the effectiveness of existing interventions is limited. Recent technological advances allow for providing enhanced visual feedback in the context of computer games, which provide an attractive alternative to conventional therapy. The objective of this randomized clinical trial is to investigate whether a training program capitalizing on virtual-reality-based visual feedback is more effective than an equally-dosed conventional training in improving standing balance performance in patients with Parkinson's disease.
METHODS/DESIGN: Patients with idiopathic Parkinson's disease will participate in a five-week balance training program comprising ten treatment sessions of 60 minutes each. Participants will be randomly allocated to (1) an experimental group that will receive balance training using augmented visual feedback, or (2) a control group that will receive balance training in accordance with current physical therapy guidelines for Parkinson's disease patients. Training sessions consist of task-specific exercises that are organized as a series of workstations. Assessments will take place before training, at six weeks, and at twelve weeks follow-up. The functional reach test will serve as the primary outcome measure, supplemented by comprehensive assessments of functional balance, posturography, and electroencephalography.
DISCUSSION: We hypothesize that balance training based on visual feedback will show greater improvements on standing balance performance than conventional balance training. In addition, we expect that learning new control strategies will be visible in the co-registered posturographic recordings but also through changes in functional connectivity.
Dec 08, 2013
iMirror
Take back your mornings with the iMirror – the interactive mirror for your home. Watch the video for a live demo!
Nov 28, 2013
OutRun: Augmented Reality Driving Video Game
Garnet Hertz's video game concept car combines a car-shaped arcade game cabinet with a real-world electric vehicle to produce a video game system that actually drives. OutRun offers a unique mixed reality simulation as one physically drives through an 8-bit video game. The windshield of the system features custom software that transforms the real world into an 8-bit video game, enabling the user to have limitless gameplay opportunities while driving. Hertz has designed OutRun to de-simulate the driving component of a video game: where game simulations strive to be increasingly realistic (usually in their graphics), this system pursues "real" driving through the game. Additionally, playing off the game-like experience one can have when driving with an automobile navigation system, OutRun explores the consequences of using only a computer model of the world as a navigation tool for driving.
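As a rough illustration of the kind of image processing the windshield software might perform, the OpenCV snippet below pixelates and color-quantizes a camera frame into an 8-bit look; this is a guess at the general technique, not Hertz's actual code.

```python
import cv2
import numpy as np

def eightbitify(frame: np.ndarray, width: int = 64, levels: int = 4) -> np.ndarray:
    """Pixelate a frame and crush its palette for a retro, 8-bit look."""
    h, w = frame.shape[:2]
    small = cv2.resize(frame, (width, max(1, width * h // w)),
                       interpolation=cv2.INTER_AREA)
    step = 256 // levels                      # quantize each color channel
    small = (small // step) * step + step // 2
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

cap = cv2.VideoCapture(0)                     # live camera, like a windshield feed
ok, frame = cap.read()
if ok:
    cv2.imwrite("outrun_style.png", eightbitify(frame))
cap.release()
```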
More info: http://conceptlab.com/outrun/
Aug 07, 2013
OpenGlass Project Makes Google Glass Useful for the Visually Impaired
Re-blogged from Medgadget
Google Glass may have been developed to transform the way people see the world around them, but thanks to Dapper Vision’s OpenGlass Project, one doesn’t even need to be able to see to experience the Silicon Valley tech giant’s new spectacles.
Harnessing the power of Google Glass’ built-in camera, the cloud, and the “hive-mind”, visually impaired users will be able to know what’s in front of them. The system consists of two components: Question-Answer sends pictures taken by the user to Amazon’s Mechanical Turk and Twitter for the public to help identify, while Memento takes video from Glass and uses image matching to identify objects from a database created with the help of sighted users. Information about what the Glass wearer “sees” is read aloud to the user via bone conduction speakers.
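Here is a minimal sketch of the image-matching idea behind Memento, using OpenCV's ORB features to compare a query photo against a small labelled database; the real OpenGlass pipeline is cloud-based and more elaborate, so treat this as an illustration of the concept only.

```python
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def best_label(query_path: str, database: dict[str, str]) -> str:
    """database maps a label (to be read aloud) -> path of a reference image."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, q_desc = orb.detectAndCompute(query, None)
    scores = {}
    for label, ref_path in database.items():
        ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
        _, r_desc = orb.detectAndCompute(ref, None)
        if q_desc is None or r_desc is None:  # no features found
            scores[label] = 0
            continue
        good = [m for m in matcher.match(q_desc, r_desc) if m.distance < 40]
        scores[label] = len(good)             # more close matches = better fit
    return max(scores, key=scores.get)
```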
Here’s a video that explains more about how it all works:
Hive-mind solves tasks using Google Glass ant game
Re-blogged from New Scientist
Glass could soon be used for more than just snapping pics of your lunchtime sandwich. A new game will connect Glass wearers to a virtual ant colony vying for prizes by solving real-world problems that vex traditional crowdsourcing efforts.
Crowdsourcing is most famous for collaborative projects like Wikipedia and "games with a purpose" like FoldIt, which turns the calculations involved in protein folding into an online game. All require users to log in to a specific website on their PC.
Now Daniel Estrada of the University of Illinois in Urbana-Champaign and Jonathan Lawhead of Columbia University in New York are seeking to bring crowdsourcing to Google's wearable computer, Glass.
The pair have designed a game called Swarm! that puts a Glass wearer in the role of an ant in a colony. Similar to the pheromone trails laid down by ants, players leave virtual trails on a map as they move about. These behave like real ant trails, fading away with time unless reinforced by other people travelling the same route. Such augmented reality games already exist – Google's Ingress, for one – but in Swarm! the tasks have real-world applications.
Swarm! players seek out virtual resources to benefit their colony, such as food, and must avoid crossing the trails of other colony members. They can also monopolise a resource pool by taking photos of its real-world location.
To gain further resources for their colony, players can carry out real-world tasks. For example, if the developers wanted to create a map of the locations of every power outlet in an airport, they could reward players with virtual food for every photo of a socket they took. The photos and location data recorded by Glass could then be used to generate a map that anyone could use. Such problems can only be solved by people out in the physical world, yet the economic incentives aren't strong enough for, say, the airport owner to provide such a map.
Estrada and Lawhead hope that by turning tasks such as these into games, Swarm! will capture the group intelligence ant colonies exhibit when they find the most efficient paths between food sources and the home nest.
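The trail mechanic maps neatly onto a classic ant-colony pheromone model: deposits decay exponentially over time unless reinforced by other players. Below is a small illustrative sketch of that bookkeeping; the grid, half-life, and deposit size are invented, not taken from Swarm!.

```python
import math

class TrailMap:
    """Virtual pheromone trails on a grid: fade over time, grow when walked."""

    def __init__(self, half_life_s: float = 3600.0):
        self.decay = math.log(2) / half_life_s
        self.cells = {}  # (x, y) grid cell -> (strength, last_update_time)

    def strength(self, cell, now: float) -> float:
        s, t = self.cells.get(cell, (0.0, now))
        return s * math.exp(-self.decay * (now - t))  # exponential fade

    def reinforce(self, cell, now: float, deposit: float = 1.0) -> None:
        """A player walking this cell tops up whatever strength remains."""
        self.cells[cell] = (self.strength(cell, now) + deposit, now)

trails = TrailMap()
trails.reinforce((3, 5), now=0.0)
print(trails.strength((3, 5), now=3600.0))  # ~0.5 after one half-life
```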
Read full story
Jul 30, 2013
How it feels (through Google Glass)
Jul 23, 2013
Augmented Reality - Projection Mapping
Oct 27, 2012
3D Projection Mapping
Mar 31, 2012
Hyper(reality) - The Last Tuesday Society
Project description: Embodying the concept of hyperreality, the helmet provides a digital experience, immersing the user in an alternative version of reality seen through the helmet. Instead of having a static point of view, the user becomes able to navigate through the 3D environment, enabling new behaviours specific to the hyperreal world while still having to physically interact with the real environment. This creates an odd interface between the two states.
The suit is composed of a helmet with high-definition video glasses, an Arduino glove with force sensors controlling the 3D view, and a harness for the Kinect. Each user experience is recorded and analysed, portraying user behaviours during the experience. Immersed in this dream-like virtual space, the user gradually discovers the collection of curiosities. Behaviours are modified and the notion of scale is distorted, pushing the boundaries of the physical space. Venetian masks, stuffed animals and old sculptures start floating in the air around the user, creating a new sensorial experience.
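For a sense of how the glove might drive the view, here is a hypothetical host-side sketch that reads two force-sensor values over serial (via pyserial) and maps them to camera motion; the port, baud rate, and message format are assumptions, not details from the project.

```python
import serial  # pip install pyserial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            fwd, turn = (int(v) for v in line.split(","))  # e.g. "512,300"
        except ValueError:
            continue  # malformed packet; skip it
        velocity = fwd / 1023.0          # 10-bit ADC -> 0.0..1.0 speed
        yaw_rate = (turn - 512) / 512.0  # centred reading -> -1.0..1.0 turn
        print(velocity, yaw_rate)        # feed these into the 3D renderer
```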
Oct 17, 2010
Mapping virtual content on 3d physical constructions
Via Augmented Times
This video shows the results achieved in the paper "Build Your World and Play In It: Interacting with Surface Particles on Complex Objects", presented at ISMAR 2010 by Brett Jones and other researchers from the University of Illinois. The paper presents a way to map virtual content onto 3D physical constructions and "play" with them. Nice stuff.
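The core geometric step in this kind of projection mapping, for a single planar face, is a perspective warp. The OpenCV sketch below warps a virtual texture onto four projector-space corners; the coordinates are placeholders, and the paper itself handles full 3D surfaces, which is considerably harder.

```python
import cv2
import numpy as np

# A virtual texture to "paint" onto one flat face of a physical object.
texture = np.zeros((256, 256, 3), np.uint8)
cv2.putText(texture, "HI", (60, 170), cv2.FONT_HERSHEY_SIMPLEX, 3, (0, 255, 0), 8)

src = np.float32([[0, 0], [256, 0], [256, 256], [0, 256]])
# Where the face's four corners appear in projector coordinates (placeholders).
dst = np.float32([[120, 80], [520, 130], [500, 420], [100, 380]])

H = cv2.getPerspectiveTransform(src, dst)
frame = cv2.warpPerspective(texture, H, (640, 480))  # full projector frame
cv2.imwrite("projector_frame.png", frame)
```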
Aug 27, 2010
Augmented City
Keiichi Matsuda has done it again. After the success of Domestic Robocop, the architecture graduate and filmmaker was nominated for the Royal Institute of British Architects (RIBA) Silver Medal award for his new video "Augmented City". As in his previous work, in this new video Matsuda describes a future world overlaid with digital information, whose built environment can be manipulated by the individual. In this way, the objective physical world is transformed into a subjective virtual space.
In Matsuda's own words:
Augmented City explores the social and spatial implications of an AR-supported future. 'Users' of the city can browse through channels of the augmented city, creating aggregated customised environments. Identity is constructed and broadcast, while local records and coupons litter the streets. The augmented city is an architectural construct modulated by the user, a liquid city of stratified layers that perceptually exists in the space between the self and the built environment. This subjective space allows us to re-evaluate current trends, and examine our future occupation of the augmented city.
Mar 01, 2010
LevelHead v1.0
Augmented Reality Tattoos
Via Sketchin
ThinkAnApp’s augmented reality tattoo transforms into a wing-flapping dragon when viewed via webcam.
Concept demo of an AR application
Via Augmented Times
In this concept demo, the user takes a picture of a historical building and sees it merged with a historical image.