Jun 21, 2016
Two pieces of good news for Positive Technology followers.
1) Our new book on Human Computer Confluence is out!
2) It can be downloaded for free here
Human-computer confluence refers to an invisible, implicit, embodied or even implanted interaction between humans and system components. New classes of user interfaces are emerging that make use of several sensors and are able to adapt their physical properties to the current situational context of users.
A key aspect of human-computer confluence is its potential for transforming human experience in the sense of bending, breaking and blending the barriers between the real, the virtual and the augmented, to allow users to experience their body and their world in new ways. Research on Presence, Embodiment and Brain-Computer Interface is already exploring these boundaries and asking questions such as: Can we seamlessly move between the virtual and the real? Can we assimilate fundamentally new senses through confluence?
The aim of this book is to explore the boundaries and intersections of the multidisciplinary field of HCC and discuss its potential applications in different domains, including healthcare, education, training and even arts.
Please cite as follows:
Andrea Gaggioli, Alois Ferscha, Giuseppe Riva, Stephen Dunne, Isabell Viaud-Delmon (2016). Human computer confluence: transforming human experience through symbiotic technologies. Warsaw: De Gruyter. ISBN 9783110471120.
Jan 20, 2014
Thalmic Labs at TEDxToronto
Nov 20, 2013
inFORM is a Dynamic Shape Display developed by the MIT Tangible Media Group that can render 3D content physically, so users can interact with digital information in a tangible way.
inFORM can also interact with the physical world around it, for example moving objects on the table’s surface.
Remote participants in a video conference can be displayed physically, allowing for a strong sense of presence and the ability to interact physically at a distance.
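The core rendering idea behind a pin-array shape display can be sketched in a few lines: downsample a depth image to the pin grid and quantize each cell to a pin height. The grid size, value ranges and function names below are illustrative assumptions, not details of the actual inFORM system.

```python
# Sketch: convert a 2D depth map (values in 0..1) into pin heights for a
# hypothetical grid_w x grid_h pin array, by average-pooling each region
# and scaling to the pin's travel range. Parameters are invented for
# illustration and do not reflect inFORM's real specifications.

def depth_to_pin_heights(depth, grid_w, grid_h, max_height):
    """Average-pool a depth map (list of rows of floats in 0..1)
    down to grid_w x grid_h integer pin heights in 0..max_height."""
    rows, cols = len(depth), len(depth[0])
    heights = []
    for gy in range(grid_h):
        row = []
        y0, y1 = gy * rows // grid_h, (gy + 1) * rows // grid_h
        for gx in range(grid_w):
            x0, x1 = gx * cols // grid_w, (gx + 1) * cols // grid_w
            cells = [depth[y][x] for y in range(y0, y1)
                                 for x in range(x0, x1)]
            avg = sum(cells) / len(cells)
            row.append(round(avg * max_height))
        heights.append(row)
    return heights
```

The same pipeline would serve the video-conference scenario above: the remote participant's depth image is captured, pooled to the pin grid, and replayed as physical relief in real time.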
Jan 27, 2012
A team of three designers, Oleg Imanilov, Zvika Markfield, and Tomer Daniel, has recently developed a novel sign language interpreter glove. The glove works as an input device that lets the user compose a text message using sign language. The hardware consists of an A/D board, a gyroscope, finger sensors, a LilyPad Arduino, and an accelerometer. The prototype was demonstrated at a recent Google developers event in Tel Aviv and can be seen in the video below.
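The glove's core mapping from sensor readings to letters could be as simple as nearest-neighbour matching against stored hand-shape templates. The templates, sensor layout and function names below are invented for illustration; the prototype's actual processing was not published in detail.

```python
# Hypothetical sketch: classify a vector of finger flex-sensor readings
# (one value in 0..1 per finger, thumb to pinky) by finding the stored
# letter template with the smallest squared distance. Template values
# are made up for illustration.

TEMPLATES = {
    "A": [0.1, 0.9, 0.9, 0.9, 0.9],  # fist with thumb extended
    "B": [0.9, 0.1, 0.1, 0.1, 0.1],  # flat hand, thumb folded
    "L": [0.1, 0.1, 0.9, 0.9, 0.9],  # thumb and index extended
}

def classify(reading):
    """Return the letter whose template is closest to the reading."""
    def dist(template):
        return sum((a - b) ** 2 for a, b in zip(reading, template))
    return min(TEMPLATES, key=lambda k: dist(TEMPLATES[k]))
```

Classified letters would then be accumulated into the outgoing text message; the gyroscope and accelerometer could disambiguate signs that share a hand shape but differ in motion.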
Jul 29, 2007
Lomak International Limited was awarded the Gold Prize in the Computer Equipment category in IDSA's 2007 Awards.
Company explains its technology:
Lomak (light operated mouse and keyboard) is designed for people who have difficulty with, or are unable to use, a standard computer keyboard and mouse. A hand or head pointer controls a beam of light that highlights, then confirms, the key or mouse functions on the keyboard. Because each key must be confirmed, only the correct selection is entered, which reduces errors and increases input speed.
In addition to speed and accuracy, Lomak offers a number of advantages over other access methods, including:
- versatility and ease of use and training (people can be up and running with it almost immediately)
- it requires no calibration and can operate in any ambient conditions
- it does not require software (i.e. no dedicated computers are required for users with disabilities; conversely, users can log into their own PCs without assistance)
- it does not require any screen area (no on-screen keyboard or mouse menu is required)
- it can be used with any application (e.g. proprietary software such as accounting/payroll applications and other business software)
Lomak is ideal for a work environment as it is easy to install, use and manage. It requires little or no technical support because, from a systems perspective, it is recognised simply as a USB keyboard and mouse.
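The highlight-then-confirm behaviour described above can be sketched as a small state machine. This is my reconstruction from the company's description, not their implementation: pointing the light at a key only highlights it, and the key is entered only when a separate confirm target is activated, so stray pointer movement never produces input.

```python
# Sketch of a highlight-then-confirm selection state machine, in the
# spirit of Lomak's two-step input. Class and method names are
# illustrative assumptions.

class HighlightConfirm:
    def __init__(self):
        self.highlighted = None   # key currently lit, not yet entered
        self.typed = []           # confirmed output so far

    def point(self, key):
        """The light beam lands on a key: highlight it, replacing any
        previously highlighted key (nothing is entered yet)."""
        self.highlighted = key

    def confirm(self):
        """The beam hits the confirm area: enter the highlighted key."""
        if self.highlighted is not None:
            self.typed.append(self.highlighted)
            self.highlighted = None
```

Note how re-pointing before confirming silently replaces the highlighted key, which is exactly the error-reduction property the description claims.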
Jul 06, 2007
Japan's NTT has unveiled a system that makes three-dimensional images solid enough to grasp. The device creates the illusion of depth perception and provides haptic feedback.
I believe that, among its potential applications, this technology could be effectively used in the rehabilitation of the upper limb following stroke.
NTT engineer Shiro Ozawa, who developed the system, envisages various applications. "You would be able to take the hand, or gently pat the head, of your beloved grandchild who lives far away from you," he says.
Anthony Steed, who works with haptic systems at University College London, UK, says the real-time image capture made possible by the Tangible 3D system is especially interesting.
His own research group has performed related work. But this involved connecting a haptic device to a 2D display on which the user's hands are projected, rather than allowing users to manipulate virtual objects directly. He thinks the NTT system could make the interaction feel much more real, although the haptic glove could hinder this.
Steed's group wants to use such technology to make valuable museum exhibits touchable and is working with the British Museum in London towards this goal.
Jan 24, 2007
Taking advantage of its venue in Grenoble, a historic centre of innovation in the arts in France, the 4th International Conference on Enactive Interfaces will exceptionally be extended with an intellectual and artistic event: Enaction_in_Arts.
- 4th International Conference on Enactive Interfaces
In continuation of the previous editions (2004, Villard-de-Lans, France; 2005, Genoa, Italy; 2006, Montpellier, France), Enactive / 07 aims to promote the concept of Enaction in the field of Information and Communication Technologies. Creative researchers, innovative engineers and producers are invited to present and compare their latest theoretical, experimental, technological and applied advances in talk, demo and poster sessions.
Arts and Culture is one of the main fields intimately linked with contemporary concepts and technologies. The Enaction_in_Arts sessions aim to promote innovative artistic creations, theories and technologies for the future of the arts. It will be a unique meeting at the crossing point of Art, Science and Technology, offering researchers, engineers and artists the opportunity to discover, in the same place and at the same time, cutting-edge research, technologies and artworks centred around Enaction and Enactive Systems.
Deadline for preliminary submission to Enaction_in_Arts extended to January 31, 2007.
Deadline for scientific papers and posters: July 20, 2007
Jan 09, 2007
A model of (en)action to approach embodiment: a cornerstone for the design of virtual environments for learning
Virtual Reality Journal, Springer London, Volume 10, Number 3-4 / December, 2006, Pages 253-269.
Author: Daniel Mellet-d’Huart
This paper presents a model of (en)action from a conceptual and theoretical point of view. This model is used to provide solid bases to overcome the complexity of designing virtual environments for learning (VEL). It provides a common grounding for trans-disciplinary collaborations where embodiment can be perceived as the cornerstone of the project. Where virtual environments are concerned, both computer scientists and educationalists have to deal with the learner/user’s body; therefore the model provides tools with which to approach both human actions and learning processes within a threefold model. It is mainly based on neuroscientific research, including enaction and the neurophysiology of action.
Nov 21, 2006
From the ENACTIVE project website:
The general objective of the ENACTIVE Network is the creation of a multidisciplinary research community with the aim of structuring the research on a new generation of human-computer interfaces called Enactive Interfaces.
Enactive Interfaces are related to a fundamental “interaction” concept which is not exploited by most of the existing human-computer interface technologies. As stated by the famous cognitive psychologist Jerome Bruner, the traditional interaction with the information mediated by a computer is mostly based on symbolic or iconic knowledge, and not on enactive knowledge. While in the symbolic way of learning knowledge is stored as words, mathematical symbols or other symbol systems, in the iconic stage knowledge is stored in the form of visual images, such as diagrams and illustrations that can accompany verbal information. On the other hand, enactive knowledge is a form of knowledge based on the active use of the hand for apprehension tasks.
Enactive knowledge is not simply multisensory mediated knowledge, but knowledge stored in the form of motor responses and acquired by the act of "doing". A typical example of enactive knowledge is constituted by the competence required by tasks such as typing, driving a car, dancing, playing a musical instrument, modelling objects from clay, which would be difficult to describe in an iconic or symbolic form. This type of knowledge transmission can be considered the most direct, in the sense that it is natural and intuitive, since it is based on the experience and on the perceptual responses to motor acts.
Enactive Interfaces are new types of human-computer interface that allow users to express and transmit enactive knowledge by integrating different sensory aspects.
The driving concept of Enactive Interfaces is thus the fundamental role of motor action in storing and acquiring knowledge (action-driven interfaces). Enactive Interfaces are capable of conveying and understanding the user's gestures in order to provide an adequate perceptual response. They can be considered a new step in the development of human-computer interaction because they are characterized by a closed loop between the natural gestures of the user (the efferent component of the system) and the perceptual modalities activated (the afferent component). Enactive Interfaces can be designed to exploit this direct loop and the capability of recognising complex gestures.
The development of such interfaces requires a common vision between different research areas, such as computer vision, haptics and sound processing, with greater attention to the motor-action aspect of interaction. An example of a prototypical system that introduces Enactive Interfaces is the reactive robot: a robot that remains in contact with the human hand, interprets the human's movements and guides the human through the completion of a manipulation task.
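The closed efferent/afferent loop described above can be illustrated abstractly: the interface reads the user's hand position, compares it with a target trajectory, and renders a corrective force, in the manner of a reactive robot guiding a manipulation task. All names and the simple proportional law below are illustrative assumptions, not part of any ENACTIVE project system.

```python
# Abstract sketch of the closed sensorimotor loop: the device senses the
# hand (efferent side), computes a corrective force toward a target, and
# renders it back to the user (afferent side). A plain proportional
# guidance law is assumed for illustration.

def guidance_step(hand_pos, target_pos, gain=0.5):
    """One loop iteration: the corrective force the device would render
    toward the target (proportional guidance)."""
    return [gain * (t - h) for h, t in zip(hand_pos, target_pos)]

def simulate(hand, target, steps=20):
    """Iterate the loop: model the hand yielding to the rendered force."""
    for _ in range(steps):
        force = guidance_step(hand, target)
        hand = [h + f for h, f in zip(hand, force)]
    return hand
```

Run over a few iterations, the guided hand converges on the target, which is the essence of a reactive robot keeping contact while steering the user through a manipulation task.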