Feb 09, 2014

Nick Bostrom: The intelligence explosion hypothesis

Via IEET

Nick Bostrom is a Swedish philosopher at the University of Oxford known for his work on existential risk and the anthropic principle, covered in books such as Global Catastrophic Risks, Anthropic Bias and Human Enhancement. He holds a PhD from the London School of Economics. He is currently the director of both the Future of Humanity Institute and the Programme on the Impacts of Future Technology, part of the Oxford Martin School at the University of Oxford.

Aug 07, 2013

Phubbing: the war against anti-social phone use

Via Textually.org


Don't you just hate it when someone snubs you by looking at their phone instead of paying attention? The Stop Phubbing campaign group certainly does. The Guardian reports.

In a list of "Disturbing Phubbing Stats" on their website, of note:

-- If phubbing were a plague it would decimate six Chinas

-- 97% of people claim their food tasted worse while being a victim of phubbing

-- 92% of repeat phubbers go on to become politicians

So it's really just a joke site? Well, a joke site with a serious message about our growing estrangement from our fellow human beings. But mostly a joke site, yes.

Read full article.

Sep 03, 2012

Therapy in Virtual Environments - Clinical and Ethical Issues

Telemed J E Health. 2012 Jul 23;

Authors: Yellowlees PM, Holloway KM, Parish MB

Abstract. Background: As virtual reality and computer-assisted therapy strategies are increasingly implemented for the treatment of psychological disorders, ethical standards and guidelines must be considered. This study determined a set of ethical and legal guidelines for treatment of post-traumatic stress disorder (PTSD)/traumatic brain injury (TBI) in a virtual environment incorporating the rights of an individual who is represented by an avatar. Materials and Methods: A comprehensive literature review was undertaken. An example of a case study of therapy in Second Life (a popular online virtual world developed by Linden Labs) was described. Results: Ethical and legal considerations regarding psychiatric treatment of PTSD/TBI in a virtual environment were examined. The following issues were described and discussed: authentication of providers and patients, informed consent, patient confidentiality, patient well-being, clinician competence (licensing and credentialing), training of providers, insurance for providers, the therapeutic environment, and emergencies. Ethical and legal guidelines relevant to these issues in a virtual environment were proposed. Conclusions: Ethical and legal issues in virtual environments are similar to those that occur in the in-person world. Individuals represented by an avatar have the rights equivalent to the individual and should be treated as such.

Jul 14, 2012

Does technology affect happiness?

Via The New York Times

As young people spend more time on computers, smartphones and other devices, researchers are asking how all that screen time and multitasking affects children’s and teenagers’ ability to focus and learn — even drive cars.

A study from Stanford University, published Wednesday, wrestles with a new question: How is technology affecting their happiness and emotional development?

Read the full article here

http://bits.blogs.nytimes.com/2012/01/25/does-technology-affect-happiness/

Mar 11, 2012

Augmenting cognition: old concept, new tools

The increasing miniaturization and computing power of information technology devices allow new ways of interaction between human brains and computers, progressively blurring the boundaries between man and machine. An example is provided by brain-computer interface systems, which allow users to use their brain to control the behavior of a computer or of an external device such as a robotic arm (in this latter case, we speak of “neuroprosthetics”).

 

The idea of using information technologies to augment cognition, however, is not new, dating back to the 1950s and 1960s. One of the first to write about this concept was the British psychiatrist W. Ross Ashby.

In his Introduction to Cybernetics (1956), he described intelligence as the “power of appropriate selection,” which could be amplified by means of technology in the same way that physical power is amplified. A second major conceptual contribution towards the development of cognitive augmentation was provided a few years later by computer scientist and Internet pioneer Joseph Licklider, in a paper entitled Man-Computer Symbiosis (1960).

In this article, Licklider envisions the development of computer technologies that will enable users “to think in interaction with a computer in the same way that you think with a colleague whose competence supplements your own.” According to his vision, the rise of computer networks would make it possible to connect millions of human minds within a “'thinking center' that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval.” This view represents a departure from the prevailing artificial intelligence approach of that time: instead of creating an artificial brain, Licklider focused on the possibility of developing new forms of interaction between humans and information technologies, with the aim of extending human intelligence.

A similar view was proposed in the same years by another computer visionary, Douglas Engelbart, in his famous 1962 report entitled Augmenting Human Intellect: A Conceptual Framework.

In this report, Engelbart defines the goal of intelligence augmentation as “increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble (…) We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human ‘feel for a situation’ usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.”

These “electronic aids” nowadays include all kinds of hardware and software computing devices used, for example, to store information in external memories, to process complex data, to perform routine tasks, and to support decision making. However, today the concept of cognitive augmentation is not limited to the amplification of human intellectual abilities through external hardware. As recently noted by Nick Bostrom and Anders Sandberg (Sci Eng Ethics 15:311–341, 2009), “What is new is the growing interest in creating intimate links between the external systems and the human user through better interaction. The software becomes less an external tool and more of a mediating ‘‘exoself’’. This can be achieved through mediation, embedding the human within an augmenting ‘‘shell’’ such as wearable computers (…) or virtual reality, or through smart environments in which objects are given extended capabilities” (p. 320).

At the forefront of this trend is neurotechnology, an emerging research and development field that includes technologies specifically designed to improve brain function. Examples of neurotechnologies include brain training games such as BrainAge and programs like Fast ForWord, but also neurodevices used to monitor or regulate brain activity, such as deep brain stimulators (DBS), and smart prosthetics for the replacement of impaired sensory systems (e.g., cochlear or retinal implants).

Clearly, the vision of neurotechnology is not free of issues. The more powerful and sophisticated these technologies become, the more attention should be dedicated to understanding the socio-economic, legal, and ethical implications of their applications in various fields, from medicine to neuromarketing.


May 21, 2011

You are not a gadget

Recently, I came across an intriguing book that brings a new, thought-provoking perspective on how the Internet is shaping our lives and culture. The title of the book is You Are Not a Gadget: A Manifesto and the author is Jaron Lanier, a computer scientist and musician who is best known for his pioneering work in the field of virtual reality.

The leitmotiv of the book can be summarized in a single question: are new technologies really playing an empowering role, by increasing people’s creativity, control, and freedom? As can be expected from the title, the author’s answer is more negative than positive. To construct his argument, Lanier starts from the observation that the evolution of computing is not as free of constraints as one might assume.

As a key example, the author describes the evolution of MIDI, a protocol for composing and playing music on computers. This format emerged in the early 1980s and was immediately recognized as an empowering tool for musicians. However, as more and more people adopted it, it became a rigid standard that limited the expressive potential of artists because, as Lanier points out, it ‘‘could only describe the tile mosaic world of the keyboardist, not the watercolor world of the violin.’’ For the author, this lock-in effect can be seen in other fields of information technology. For example, certain features that were included in the early versions of the UNIX operating system are now deeply embedded in the software and cannot be modified, even if they are considered obsolete or inappropriate. Once an approach becomes standard, it tends to inhibit other solutions, thereby limiting the potential for creativity.

Lanier goes on to demystify some of today’s most popular Internet buzzwords, such as ‘‘Web 2.0,’’ ‘‘Open Culture,’’ ‘‘Mash-Ups,’’ and ‘‘Wisdom of Crowds.’’ He maintains that these trendy notions ultimately point to a new form of ‘‘digital collectivism,’’ which, rather than encouraging individual inventiveness, promotes mediocrity and conformity. By allowing everyone to offer up their opinions and ideas, the social web is melting into an indistinct pool of information, a vast gray zone where it is increasingly difficult to find quality or meaningful content. This observation leads the author to the counterintuitive conclusion that the introduction of boundaries is sometimes useful (if not necessary) to achieve originality and excellence.

Another issue raised by Lanier concerns the risk of de-humanization and de-individualization associated with online social networks. He describes the early Web as a space full of ‘‘flavours and colours’’ where each Web site was different from the others and contributed to the diversity of the Internet landscape. But with the advent of Facebook and other similar sites, this richness was lost because people started creating their personal web pages using predefined templates. On the one hand, this formalism has allowed anyone to create, publish, and share content online easily (blog, video, music, etc.). On the other hand, it has reduced the potential for individuals to express their uniqueness.

Lanier reminds us of the importance of putting the human being, and not the machine, at the center of concerns for technology and innovation. For this goal to be achieved, it is not enough to develop usable and accessible tools; it is also necessary to emphasize the uniqueness of experience. This humanist faith leads the author to criticize the idea of the technological Singularity, popularized by recognized experts such as Ray Kurzweil, Vernor Vinge, and Kevin Kelly. This concept holds that the exponential increase in computing power and communication networks, combined with rapid advances in the fields of artificial intelligence and robotics, may lead to the emergence of a super-intelligent organism (the ‘‘Singularity’’), which could eventually develop intentional agency and subordinate the human race. Lanier’s opposition to this idea is based on the conviction that the ‘‘human factor’’ will continue to play an essential role in the evolution of technology. The author believes that computers will never be able to replace the uniqueness of humans nor replicate the complexity of their experience. Further, he considers the concept of the technological Singularity culturally dangerous because it reinforces the idea of an inevitable superiority of machines over humans: ‘‘People degrade themselves in order to make machines seem smart all the time,’’ writes Lanier.

However, Lanier genuinely admires the potential of the Internet and new technologies. This is why he calls for a new ‘‘technological humanism’’ able to counter the overarching vision of digital collectivism and empower creative self-expression. As a key illustration, the author describes the unique combination of idealism, technical skill, and imaginative talent that, in the 1980s, led a small group of programmers to conceive the vision of virtual reality. This powerful new paradigm in human–computer interaction inspired, in the following decades, a number of innovative applications in industry, education, and medicine.

Aside from the nostalgic remembrances of the heroic times of Silicon Valley and the sophisticated overtones of some terms (e.g., ‘‘numinous neoteny’’), Lanier’s book conveys a clear message and deserves the attention of all who are interested in the relationship between humans and technology. The idea that technological innovation should be informed by human values and experience is not new, but Lanier brings it out vividly, in detail, and with a number of persuasive examples.

More to explore

  • Jaron Lanier’s homepage: The official website of Jaron Lanier, whose old-fashioned style recaptures the freshness and simplicity of the early Internet. The website features biographical information about the author and includes links to many of Lanier’s articles and commentaries on a range of technology-related topics. 
  • Kurzweil Accelerating Intelligence: Launched in 2001, Kurzweil Accelerating Intelligence explores the forecasts and insights on accelerating change described in Ray Kurzweil’s books, with updates about breakthroughs in science and technology.
  • Singularity University: Singularity University is an interdisciplinary university founded by Ray Kurzweil and other renowned experts in technology with the support of a number of sponsors (including Google), whose mission is “to stimulate groundbreaking, disruptive thinking and solutions aimed at solving some of the planet’s most pressing challenges”. Singularity University is based at the NASA Ames campus in Silicon Valley.
  • Humanity+: Humanity+ is a non-profit organization dedicated to “the ethical use of technology to extend human capabilities and transcend the legacy of the human condition”. The mission of the organization is to support discussion and public awareness about emerging technologies, as well as to propose solutions for potential problems related to these technologies. The website includes plenty of resources about transhumanism topics and news about upcoming seminars and conferences.

 

Jul 10, 2009

Neuroscience and the military: ethical implications of war neurotechnologies

Super soldiers equipped with neural implants, suits that contain biosensors, and thought scans of detainees may become reality sooner than you think.

In this video taken from the show "Conversations from Penn State", Jonathan Moreno discusses the ethical implications of the applications of neuroscience in modern warfare.

Moreno is the David and Lyn Silfen University Professor and professor of medical ethics and of the history and sociology of science at the University of Pennsylvania, and was formerly the director of the Center for Biomedical Ethics at the University of Virginia. He has served as a senior staff member for two presidential commissions and is an elected member of the Institute of Medicine of the National Academies.