
Jul 03, 2006

Ethics of Persuasive Technology

BJ Fogg has posted to his blog "Captology Notebook" an unpublished article on the ethics of persuasive technology that is well worth reading.

 

ANALYZING THE ETHICS OF PERSUASIVE TECHNOLOGY

2005

 

DANIEL BERDICHEVSKY
B.J. FOGG
RAMIT SETHI
MANU KUMAR

Stanford Persuasive Technology Lab

Authors' addresses: Daniel Berdichevsky, 1006 Wall St. Los Angeles, CA 90015; B.J. Fogg, Ramit Sethi and Manu Kumar, Persuasive Technology Lab, Stanford University, Box 20456, Stanford, CA 94309
________________________________________________________________________


1. INTRODUCTION
When a doctor tells you your blood pressure is too high, you may modify your lifestyle to compensate: less salt, more exercise, fewer freedom fries. In a sense, the doctor has committed a persuasive act. We are unlikely to question the ethics of her having done so. But what if the doctor were out of the picture? Suppose a device at your bedside not only informed you of your blood pressure each morning, but shifted colors to convey the likelihood of your suffering a heart attack. Would such a persuasive technology be as ethical as a doctor giving you advice in her office? What if it were co-marketed with a treadmill?

The objective of this article is to describe and test a framework for analyzing the ethics of technologies that change people’s attitudes and behaviors. We focus on computerized persuasive technologies—the field known as “captology”—though similar considerations apply to non-computerized technologies, and even to technologies that change attitudes and behaviors by accident.

This idea of unintended persuasive outcomes is one reason we are revisiting this topic four years after the publication of an earlier piece by one of the authors, "Toward an Ethics of Persuasive Technology" [Berdichevsky & Neuenschwander, 1999]. Since 1999, persuasive elements have become common enough in both hardware and software—and especially on the web—that designers may not always be conscious of their persuasive nature. They may take them for granted. This telephone beeps to remind you to check your voice mail; that search engine changes its logo every day in a continuing narrative to pull you back more often. But even when persuasion is incidental to other design motives, it requires ethical attention—particularly because end users may not be expecting it and are therefore caught with their defenses down. [Fogg, 2002]

Why pay special attention to computerized persuasive technologies? The chief reason is that while non-computerized technologies can certainly be persuasive, only rarely can they stand on their own. A television with no infomercial to display will not convince you to buy new cutlery. A whip without someone to wield it will cow no slave into obedience. Even a carpool lane requires enforcement.

What makes computerized persuasive technologies more interesting than these examples is that they can persuade independently. Computerized persuasive technologies are also dynamic, changing in response to different users and their inputs. They allow persuasion—and the persuasive experience—to be simultaneously mass-manufactured and user-specific. For instance, a wristwatch that encourages you to keep running by congratulating you on the specific number of calories you have burned is completely self-contained. This leads to questions of agency: do you blame the wristwatch if a runner suffers a heart attack trying to achieve a certain pulse?

The full article can be accessed here.
