Friday, October 08, 2010

Ethical calculus

Meet EthEl, an Ethical Eldercare robot of the Nao family - apparently the first machine ever programmed with "ethical principles." Her programmers describe their work in the new Scientific American. They are inspired by the "moral arithmetic" of Bentham and Mill (and apparently unaware that someone might see Bentham and Mill themselves as inspired by machinery), despite its problems:

Most ethicists doubt [Hedonistic Act Utilitarianism] accounts for all the dimensions of ethical concern. For example, it has difficulty capturing justice considerations and can lead to an individual being sacrificed in the interests of the majority. But at least it demonstrates that a plausible ethical theory is, in principle, computable. (74)

Knowing this shortcoming, couldn't one just program EthEl with a "justice" principle? That's just what they did.
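
To see what "in principle, computable" amounts to, here is a minimal sketch of a hedonistic act utilitarian with a justice constraint bolted on. It is my own illustration, not the authors' code; the numbers, the harm floor, and the veto rule are all invented:

```python
# Toy Hedonistic Act Utilitarian: score each action by its net pleasure
# summed over everyone affected, then pick the maximum. The "justice"
# patch is a side constraint: veto any action that buys the majority's
# pleasure by harming one person too much.

def net_utility(effects):
    """effects: list of (pleasure - pain) values, one per person affected."""
    return sum(effects)

def is_unjust(effects, floor=-5.0):
    """Crude justice principle: no individual may be harmed below a floor."""
    return min(effects) < floor

def choose_action(options):
    """options: dict mapping action name -> list of per-person effects."""
    permissible = {a: e for a, e in options.items() if not is_unjust(e)}
    return max(permissible, key=lambda a: net_utility(permissible[a]))

options = {
    "harvest": [10, 10, 10, -20],  # best total, but sacrifices one person
    "refrain": [1, 1, 1, 1],       # mildly good for everyone
}
print(choose_action(options))  # -> "refrain": the justice veto blocks "harvest"
```

Without the veto, "harvest" wins on raw totals - exactly the sacrifice-of-the-individual worry the authors concede.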

An ethical calculus has long been a dream, but Aristotle famously argued that the difficulty in ethical life isn't knowing principles but knowing how to apply them. Phronesis (practical wisdom) is the pearl of great price, and only a very few people achieve it. Particular cases are unclear. (One might add that clear cases are of limited use. And, as the lawyers say, "hard cases make bad law.") Pretty much the only way to learn is to find a phronimos and imitate him.

But EthEl and her ilk learn, and can presumably even share what they learn with each other. The article's illustration shows a more old-fashioned tin-man robot who somehow learns how to find the right balance of three principles: Do Good, Prevent Harm, Be Fair. EthEl seems to operate with an additional principle, which the authors call the Duty to Maintain Herself.
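
How might a machine learn such a balance? A toy version (mine, not the Andersons' algorithm) would represent each candidate action as a vector of duty satisfaction levels and learn weights from cases an ethicist has already judged:

```python
# Sketch of learning to balance duties from judged cases. Each action is
# a vector of duty satisfaction levels: (do_good, prevent_harm, be_fair,
# maintain_self). From pairs where an ethicist preferred action A over
# action B, a perceptron-style update learns weights that reproduce the
# judgments. The cases below are hypothetical.

DUTIES = ("do_good", "prevent_harm", "be_fair", "maintain_self")

def score(weights, action):
    return sum(w * a for w, a in zip(weights, action))

def learn(judged_pairs, epochs=100, lr=0.1):
    """judged_pairs: list of (preferred_action, rejected_action) vectors."""
    weights = [0.0] * len(DUTIES)
    for _ in range(epochs):
        for good, bad in judged_pairs:
            if score(weights, good) <= score(weights, bad):
                # Nudge weights toward the duties the preferred action serves.
                weights = [w + lr * (g - b)
                           for w, g, b in zip(weights, good, bad)]
    return weights

# E.g., reminding a patient beats recharging when a dose is due, even
# though recharging serves self-maintenance better.
cases = [((2, 2, 0, -1), (0, 0, 0, 2)),   # remind > recharge
         ((1, 0, 2, 0), (2, 0, -2, 0))]   # fair option > unfair one
print(dict(zip(DUTIES, learn(cases))))
```

And since the learned object is just a weight vector, nothing stops one robot from handing it to another - which is what makes the phronimos comparison so odd.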

The principles they come up with have a distinguished history they don't name. Ulpian's basis of Roman Law was: neminem laedere [harm no one], suum cuique tribuere [to each his due], honeste vivere [live virtuously/piously]. The 17th-century Natural Lawyers, whose thought forms the foundation of modern liberalism, condensed and amplified these into self-preservation and concern for others, from which the remaining duties were thought to flow. (Skeptics already then wondered if even putative concern for others didn't also flow from self-preservation.)

The authors describe the balancing of principles in a typical EthEl day: as circumstances change, the duties gain and lose urgency, so that a draining battery eventually makes the Duty to Maintain Herself dominant and sends her to recharge, while an approaching medication time lets Do Good override it and sends her off to remind the patient.
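One can picture the mechanics as a loop in which each duty's urgency drifts with circumstances and the most urgent one wins. A minimal sketch, with all the dynamics and numbers invented for illustration rather than taken from the authors' system:

```python
# Toy simulation of duty balancing over part of a day: each tick,
# urgencies drift with circumstances and EthEl performs whichever
# action serves the currently most urgent duty.

def choose(battery, minutes_until_dose):
    # Urgency of self-maintenance grows as the battery drains;
    # urgency of doing good spikes as a medication time arrives.
    maintain_self = (100 - battery) / 100
    do_good = 1.0 if minutes_until_dose <= 0 else 1.0 / (1 + minutes_until_dose)
    if do_good >= maintain_self:
        return "remind patient to take medication"
    return "go to charging station"

battery, dose_in = 90, 30
for minute in range(0, 60, 10):
    print(minute, choose(battery, dose_in - minute))
    battery -= 8  # battery drains as she patrols
```

The run starts with trips to the charger and flips to the reminder once the dose comes due - a mechanical shadow of what Aristotle would call perceiving the salient feature of the situation.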
They hope that EthEl will not only succeed in her work, but contribute to moral philosophy. They quote Daniel C. Dennett asserting that "AI makes philosophy honest." Perhaps they're right. Perhaps making a machine simulacrum of the human - ethics now, not intelligence - can bring us closer to understanding what makes us human (though the first discovery will surely be, as it was with intelligence in AI, that what we are trying to replicate is fiercely difficult to replicate). Perhaps it can even make us better at being human. Who knows, maybe sharing our humanity with machines will make our ethics less mechanical.