
The Andersons and the Ethical Robot

Submitted on November 12, 2010 – 3:52 am

For those of you who don’t know the Andersons in the roboethics world, please meet them via the video here.

Susan Anderson and Michael Anderson are a philosopher and a computer scientist (yes, they’re a couple) who have been working together for years to promote ethics through AI – they work in machine ethics, a broader field than roboethics. Susan Anderson is a professor at the University of Connecticut and Michael Anderson is an associate professor at the University of Hartford. Just by skimming the publications list on Michael Anderson’s website, you’ll see how active they are in the machine ethics domain: together they have published more than 20 papers since 2005.

Their work was recently featured in an article in Scientific American, and now there’s a wave of coverage of their research with the versatile humanoid robot Nao (Aldebaran Robotics), one of my favourite robots.

What does it do? Here’s Harry McBrien’s (the Hartford Science News Examiner reporter) description of their work:

By using information about specific ethical dilemmas supplied to the couple by ethicists, computers can effectively “learn” ethical principles in a process called machine learning. A toddler-sized robot (“Nao”) they have been using in their research has been programmed with an ethical principle that was discovered by a computer. This learned principle allows their robot to determine how often to remind people to take their medicine and when to notify an overseer, such as a doctor, when they don’t comply.
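The article doesn’t spell out how that learning step works, but to make the idea concrete, here is a rough, hypothetical sketch of how a “what should the robot do?” rule could be learned from ethicist-labelled cases. The features, labels and the use of a decision tree (via scikit-learn) are my own assumptions for illustration, not the Andersons’ actual method:

```python
# Hypothetical illustration only: learn a simple rule for when to wait, remind,
# or notify an overseer from a handful of ethicist-labelled cases. The features,
# labels, and choice of a decision tree are assumptions, not the Andersons' system.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each case: [hours_overdue, harm_if_missed (0-10), reminders_already_given]
cases = [
    [0, 2, 0],   # dose barely overdue, low harm if missed
    [1, 2, 1],
    [2, 8, 1],   # serious harm if missed, one reminder already ignored
    [6, 9, 3],   # many ignored reminders, high harm
    [4, 1, 2],   # low-harm medication, respect the refusal
]
# The ethicists' judgement for each case.
labels = ["wait", "remind", "remind", "notify_overseer", "wait"]

# Fit a shallow tree so the learned "principle" stays human-readable.
principle = DecisionTreeClassifier(max_depth=2).fit(cases, labels)

# Inspect the learned rule and apply it to a new situation.
print(export_text(principle, feature_names=["hours_overdue", "harm", "reminders"]))
print(principle.predict([[3, 9, 2]]))  # classify a new case (result depends on the fit)
```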

Reminding someone to take their medicine may seem relatively trivial, but the field of biomedical ethics has grown in relevance and importance since the 1960s. And robots are currently being designed to assist the elderly, so the Andersons’ research has very practical implications, the UConn report points out.

Susan says there are several prima facie duties the robot must weigh in their scenario: enabling the patient to receive potential benefits from taking the medicine, preventing harm to the patient that might result from not taking the medication, and respecting the person’s right of autonomy. These prima facie duties must be correctly balanced to help the robot decide when to remind the patient to take medication and whether to leave the person alone or to inform a caregiver, such as a doctor, if the person has refused to take the medicine.
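To see what “balancing” might look like in practice, here is a minimal, hypothetical sketch that weighs those three duties against each other for a few candidate actions. The satisfaction scores and weights are invented for illustration; in the Andersons’ actual work, the point is that such a trade-off is discovered by the computer rather than hand-tuned like this:

```python
# Hypothetical sketch (not the Andersons' learned principle): weigh prima facie
# duties to choose between doing nothing, reminding again, or notifying an
# overseer. All numeric values below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    benefit: float         # duty to provide the potential benefit of the medication
    nonmaleficence: float  # duty to prevent harm from missed doses
    autonomy: float        # duty to respect the patient's right to decide

# Illustrative duty-satisfaction scores in [-1, 1] for a patient who has already
# refused one reminder and whose medication prevents serious harm.
candidate_actions = [
    Action("do_nothing",      benefit=-0.5, nonmaleficence=-0.8, autonomy=+1.0),
    Action("remind_again",    benefit=+0.5, nonmaleficence=+0.5, autonomy=-0.3),
    Action("notify_overseer", benefit=+0.8, nonmaleficence=+1.0, autonomy=-0.7),
]

# Assumed weights expressing how the duties are balanced against each other;
# a learned ethical principle would, in effect, fix a trade-off like this.
WEIGHTS = {"benefit": 1.0, "nonmaleficence": 2.0, "autonomy": 1.0}

def score(action: Action) -> float:
    """Weighted sum of how well an action satisfies each prima facie duty."""
    return (WEIGHTS["benefit"] * action.benefit
            + WEIGHTS["nonmaleficence"] * action.nonmaleficence
            + WEIGHTS["autonomy"] * action.autonomy)

best = max(candidate_actions, key=score)
print(f"Chosen action: {best.name} (score {score(best):.2f})")
```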

Michael says that although their research is in its early stages, it’s important to think about ethics alongside developing artificial intelligence. Above all, he and Susan want to refute the science fiction portrayal of robots harming human beings. “We should think about the things that robots could do for us if they had ethics inside them,” he says. “We’d allow them to do more things for us, and we’d trust them more.”

http://www.examiner.com/science-news-in-hartford/uconn-and-university-of-hartford-research-couple-program-an-ethical-robot