

Two new organizations on responsible AI and robotics. Do we need them?

December 15, 2015 – 9:27 am

Last week was a unique one for the field of roboethics. In the span of a single week, two different roboethics-related organizations announced their existence to the world.
Thursday December 10th was the launch of the Foundation …



Fostering a culture of shared values: designing with ethics in mind

January 15, 2015 – 5:54 am


From time to time, people ask me what I think is the best way to ensure that all designers consider roboethics issues in designing their next awesome robotic product.

For example, does Google have a systematic process in place to consider the implications of its self-driving vehicle and related design decisions before its designers start implementing different features? Is it even possible to realize a future where all manufactured robotic products meet a kind of ethical standard on top of obvious and existing safety standards?

With open source software and hardware accelerating the pace of engineering and design, and ever-younger generations of smartphone app developers making a boom in robot app development the obvious next trend, it may seem impossible to ensure that everyone designs robots with ethics in mind.

But I’m an optimist about this, and believe there are good ways to address the problem. One way is to discuss the issues of concern as a community, so that the community can agree upon a set of values it chooses to share and foster.

For example, there has been much discussion in the press lately about the dangers of AI. Some prominent figures, including Elon Musk and Stephen Hawking, have openly voiced their concerns, while others, such as Rodney Brooks and Alan Winfield, have presented counter-arguments explaining why worrying unnecessarily about such dangers isn’t helpful. There’s a 10-minute BBC debate on the topic if you want a quick overview. Although there are points of disagreement on this issue, both sides seem to share the view that AI can and should be developed and used to make a positive impact on our society.


A few days ago, some of the individuals mentioned above, along with many others, wrote and signed an open letter hosted by the Future of Life Institute, which reads:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

With a research priorities document (which lists law and ethics research as a priority by the way) included as part of the letter, the authors end the letter by saying:

In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.

Similarly, there have been efforts by members of the roboethics community to suggest a model for fostering a culture of shared values. For example, Laurel D. Riek and Don Howard presented their paper “A Code of Ethics for Human-Robot Interaction Profession” at the We Robot 2014 conference last year, which included a draft code of ethics for human-robot interaction (HRI) practitioners. I had the pleasure of talking with Laurel D. Riek about it in more depth for the Robots Podcast (take a listen below if you’ve missed it).

Another example is the International Committee for Robot Arms Control (ICRAC) and its Scientists’ Call To Ban Autonomous Lethal Robots, which states:

As Computer Scientists, Engineers, Artificial Intelligence experts, Roboticists and professionals from related disciplines, we call for a ban on the development and deployment of weapon systems in which the decision to apply violent force is made autonomously.

Given the limitations and unknown future risks of autonomous robot weapons technology, we call for a prohibition on their development and deployment. Decisions about the application of violent force must not be delegated to machines.

This movement by ICRAC, along with the work of member organizations of the Campaign to Stop Killer Robots, has in part led Clearpath Robotics, a robotics company, to release an open letter in support of the campaign.

Generating a code of ethics and explicitly stating shared values is something many professional organizations have embraced under the familiar umbrella of professional ethics. However, as Laurel D. Riek outlines in the podcast interview, there are many interesting ethical challenges at hand in the field of HRI that are not covered by the existing codes of ethics of professional organizations. With a variety of robotic products on the brink of becoming available outside research labs and manufacturing facilities, it is important to discuss the ethical issues unique to HRI and to build consensus on the shared values of the HRI community.

Hence, it’s perhaps timely for more HRI practitioners to get engaged in such discussions. One such venue is a workshop that Laurel D. Riek, Woodrow Hartzog, Don Howard, Ryan Calo, and I are organizing as part of the upcoming HRI’15 conference (March 2nd) in Portland, Oregon, called The Emerging Policy and Ethics of Human Robot Interaction.

Will such initiatives fully address the question we started with (i.e., “Can we ensure that all designers consider roboethics issues in designing their next awesome robotic product?”)? Probably not. But such initiatives will provide a foundation, and perhaps momentum, for other initiatives to build upon (e.g., the formation of regulatory bodies), so that we can tackle the problem from many different angles.

Clearpath, Killer Robots, and Why their Statement Matters

August 21, 2014 – 3:06 pm

Last week the Waterloo-based Clearpath publicly pledged not to develop lethal autonomous weapons, otherwise known as “killer robots”. In an open letter to the public, Clearpath’s CTO Ryan Gariepy wrote to support the Campaign to Stop Killer Robots, an …

Robots Podcast: Avner Levin on Privacy, Google, and big deals

March 11, 2014 – 4:48 pm

What does it mean to have giants like Google, Apple and Amazon investing in robotics? Since last December, Google alone has acquired a handful of companies in robotics, home automation and artificial intelligence. This can be pretty …

What does it mean to have giants like Google, Apple and Amazon investing in robotics?

February 12, 2014 – 11:14 pm

It’s exciting for the robotics community that the giants (Google, Apple, and Amazon) are actively investing in robotics.
Indeed, my initial response to hearing about Google’s first seven of a series of acquisitions of robotics-related companies …

What were the top stories in robotics from 2013?

December 11, 2013 – 6:50 am

2013 was a year filled with talk of drones.
I’m not saying this just because I’m biased by the recent news reporting on how large companies (Amazon, DHL, and UPS to be exact) are exploring the use of drones …

Robots Podcast: Julie Carpenter on Working with EOD Personnel

November 29, 2013 – 6:28 am

Following my Robots Podcast interview with Peter Asaro a few months ago, I had the opportunity to interview another person on a related topic: robots who work with EOD personnel. I spoke with Julie Carpenter, …

Do robots need heads?

August 15, 2013 – 7:00 am

Are you curious about what your future robotic assistants will look like?
My bet is that by the time you buy your very first robotic butler, it will have a friendly head on it that moves. …

Robots Podcast: Peter Asaro on Autonomous Weapons

May 17, 2013 – 6:23 am

Earlier this year, there was very exciting progress on the drone-discussions front. On behalf of Robots Podcast, I spoke with Peter Asaro from The New School in New York City about autonomous weapons systems. Peter spoke about …

The Ethical Robot License – Tackling open robotics liability headaches

May 6, 2013 – 11:03 pm

At We Robot 2013, Diana Cooper, a JD candidate at the University of Ottawa, presented her attempt to tackle the open-source liability headache by proposing a new license called the Ethical Robot License (ERL). In her paper, A Licensing Approach to Regulation of Open Robotics, Cooper presents the ERL as “a licensing approach to allocate liability between manufacturers and users and promote ethical and non-harmful use of open robots”.

Video interviews from Robot Block Party 2013

April 18, 2013 – 1:49 am

Last week, Robot Block Party 2013 took place right after the We Robot conference.
Since I had an extra day to spend at Stanford University after the conference, I couldn’t miss out on the event.
The …

New Book: “Robot Futures” by Illah Reza Nourbakhsh

March 26, 2013 – 1:36 pm

Robot Futures is a new book written by Dr. Illah Nourbakhsh, a professor at Carnegie Mellon University who has been teaching roboethics at the university for many years. According to Dr. Noel Sharkey, this book is “[a]n exhilarating dash into the future of robotics from a scholar with the enthusiasm of a bag of monkeys. It is gripping from the start with little sci-fi stories in each chapter punching home points backed up forcefully by factual reality. This is an entertaining tour de force that will appeal to anyone with an interest in robots.”