December 15, 2015 – 9:27 am · AJung · Roboethics (Robot Ethics) Info Database
Last week was a unique week in the field of roboethics. In the span of a week, two different roboethics-related organizations announced their existence to the world.
Thursday December 10th was the launch of the Foundation …
January 15, 2015 – 5:54 am · AJung · Roboethics (Robot Ethics) Info Database
From time to time, people ask me what I think is the best way to ensure that all designers consider roboethics issues in designing their next awesome robotic product.
For example, does Google have a systematic process in place to consider the implications of its self-driving vehicles and related design decisions before designers start implementing different features? Is it even possible to realize a future where all manufactured robotic products meet a kind of ethical standard on top of obvious and existing safety standards?
With open source software and hardware accelerating the landscape of engineering and design, and with an ever-younger generation of smartphone app developers making a boom of robot app developers the obvious next trend, it may seem impossible to ensure that everyone designs robots with ethics in mind.
But I’m an optimist about this, and I believe there are good ways to address the problem. One way is to discuss the issues of concern as a community, so that the community can agree upon a set of values it chooses to share and foster.
For example, there has been much discussion in the press lately about the dangers of AI. Some prominent figures, including Elon Musk and Stephen Hawking, have openly voiced their concerns, while others, such as Rodney Brooks and Alan Winfield, have presented counter-arguments explaining why unnecessarily worrying about the dangers isn’t helpful. There’s a 10-minute BBC debate on this topic if you want a quick overview. Although there are points of disagreement on this issue, a point of view shared by both sides seems to be that AI can and needs to be developed and used to make a positive impact on our society.
There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
With a research priorities document (which, by the way, lists law and ethics research as a priority) included as part of the letter, the authors end the letter by saying:
In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.
As Computer Scientists, Engineers, Artificial Intelligence experts, Roboticists and professionals from related disciplines, we call for a ban on the development and deployment of weapon systems in which the decision to apply violent force is made autonomously.
Given the limitations and unknown future risks of autonomous robot weapons technology, we call for a prohibition on their development and deployment. Decisions about the application of violent force must not be delegated to machines.
Generating a code of ethics and explicitly stating shared values is something many professional organizations have embraced under the familiar umbrella term of professional ethics. However, as Laurel D. Riek outlines in the podcast interview, there are many interesting ethical challenges at hand in the field of HRI that are not covered by the existing codes of ethics of professional organizations. With a variety of robotic products on the brink of becoming available outside research labs and manufacturing facilities, it is important to discuss the ethical issues unique to HRI and to build a consensus on the shared values of the HRI community.
Will such initiatives fully address the question we started with? (i.e., “Can we ensure that all designers consider roboethics issues in designing their next awesome robotic product?”) Probably not. But such initiatives will provide a foundation, and perhaps momentum, for other initiatives to build upon (e.g., for regulatory bodies to form), so that we can tackle the problem from many different angles.
August 21, 2014 – 3:06 pm · AJung · Roboethics (Robot Ethics) Info Database
Last week the Waterloo-based Clearpath publicly pledged not to develop lethal autonomous weapons, otherwise known as “killer robots”. In an open letter to the public, Clearpath’s CTO Ryan Gariepy wrote to support the Campaign to Stop Killer Robots, an …
March 11, 2014 – 4:48 pm · AJung · Roboethics (Robot Ethics) Info Database
What does it mean to have giants like Google, Apple and Amazon investing in robotics? Since last December, Google alone has acquired a handful of companies in robotics, home automation and artificial intelligence. This can be pretty …
February 12, 2014 – 11:14 pm · AJung · Roboethics (Robot Ethics) Info Database
It’s exciting for the robotics community that the giants (Google, Apple, and Amazon) are actively investing in robotics.
Indeed, my initial response to hearing about Google’s first seven of a series of acquisitions of robotics-related companies …
December 11, 2013 – 6:50 am · AJung · Roboethics (Robot Ethics) Info Database
2013 was a year filled with talk of drones.
I’m not saying this just because I’m biased by the recent news reporting on how large companies (Amazon, DHL, and UPS to be exact) are exploring the use of drones …
November 29, 2013 – 6:28 am · AJung · Roboethics (Robot Ethics) Info Database
Following my Robots Podcast interview with Peter Asaro a few months ago, I had the opportunity to interview another person on a related topic: robots who work with EOD personnel. I spoke with Julie Carpenter, …
May 17, 2013 – 6:23 am · AJung · Roboethics (Robot Ethics) Info Database
Earlier this year, there was very exciting progress on the drone-discussions front. On behalf of Robots Podcast, I spoke with Peter Asaro from The New School in New York City about autonomous weapons systems. Peter spoke about …
May 6, 2013 – 11:03 pm · AJung · Roboethics (Robot Ethics) Info Database
At We Robot 2013 Diana Cooper, a JD Candidate at the University of Ottawa, presented her attempt to tackle the open source headache by proposing a new license called the Ethical Robot License (ERL). In her paper, A Licensing Approach to Regulation of Open Robotics, Cooper presents ERL as “a licensing approach to allocate liability between manufacturers and users and promote ethical and non-harmful use of open robots”.
April 18, 2013 – 1:49 am · AJung · Roboethics (Robot Ethics) Info Database
Last week, Robot Block Party 2013 took place right after the We Robot conference.
Since I had an extra day to spend at Stanford University after the conference, I couldn’t miss out on the event.
March 26, 2013 – 1:36 pm · AJung · Roboethics (Robot Ethics) Info Database
Robot Futures is a new book written by Dr. Illah Nourbakhsh, a professor at Carnegie Mellon University who has been teaching roboethics at the university for many years. According to Dr. Noel Sharkey, this book is “[a]n exhilarating dash into the future of robotics from a scholar with the enthusiasm of a bag of monkeys. It is gripping from the start with little sci-fi stories in each chapter punching home points backed up forcefully by factual reality. This is an entertaining tour de force that will appeal to anyone with an interest in robots.”