can we make robots ethical?

Our major concern for the future of robots and AI is that robots will become crappy because their creators, us fickle humans, are crappy. Like accidentally swearing in front of your one-year-old and their first word being #$%&, we worry about passing our least favourable qualities on to our creations. The last thing anyone wants is a robot with anxiety or a god complex.

For robots to exist alongside us, they need to be safe for humans rather than a threat, and to have the qualities that won't bring about the robopocalypse. We need to give them ethics.

“What happens when these robots are forced into making ethical decisions…
…a robot left in an impossible double bind; how could it possibly equip an automated intelligence to cope with this type of complexity?”
— “Can Robots Be Ethical?”
Waleed Aly and Scott Stephens, ABC Radio

Ethics is essentially dividing everything in the world into two categories: things that are right and things that are wrong. This seems fairly simple to translate into robots. Killing human = Bad, giving human coffee = Good. However, we also know ethics isn’t always that black and white. What we view as acceptable varies from person to person, and varies even more across cultures. Ethics involves a constant dialogue, between one’s self and others, as we are forced to sort the world into these categories. Do I save a family from a burning building I have just walked past? Or continue with my mission to buy food? What would a robot do in this situation? Can it override its task once it is faced with an ethical decision? Should a driverless car be programmed to swerve and hit one person rather than two? Is that the ethical choice? (Newman, 2016)
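
To see where the black-and-white version breaks down, here is a minimal sketch of ethics as a lookup table. Everything in it (the action names, the ratings) is invented for illustration; no real robot works like this:

```python
# A toy, hypothetical rule table for a "binary" robot ethics module.
# Every action maps to good or bad -- until a situation needs context.

RULES = {
    "give_human_coffee": "good",
    "kill_human": "bad",
    "continue_errand": "good",
    "enter_burning_building": "bad",  # dangerous, so rated bad in advance
}

def judge(action: str) -> str:
    """Look the action up in the fixed rule table."""
    return RULES.get(action, "unknown")

# The trouble: real situations pit rules against each other.
# Walking past a house fire, "continue_errand" is rated good and
# "enter_burning_building" bad -- yet most of us would say the
# ethical act is to attempt a rescue. A flat table can't express that.
print(judge("continue_errand"))          # good
print(judge("enter_burning_building"))   # bad -- but is it, here?
```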

An article published in Nature argued that logic is how we debate and decide on ethics, and thus a logical program could be how we give robots ethics (Deng, 2015). Logic is built on intelligence, so how much intelligence does a robot need for logic, and for ethics? UK roboticist Alan Winfield put a robot to the test of saving a ‘human’ from falling off a cliff (real humans and real cliffs were not used in this experiment). The test subject saved its human every time with minimal logic, programmed with a version of Asimov’s first law: keep itself clear of the cliff while steering the other ‘human’ robots away from the edge. When challenged to save two at the same time, it would usually save at least one, but would sometimes go into a “dither” and save neither. The dither meant it needed more choices and decisions in order to act: if one human was a child and one an adult, which do I save first? Even humans in this situation wouldn’t know what to pick.
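
A crude way to picture the dither, purely illustrative and nothing like Winfield’s actual consequence engine: a robot that re-plans every step, always chasing whichever human is currently in the most danger, can end up oscillating between two symmetric targets until both are lost.

```python
# Hypothetical 1-D "dither" simulation. Two humans drift toward cliff
# edges at +/-10 while a rescue robot re-plans every step, always
# chasing whichever human is in the most danger right now.

def run(steps: int = 20) -> None:
    robot = 0.0
    humans = [8.0, -8.0]            # symmetric starting positions
    drift, speed, edge = 0.3, 1.0, 10.0

    for t in range(steps):
        # Margin = time until the human falls, minus time for the
        # robot to reach them. Smaller margin = more urgent.
        def margin(h: float) -> float:
            return (edge - abs(h)) / drift - abs(h - robot) / speed

        target = min(humans, key=margin)
        robot += speed if target > robot else -speed
        humans = [h + (drift if h > 0 else -drift) for h in humans]
        print(f"t={t:2d}  robot={robot:+.1f}  chasing={target:+.1f}")

        if any(abs(h) >= edge for h in humans):
            print("A human fell while the robot dithered between the two.")
            return

run()
```

As the robot steps toward one human, that human’s margin improves and the other’s worsens, so each re-plan flips the target and the robot shuttles back and forth near the middle.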

Michael Fisher, a UK computer scientist, argues that an ethically bound system governing how robots function is paramount to reassuring a public that is “scared of robots when they are not sure what it is doing, or will do” in a given situation (Newman, 2016). However, there will always be a risk of harm to those around the robot, like the driverless car that must swerve to avoid hitting someone and ends up in the path of another vehicle. This kind of robot error is no different from human error when forced into a double bind.
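
One hypothetical flavour of an “ethically bound” system is a governor layer that checks every proposed action against hard constraints before it executes. The sketch below invents all of its own names and rules; Fisher’s real work formally verifies agent programs and is far more involved. Note that even a perfect governor can only refuse a double bind, not resolve it:

```python
# Hypothetical "ethical governor": every action the planner proposes
# is checked against hard constraints before execution. All names and
# rules here are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    harms_human: bool
    endangers_self: bool

def permitted(action: Action) -> bool:
    """Asimov-flavoured checks, in priority order."""
    if action.harms_human:      # 1. never harm a human
        return False
    if action.endangers_self:   # 2. avoid self-destruction
        return False
    return True

def govern(proposed: list[Action]) -> Optional[Action]:
    """Return the first proposed action that passes every check."""
    for action in proposed:
        if permitted(action):
            return action
    return None  # no safe option: the double bind again

plan = [
    Action("swerve_left_into_pedestrian", harms_human=True, endangers_self=False),
    Action("swerve_right_into_barrier", harms_human=False, endangers_self=True),
]
print(govern(plan))  # None -- the governor refuses, but cannot resolve the dilemma
```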

Is having ethical robots the key to harmonious interactions between robots and humans? Having some degree of ethics instilled in robots would mean they need the autonomy to make decisions on their own as situations arise. Would programmed ethics clash with Asimov’s Laws when a robot cannot save a human without endangering itself? What if the robot chooses wrongly, and does not act ethically? As Waleed Aly and Scott Stephens discussed, ethics are implied as rules, to be used in certain situations (2016).

How far would robots go in their logic of ethical choices to maximise happiness for all humans? Harvesting all the organs of one man to save five? Robots reaching the Singularity, and truly surpassing us, is a moment much discussed in media and pop culture. What would it then mean for ethics, without the humans around to define it? People naturally follow social conventions, an element robots will always struggle to understand. The balance of ethics and morality, of self-preservation and acting selflessly, is perilous and bound up in the human condition and how we act in our world (Caplan, 2015). Is that balance something that can be translated into code, and could robots ever follow in our footsteps?

[Image: The Descartes System, via Philosophy Now]

References & Further Viewing

Deng, B. (2015) “Machine ethics: The robot’s dilemma”, Nature. http://www.nature.com/news/machine-ethics-the-robot-s-dilemma-1.17881

“Can Robots Be Ethical?”, Philosophy Now. https://philosophynow.org/issues/110/Can_Robots_Be_Ethical

“Can we trust robots to make moral decisions?”

“You say morals, I say ethics – what’s the difference?”, The Conversation. http://theconversation.com/you-say-morals-i-say-ethics-whats-the-difference-30913

Aly, W. and Stephens, S. (2016) “Can Robots Be Ethical?”, ABC Radio. https://radio.abc.net.au/programitem/pglxVLWkb6?play=true

3 thoughts on “can we make robots ethical?”

  1. Amazing blog post! Your use of sources and readings puts me to shame, but there’s one perspective you’re missing – what happens when AI is intelligent enough to develop its own code of ethics?

    http://www.iflscience.com/rise-machines-robot-demonstrates-self-awareness-solving-logic-puzzle

    ^ That is a link to one of the most horrifying videos I’ve seen on the internet. It’s the very same kind of ‘ethical’ robot as in your embedded video, except the three in my video are posed something of a trick question – with the logical answer actually being wrong.

    What’s horrifying is that the intelligence of the AI is great enough to actually see past its original programming and the user’s request, resulting in the AI retrospectively being able to answer the question correctly, after it recognises the process of thought it had been falsely led into.

    I guess the point I’m trying to make is: how many people will die in driverless car accidents if the car can place value on its own existence?


  2. Awesome blog post! I think you make a really great point in saying our biggest fear is robots becoming crappy because us humans are inherently crappy. This fear of robots being evil and taking over the world is derived from a fear of ourselves. There are evil humans out there, and as technology advances, I feel our biggest threat will be these evil humans creating evil robots. Making robots ethical is such a tricky task because even humans don’t know what to do in some situations. How can we program a robot to do the ‘right’ thing when we sometimes don’t even know what the right thing is?


  3. I think ethics in robots will be what decides whether or not we integrate them into daily life, on the level of something like iRobot (more the first 5 minutes, when no one is getting killed haha). I remember Chris talking about how every piece of software has ethics programmed into it, through the programmers themselves. I don’t think this is the solution for robots because, like you said, everyone has a different idea of right and wrong, so who gets to decide what gets programmed in?
    I would hope that we could depend on the growth and intuition of robots to decide what is right and wrong, but then again, that could lead to a ‘Chappie’ situation, which would not be ideal!

