ASIMOV’S THREE LAWS OF ROBOTICS: ARE THEY IMPLEMENTABLE?

As you may or may not know, Isaac Asimov was a key figure in the history of artificial intelligence, famous not only for his science-fiction literature but also for his creation of the Three Laws of Robotics. Therefore, as part of my research into artificial intelligence, I thought this would be a great place to start. So, for this week’s blog post, I have decided to conduct a literature review of Lee McCauley’s journal article “The Frankenstein Complex and Asimov’s Three Laws” in order to form a strong foundation for further research, which will then be presented through a series of podcasts.

As I started reading McCauley’s article, I immediately thought that it was far too one-sided. It is written in a way that completely rules out the possibility of the Three Laws ever being implemented, should sentient beings ever become part of society in the future. In that regard, a counter-argument embracing the possibility of those laws being implemented would have made the article far more engaging. Having said that, McCauley’s arguments are very well structured and incredibly well researched. McCauley makes an interesting point when he states that “we are asking that our future robots be more than human—they must be omniscient” (2007, p. 11). The idea that we expect future robots to make the most logical decisions while still maintaining a sense of humanity is a fascinating one, which I will explore to a greater extent in my digital artifact. This point is raised in response to a statement made by David Bourne, a robotics scientist in California, who is quoted in the article.
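To make McCauley’s “omniscience” point concrete, here is a deliberately naive sketch (all function and variable names are my own invention, not from the article) of the Three Laws written as a prioritized filter over candidate actions. The ranking logic is trivial; the hard part is the harm-prediction “oracles”, which would have to foresee every downstream consequence of an action — exactly the omniscience McCauley describes.

```python
# A deliberately naive sketch of Asimov's Three Laws as a prioritized
# action filter. All names are invented for illustration; the stub
# prediction functions are where the "omniscience" problem lives.

def permitted(action, world, orders):
    """Return True if `action` is allowed under the Three Laws."""
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm.
    if predicts_harm_to_human(action, world):
        return False
    # Second Law: obey human orders, except where that would
    # conflict with the First Law (already filtered above).
    if conflicts_with_orders(action, orders):
        return False
    # Third Law: protect its own existence, so long as that does
    # not conflict with the First or Second Law.
    if predicts_harm_to_self(action, world):
        return False
    return True

# Stub "oracles": real versions would need to predict every
# consequence of an action -- the omniscience requirement.
def predicts_harm_to_human(action, world):
    return action == "swerve_into_pedestrian"

def conflicts_with_orders(action, orders):
    return action in orders.get("forbidden", set())

def predicts_harm_to_self(action, world):
    return action == "drive_off_cliff"

actions = ["brake", "swerve_into_pedestrian", "drive_off_cliff"]
allowed = [a for a in actions
           if permitted(a, world={}, orders={"forbidden": set()})]
print(allowed)  # ['brake']
```

Note that the toy only works because the stubs already “know” which actions cause harm; replacing them with genuine prediction over an open-ended world is the unsolved part.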

Additionally, one of the most interesting aspects of McCauley’s article is the age-old idea of “The Frankenstein Complex”, which refers to the fear surrounding man attempting to play god by creating life, only to then vilify his creation. This idea is, for me, the most interesting aspect of artificial intelligence, and it lies in the background of the film “Ex Machina” and countless other science-fiction films.

In conclusion, Lee McCauley’s article on Asimov’s laws of robotics and “the Frankenstein complex” was a great place for me to start my research into artificial intelligence, despite the article’s shortcomings. I look forward to updating you on my research into this fascinating area of cyberculture.

References

McCauley, L. 2007, “The Frankenstein Complex and Asimov’s Three Laws”, AAAI Workshop – Technical Report, pp. 9–14.


5 thoughts on “ASIMOV’S THREE LAWS OF ROBOTICS: ARE THEY IMPLEMENTABLE?”

  1. I’ve read the article and you are right, it was a very interesting read, but it was rather one-note. In my opinion, Asimov’s Laws of Robotics are incredibly one-sided. They detail exactly what robots can and can’t do, but not what humans can do to the robots. This is the basis of my next research post, because I thought: robots cannot break these laws, or they will be punished or deemed errors/flaws. But humans harm, injure and do what it takes to protect themselves at the expense of the robot. When we reach that point, or if we do, we must decide whether robots can be a part of our society by asking and commanding them to be human. Be like us or be excluded by us, essentially. I’d like to hear more from you about the implementation of the laws and the future society of Robots And Us; I think your topic and research is on the right track.


  2. I think Asimov’s Three Laws of Robotics are, as you said, a little one-sided, given that robots may one day be sentient beings, which would entitle them to civil rights. Being a slave really doesn’t fuel any willingness to help humans — and it’s here that I actually understand why the army of robots in I, Robot revolts against the human population. If their brains are advanced enough, I’m sure they can comprehend emotion. Maybe one of those evil robots actually just wants to be cooked dinner for once? I guess we should theoretically perceive robots as beings with emotional capacity.


  3. It’s interesting, the point you bring up about needing these AIs to reflect humanity while also needing them to make the most logical decision. These types of moral and ethical decisions are, I think, so hard to program, and the answers would reflect the morals of the programmers themselves. I think this is one of those cases where we won’t know how an AI will react until the event happens, or until we can simulate the event and observe how it reacts.

    The best example that springs to mind is the current argument surrounding the AIs of cars. If a crash is inevitable, does the car swerve into a pedestrian on the side-walk, or does it kill the three people in the car in front? Great article about that here: https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/

    This is definitely one of those cases where I’m so glad I’m not the one programming these AIs, and can sit back and see how the world deals with the issue.


  4. McCauley’s argument that “we are asking that our future robots be more than human—they must be omniscient” is entirely new and interesting to me. I hadn’t considered this perspective with regard to A.I., and I feel it has definitely helped me understand A.I. and the surrounding views and concerns. The fact that we are expecting a perfect, all-knowing robot that still somewhat maintains humanity is, as you said, fascinating, but perhaps it is also the reason for society’s fears of A.I., and hence the reason for the sort of rejection of A.I. we see in Western societies, which is portrayed in articles such as this: http://www.bbc.com/news/technology-32334568


  5. I find “The Frankenstein Complex” really interesting, because we see people saying we need artificial intelligence to improve humanity, but there haven’t really been any limits set as to how far we can go. When will we start regretting these decisions? I recently watched a video on Facebook about a robot named Sophia, said to be one of the most advanced to date, created by Dr. David Hanson. Sophia has cameras in her eyes to remember and mimic human facial expressions and interactions, so as to advance and improve over time. In the video, Dr. Hanson makes conversation with Sophia, who makes strange but (almost) human-like expressions. He then “jokingly” asks her if she wants to destroy humans, to which she replies, “Okay, I will destroy humans”. They laugh it off as a joke, but it’s very creepy nonetheless. This article (http://www.sott.net/article/314732-Rise-of-the-machines-Robot-jokes-that-she-will-kill-humans) explores the video in more detail and talks about cybernetic revolt, which basically says robots will take over and destroy humanity. This could possibly be another theory for you to look into.

