Robots are starting to break the law and nobody knows what to do about it
More than 10 years ago I wrote about this challenge in my doctoral research. At the time the scenario was not yet technically feasible, but as the story at the link below shows, robots can now commit crimes (or at least perform actions that we would consider criminal).
http://fusion.net/story/35883/robots-are-starting-to-break-the-law-and-nobody-knows-what-to-do-about-it/?utm_source=digg&utm_medium=email
The issue the referenced article doesn't consider is whether the acts are actually criminal. Did the robot have criminal intent, or was it merely randomized action on the part of a non-conscious machine? While we may consider these actions criminal, I doubt the robot had any sense of the difference between the 'criminal' purchases and the other randomized, 'non-criminal' purchases.
Still, it highlights an interesting ethical question: what do we do when criminal acts are carried out by non-sentient, self-directed machines or programs? Perhaps, at best, we could deactivate the machine and audit its code, or re-program it with more sophisticated code that takes our sense of criminal activity into account. In more serious cases we could ask whether the creators of the machine or program had criminal intent, and pursue them for their intent and actions (enacted by proxy through the machine or program).
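To make the re-programming idea a little more concrete, here is a minimal, purely hypothetical sketch of what such a safeguard might look like for a random-purchasing bot like the one in the article. The category names and the is_purchase_allowed() helper are my own illustrative assumptions, not part of any real system.

```python
import random

# Categories a human operator has flagged as legally off-limits (assumed labels).
PROHIBITED_CATEGORIES = {"narcotics", "counterfeit_documents", "weapons"}

def is_purchase_allowed(item):
    """Reject any item whose category appears on the prohibited list."""
    return item.get("category") not in PROHIBITED_CATEGORIES

def random_purchase(catalogue):
    """Pick a random item, but only buy it if it passes the legality check."""
    item = random.choice(catalogue)
    if is_purchase_allowed(item):
        return item  # proceed with the purchase
    return None      # skip the purchase and leave it for human review

# Example usage with a toy catalogue.
catalogue = [
    {"name": "trainers", "category": "clothing"},
    {"name": "mystery pills", "category": "narcotics"},
]
print(random_purchase(catalogue))
```

Of course, a blocklist like this only encodes our sense of criminality in the crudest way; the harder question raised above is whether the machine itself can ever be said to hold intent.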
You can read more of my thoughts and research on these issues here (although my focus later shifted from Artificial Intelligence to Neuroscience):
http://www.dionforster.com/blog/tag/neuroscience
This article in Sci-WEB used some of my Artificial Intelligence research:
http://www.dionforster.com/blog/2010/4/14/sci-fi-meets-society-my-artificial-intelligence-research-use.html