Friday, September 27, 2019
Ethics for a society of humans and automatons
Forester and Morrison strongly suggest that "computer systems have often proved to be insecure, unreliable, and unpredictable and that society has yet to come to terms with the consequences… society has become newly vulnerable to human misuse of computers in the form of computer crime, software theft, hacking, the creation of viruses, invasion of privacy, and so on" (ix). The ethical dilemmas, however, do not arise simply from the fact that there are risks involved with automatons. Beyond risk, when automatons become deeply entwined in the daily lives of human beings, we must deal with far more complex issues that ethically challenge the governance of such a world. Allen, Wallach and Smit are of the view that "we can't just sit back and hope things will turn out for the best. We already have semiautonomous robots and software agents that violate ethical standards as a matter of course. A search engine, for example, might collect data that's legally considered to be private, unbeknownst to the user who initiated the query" (12).

Three Laws of Robotics

When we consider ethics in relation to automatons, it is necessary to look at Isaac Asimov's three laws of robotics, delineated in his famous 1942 short story 'Runaround'. ... It means that if a robot seeks to protect itself in a given situation, it must not do so at the expense of harm to human beings. The ethical laws pertaining to moving machines are considered to be mechanical, whereas ethics is by definition anthropocentric: it involves ruminations on living a life that is worth living. Asimov's three laws are an important starting point in understanding machine ethics:

"1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second law" (as quoted in Anderson, 477-78).

These laws as originally proposed by Asimov imagine automatons as slaves of human beings. Moreover, the robots are not even considered capable of existing relatively independently of human beings. Asimov has "provided an explanation for why humans feel the need to treat intelligent robots as slaves, an explanation that shows a weakness in human beings that makes it difficult for them to be ethical paragons. Because of this weakness, it seems likely that machines like Andrew could be more ethical than most human beings," argues Anderson (478). However, in the present world, the complex interactions taking place between humans and automatons take us beyond the purview of these three laws for the ethical governance of a mechanized world.

Altering the Ethical Man

Albert Einstein put forward the question "Did God have any choice?" as the big question faced by humanity. In a society of automata, human beings are faced with another question: did human beings have any choice?