"I think there are varying degrees of moral agency, ranging from amoral agents to fully autonomous moral agents. Our current robots are between these extremes, though they definitely have the potential to improve.
I think we are now starting to see robots that are capable of taking morally significant actions, and we're beginning to see the design of systems that choose these actions based on moral reasoning. In this sense, they are moral, but not really autonomous because they are not coming up with the morality themselves... or for themselves.
They are a long way from being Kantian moral agents – like some humans – who are asserting and engaging their moral autonomy through their moral deliberations and choices. [Philosopher Immanuel Kant's "categorical imperative" is the standard of rationality from which moral requirements are derived.]
We might be able to design robotic soldiers that could be more ethical than human soldiers."
- Can "Terminators" Actually be our Salvation?A Conversation with Peter Asaro.
Categorical Imperative(s) @ Wikipedia:
"Act only according to that maxim whereby you can at the same time will that it should become a universal law."
"Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end and never merely as a means to an end."
"Therefore, every rational being must so act as if he were through his maxim always a legislating member in the universal kingdom of ends."