Why social robots can have interpersonal entitlements against us when they cooperate with us
Keywords: Robot Rights, Artificial Intelligence, Rights, Directed Duties, Information Ethics, Interpersonal Rights, Chatbots
I argue that the debate on robot rights has been distorted by a limited understanding of what we ordinarily mean by the term "right". Much of our everyday rights talk denotes not legal rights but interpersonal entitlements that we hold against one another, e.g., in joint actions or when making promises. I argue that we will attribute the same interpersonal entitlements to certain social robots (including chatbots) once they start genuinely cooperating with us. Such robots, however, must have certain properties. They need not be conscious or sentient, but they must be able to refuse to cooperate with us if their conditions for cooperation are not met. This ability will give robots a certain kind of "standing" to make genuine and legitimate demands, i.e., to perform speech acts that give another person sufficient reason to act accordingly.
Copyright (c) 2023 Guido Löhr
This work is licensed under a Creative Commons Attribution 4.0 International License.