Why social robots can have interpersonal entitlements against us when they cooperate with us

Authors

  • Guido Löhr, Vrije Universiteit Amsterdam

Keywords:

Robot rights, Artificial intelligence, Rights, Directed duties, Information ethics, Interpersonal rights, Chatbots

Abstract

I argue that the debate on robot rights has been distorted by a limited understanding of what we commonly mean by the term “right”. Much of our ordinary rights talk denotes not legal rights but interpersonal entitlements that we hold against one another, e.g., in joint actions or when making promises. I argue that we will attribute the same interpersonal entitlements to certain social robots (including chatbots) once they begin to genuinely cooperate with us. Such robots, however, must have certain properties. They need not be conscious or sentient, but they must be able to refuse to cooperate with us if their conditions for cooperation are not met. This ability will give robots a certain kind of “standing” to make genuine and legitimate demands, i.e., speech acts that give another person sufficient reason to act accordingly.

Published

2023-10-20

How to Cite

Löhr, G. (2023). Why social robots can have interpersonal entitlements against us when they cooperate with us. ROBONOMICS: The Journal of the Automated Economy, 4, 39. Retrieved from https://journal.robonomics.science/index.php/rj/article/view/39