Imagine yourself stuck in traffic as you drive out of the city for the weekend. Ahead of you, someone is trying to merge from a side road. Will you stop and let them in, or push forward, hoping that someone else will let them in behind you? Whatever you're thinking, would you do the same if it were a self-driving car with no passengers?
Keywords: human-AI interaction, game theory, cooperation, coordination, trust, benevolent AI
As AI agents acquire the capacity to decide autonomously, we shift from being users in full control of intelligent machines (e.g., Google Translate) to making decisions with, or alongside, them in social interactive settings (e.g., sharing the road with self-driving cars). How this affects people's choice behaviour, and in turn the desirability of the outcomes of human-AI interaction, is still unknown. In this project we investigate whether and when humans will cooperate with AI to attain mutually beneficial and efficient outcomes.
At this stage we are:
conducting empirical studies, using the methods of behavioural game theory, to investigate whether people are as likely to trust, take risks, and cooperate with AI systems as they are with other humans
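To give a flavour of the behavioural-game-theory paradigms such studies rely on, here is a minimal sketch of the classic trust game (Berg, Dickhaut & McCabe, 1995), in which an investor's transfer to a partner is multiplied before the partner decides how much to return. The function and parameter names are illustrative assumptions, not taken from this project's experiments.

```python
def trust_game(endowment, sent, multiplier, returned):
    """Compute final payoffs (investor, trustee) in a one-shot trust game.

    The investor sends `sent` out of `endowment`; the amount is multiplied
    by `multiplier` before reaching the trustee, who sends back `returned`.
    How much the investor sends measures trust in the partner (human or AI).
    """
    assert 0 <= sent <= endowment
    transferred = sent * multiplier
    assert 0 <= returned <= transferred
    investor_payoff = endowment - sent + returned
    trustee_payoff = transferred - returned
    return investor_payoff, trustee_payoff

# Full trust reciprocated with an even split:
print(trust_game(endowment=10, sent=10, multiplier=3, returned=15))  # (15, 15)
```

Comparing how much participants send when the trustee is a human versus an AI agent is one way such experiments quantify trust toward machines.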
Our latest research updates:
Interested? Find out more:
A preprint of a recent study XX
Some links to the latest developments in XXX