Human-AI Interaction

We study how to minimize friction in interactions between humans and intelligent systems and agents.


Interaction with Task-Oriented Agents

Although they are marketed as intelligent, task-oriented agents still make errors. We explore ways to minimize the difficulties people encounter when interacting with these agents: how to help users recover from interaction failures, how to guide them in using these agents effectively, and how to set appropriate expectations about agent performance and capabilities.

Adoption of and Trust in Agents

Intelligent agents are widely used across many domains and applications, yet users do not always accept or trust their outputs, such as recommendations or suggestions. Our goal is to identify the critical factors that shape users' adoption of, and trust in, these outputs.