As AI systems increasingly participate in human activities, understanding how people interact with artificial agents has become a crucial scientific challenge. Human–AI interaction raises questions about trust, interpretation, coordination, decision making, and cognitive compatibility.
Research in this axis investigates how humans reason about AI systems, how they interpret machine behavior, and how misunderstandings or biases can arise during interaction. Experimental paradigms adapted from cognitive science place humans and artificial agents in the same tasks so that their reasoning, learning, and moral judgment can be compared directly. This work helps identify points of convergence and divergence between human cognition and artificial systems.
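To make the comparison concrete, here is a minimal sketch of one such paradigm: a two-armed bandit task given to two simulated learners, a noisy "human-like" Q-learner (a standard cognitive model of trial-and-error choice) and a faster, near-greedy artificial agent. The task, model, and parameter values are illustrative textbook assumptions, not a description of this axis's actual experiments.

```python
# Hypothetical sketch: compare a "human-like" learner and an artificial
# agent on the same two-armed bandit task. All names and parameter
# values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
REWARD_PROBS = [0.3, 0.7]  # assumed payoff probabilities; arm 1 is better
N_TRIALS = 200

def run_q_learner(alpha, beta, n_trials=N_TRIALS):
    """Q-learning with softmax choice: alpha = learning rate,
    beta = inverse temperature (higher = more deterministic)."""
    q = np.zeros(2)
    choices = np.empty(n_trials, dtype=int)
    for t in range(n_trials):
        # Softmax over two arms reduces to a logistic choice rule.
        p_right = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))
        c = int(rng.random() < p_right)           # 1 = choose arm 1
        r = float(rng.random() < REWARD_PROBS[c]) # Bernoulli reward
        q[c] += alpha * (r - q[c])                # prediction-error update
        choices[t] = c
    return choices

# "Human-like" learner: moderate learning rate, noisy (exploratory) choices.
human_like = run_q_learner(alpha=0.2, beta=3.0)
# Artificial agent: faster learning, near-greedy choices.
agent = run_q_learner(alpha=0.5, beta=10.0)

# Compare how strongly each settles on the better arm (arm 1) late in
# the task: one simple point of convergence or divergence between the
# two decision processes.
for name, ch in [("human-like", human_like), ("agent", agent)]:
    late = ch[N_TRIALS // 2:]
    print(f"{name}: best-arm choice rate in second half = {late.mean():.2f}")
```

In studies of this kind, the same model is typically fit to participants' actual choices, so that parameters such as the learning rate and choice noise can be compared directly between humans and AI systems.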
Applied studies examine how users interact with automated tools and conversational agents in real-world environments, such as educational and clinical settings. The goal is to establish interaction principles that are safe, interpretable, and aligned with human cognitive capacities, thereby supporting the design of AI systems that integrate smoothly and responsibly into human workflows.