Research in this axis focuses on the design, training, and analysis of artificial intelligence systems, often inspired by principles from cognitive science and neuroscience.
Historically, insights about human learning, decision-making, and representation have played a major role in the development of AI, from early expert systems to reinforcement learning and modern deep neural architectures. This tradition continues today.
Current work develops computational architectures, learning algorithms, and simulation frameworks that aim to improve the flexibility, robustness, and interpretability of AI systems. This includes neural networks trained on cognitive tasks, multi-agent simulations that model collective or adaptive behavior, and large language models whose training and fine-tuning are geared toward higher-level, metacognitive performance.
In addition to building AI systems, this research seeks to understand how and why these systems work. Analytical tools are developed to evaluate performance, compare models, detect biases and limitations, and characterize internal representations. By combining engineering goals with explanatory ambition, this axis contributes both to more effective AI technologies and to a deeper understanding of artificial intelligence as a computational phenomenon.