Hanbo Xie, Ph.D. student

Computational Cognitive Sciences

Georgia Institute of Technology

About Me

I am a Ph.D. student in the Psychology program at the Georgia Institute of Technology, specializing in computational cognitive sciences. Before joining Georgia Tech, I earned an M.A. from the University of Arizona, and throughout both my master's and Ph.D. training I have been supervised by Dr. Robert Wilson. Prior to that, I spent three years as a full-time research assistant at CBCS, Peking University. I earned my undergraduate degree in Human Resource Management from Southwestern University of Finance and Economics in China. Currently, I am visiting Tom Griffiths' CoCoSci Lab at Princeton University!

My research interests broadly lie at the intersection of cognitive sciences and artificial intelligence. My thesis uses Large Language Models (LLMs) and the think-aloud protocol to understand how humans think during decision-making and learning, and I employ a variety of computational tools and tasks to explore this exciting topic. Additionally, I am curious about how AI models (e.g., LLMs and agents) behave and think, as well as how human experiences and data can be leveraged to improve AI models and research. This includes human-AI interaction, AI interpretability, and even artificial general intelligence (AGI)!

Beyond my research, I contribute to the broader research community. I co-founded MindRL-Hub, a community that facilitates research on and applications of reinforcement learning in psychology and neuroscience. I am dedicated to promoting interdisciplinary research and collaboration, as well as fostering connections among early-career researchers.

For more details about my academic background and achievements, please view my Curriculum Vitae.

Research Highlights

Understanding Human Decision-Making and Learning from Think-Aloud Data with Large Language Models

The think-aloud protocol asks participants to verbalize their thoughts while performing psychological tasks. Previous research has relied primarily on behavioral measurements (typically button presses) to understand human cognitive processes, and the candidate mental models were usually proposed and tested by the researchers themselves, which can introduce bias and limit the hypothesis space. Directly analyzing participants' verbalized thoughts gives us a deeper probe into cognition during these tasks. However, past think-aloud research has depended heavily on subjective coding, in which human experts manually label or interpret the content; this process is labor-intensive, subjective, and hard to scale.

With the advent of LLMs, we see an opportunity to revisit this protocol with novel approaches. LLMs can be leveraged to quantify, interpret, and even predict subsequent behaviors based on think-aloud data. Our work explores the feasibility of using LLMs for this purpose, paving the way for a more scalable and systematic approach to analyzing human thought processes.
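To give a concrete, deliberately simplified flavor of this idea, the sketch below embeds think-aloud transcripts with an off-the-shelf sentence encoder and tests whether those embeddings predict a participant's next choice. The transcripts, labels, and model choice here are illustrative placeholders, not the exact pipeline from our papers.

```python
# Sketch: do think-aloud transcripts carry information about upcoming choices?
# Data and model choice are hypothetical placeholders for illustration only.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical think-aloud snippets from a risky-choice task, paired with the
# choice each participant made next (1 = gamble, 0 = safe option).
transcripts = [
    "The gamble pays a lot but I could lose everything, I'll play it safe.",
    "Fifty-fifty for double the reward seems worth the risk this time.",
    "I've lost twice in a row, so I'm sticking with the sure thing.",
    "The expected value of the gamble is clearly higher, I'll take it.",
]
choices = [0, 1, 0, 1]

# Turn verbalized thoughts into dense vectors with an off-the-shelf encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(transcripts)

# If the embeddings predict choices better than chance under cross-validation,
# the transcripts contain decision-relevant signal that language models can use.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, choices, cv=2)
print(f"Mean held-out accuracy: {scores.mean():.2f}")
```

In practice this involves many more trials and richer LLM-based representations, but the logic is the same: if verbalized thoughts carry decision-relevant signal, a model trained on them should beat chance at predicting subsequent behavior.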

Reverse Engineering Human Thoughts

Human thoughts are difficult to define, capture, and model, yet they are fundamental to understanding human intelligence. Understanding thought processes across various tasks, identifying underlying principles, and generalizing these insights to predict thoughts are crucial in cognitive science and AI research. One of the major challenges is that thoughts are implicit, and language is highly diverse and nuanced. Thus, verbalized thoughts do not necessarily reflect the full spectrum of cognitive processing.

Rather than taking the forward approach (modeling how human thoughts generate behaviors), this project focuses on the inverse problem: given observed behaviors and other measurable data, can we reconstruct and infer the underlying thoughts? Our goal is to establish a strong, generalizable link between behaviors and thoughts that deepens our understanding of human cognition. Furthermore, this project aims to advance machine 'thought' understanding in a human-centered way: if the computations of complex models (such as AlphaGo) can be approximated by models trained on human data, we can gain insight into how AI models think and compute through natural language explanations. This could revolutionize human-AI interaction, enabling AI to teach humans new concepts and strategies!

This project is currently in development and will be my primary research focus at Princeton University.

Improving Artificial Intelligence Through Human Insights

This project explores how AI can be understood through the lens of human psychology and neuroscience. By analyzing the strengths and weaknesses of AI compared to human intelligence, we can develop better models that are both more effective and more interpretable. Beyond solving complex problems, AI should have societal impact—improving human decision-making, education, and collaboration. Additionally, I believe humans can learn from advanced AI models if we develop appropriate frameworks to analyze and interpret their computations.

Accelerating Scientific Discoveries in Cognitive Science

The human mind is fascinating and complex. While we experience thoughts, emotions, and actions daily, formally describing, interpreting, and predicting human cognition remains a significant challenge for cognitive scientists and psychologists. Many cognitive theories are inspired by human intuition and validated through experiments and computational models. However, traditional validation methods may not always extend beyond the predefined hypothesis space.

In the era of AI, transforming research paradigms in cognitive science and psychology is crucial. LLMs possess extensive knowledge and inductive biases, potentially surpassing individual human expertise. State-of-the-art language-based reasoning models exhibit strong reasoning abilities—perhaps even beyond those of human scientists. Can we develop AI-driven pipelines to discover new phenomena, construct computational models, and generate scientific theories with minimal human bias? Investigating these possibilities alongside empirical research will be both exciting and transformative!

Publications

* Denotes equal contribution, † denotes corresponding author, underlined names denote mentees

2025
Qiu, S., Tang, Y., Yu, H., Xie, H., Dreher, J. C., Hu, Y., & Zhou, X. (2025). Toward a computational understanding of bribe‐taking behavior. Annals of the New York Academy of Sciences.
Pan, L.*, Xie, H.*†, & Wilson, R. C. (2025). Large Language Models Think Too Fast To Explore Effectively. arXiv preprint arXiv:2501.18009 (submitted).
Zhang, Z.*, Xie, H.*, Baker, T., Peters, M., & Wilson, R. C. (2025). Linking strategies to think aloud in a stochastic learning task. (submitted).
2024
Fang, Z., Zhao, M., Xu, T., Li, Y., Xie, H., Quan, P., ... & Zhang, R. Y. (2024). Individuals with anxiety and depression use atypical decision strategies in an uncertain world. eLife, 13.
Xie, H., Xiong, H., & Wilson, R. C. (2024). From Strategic Narratives to Code-Like Cognitive Models: An LLM-Based Approach in A Sorting Task. First Conference on Language Modeling (COLM).
Xie, H., Xiong, H., & Wilson, R. C. (2024). Evaluating Predictive Performance and Learning Efficiency of Large Language Models with Think Aloud in Risky Decision Making. Conference on Cognitive Computational Neuroscience (CCN), MIT.
2023
Xie, H. (2023). The promising future of cognitive science and artificial intelligence. Nature Reviews Psychology.
Xie, H., Xiong, H., & Wilson, R. C. (2023). Text2Decision: Decoding Latent Variables in Risky Decision Making from Think Aloud Text. NeurIPS 2023 AI for Science Workshop.
Xie, H., Xiong, H., & Wilson, R. C. (2023). Computational introspection: Can large language models reveal cognitive algorithms from human language? Poster session presented at the 5th Chinese Computational and Cognitive Neuroscience Conference, Beijing, China.
2022
Guo, Y., Song, S., Xie, H., Gao, X., & Zhang, J. (2022, February). ARIMA and RNN for Selection Sequences Prediction in Iowa Gambling Task. In 2022 2nd International Conference on Artificial Intelligence and Signal Processing (AISP) (pp. 1-6). IEEE.
2020
Song, S.*, Xie, H.*, Speekenbrink, M., Zhang, J., Gao, X., & Zhou, X. (2020, October). The computational basis of individuals' learning under uncertainty in groups with collective goals. Oral presentation at the Society for Neuroeconomics, Vancouver, Canada.

Collaborators

My collaborators include my mentor and committee, as well as researchers working on social cognition, think-aloud methods, and large language models and neural networks.

Mentees

  • Zhenlong Zhang, Johns Hopkins University
  • Lan Pan
  • Yangtong Feng, Washington University in St. Louis

Contact

Email: hanboxie1997@gatech.edu

Address: 750 Ferst Drive, Atlanta, GA 30332

Twitter · GitHub · Google Scholar