About Me
I am a Ph.D. student in the Psychology program at the Georgia Institute of Technology, specializing in computational cognitive science. Prior to joining Georgia Tech, I earned an M.A. from the University of Arizona; throughout my master's and Ph.D. training, I have been supervised by Dr. Robert Wilson. Before that, I spent three years as a full-time research assistant at CBCS, Peking University. I earned my undergraduate degree in Human Resource Management from Southwestern University of Finance and Economics in China. Currently, I am visiting Tom Griffiths' CoCoSci Lab at Princeton University!
My research interests lie broadly at the intersection of cognitive science and artificial intelligence. My thesis uses Large Language Models (LLMs) and the think-aloud protocol to understand how humans think during decision-making and learning, drawing on a variety of computational tools and tasks. I am also curious about how AI models (e.g., LLMs and agents) behave and think, and how human experience and data can be leveraged to improve AI models and research. This includes human-AI interaction, AI interpretability, and even artificial general intelligence (AGI)!
Beyond my research, I contribute to the broader research community. I co-founded MindRL-Hub, a community that facilitates the research and application of reinforcement learning in psychology and neuroscience. I am dedicated to promoting interdisciplinary research and collaboration, as well as fostering connections among early-career researchers.
For more details about my academic background and achievements, please view my Curriculum Vitae.
Research Highlights
Understanding Human Decision-Making and Learning from Think-Aloud Data with Large Language Models
The think-aloud protocol asks human participants to verbalize their thoughts while performing psychological tasks. Previous research relied primarily on behavioral measurements (typically button presses) to infer human cognitive processes, and the candidate cognitive models were usually proposed and tested by the researchers themselves, which can introduce bias and limit the hypothesis space. By analyzing participants' verbalized thoughts directly, we gain a deeper probe into human cognition during these tasks. However, past think-aloud research has relied heavily on manual coding, in which human experts label or interpret the content; this process is labor-intensive, subjective, and hard to scale.
With the advent of LLMs, we see an opportunity to revisit this protocol with novel approaches. LLMs can be leveraged to quantify, interpret, and even predict subsequent behaviors based on think-aloud data. Our work explores the feasibility of using LLMs for this purpose, paving the way for a more scalable and systematic approach to analyzing human thought processes.
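As a toy illustration of this idea (not our actual pipeline), the sketch below prompts an LLM to assign a strategy label to a single think-aloud utterance from a bandit-style task. The model name, prompt wording, and label set are illustrative assumptions.

```python
# A minimal sketch of LLM-based coding of a think-aloud utterance.
# Assumptions (illustrative only): the OpenAI Python client, the
# "gpt-4o-mini" model name, and the strategy label set below.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

LABELS = ["explore", "exploit", "random", "other"]

def code_utterance(utterance: str) -> str:
    """Ask the LLM to label one verbalized thought with a decision strategy."""
    prompt = (
        "A participant is choosing between two slot machines and thinking aloud.\n"
        f'Utterance: "{utterance}"\n'
        f"Label the strategy with exactly one word from {LABELS}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    print(code_utterance("I haven't tried the left machine in a while, let's see what it gives."))
```

In practice, such labels would be validated against human coders and aggregated across trials and participants before being related to behavior.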
Reverse Engineering Human Thoughts
Human thoughts are difficult to define, capture, and model, yet they are fundamental to understanding human intelligence. Understanding thought processes across various tasks, identifying underlying principles, and generalizing these insights to predict thoughts are crucial in cognitive science and AI research. One of the major challenges is that thoughts are implicit, and language is highly diverse and nuanced. Thus, verbalized thoughts do not necessarily reflect the full spectrum of cognitive processing.
Rather than taking a forward approach (i.e., modeling how human thoughts generate behaviors), this project focuses on the inverse problem: given observed behaviors and other measurable data, can we reconstruct and infer the underlying thoughts? Our goal is to establish a strong, generalizable link between behaviors and thoughts to deepen our understanding of human cognition. Furthermore, this project aims to advance the understanding of machine 'thought' in a human-centered manner. If the computations of complex models (such as AlphaGo) can be approximated by models trained on human data, we can gain insight into how AI models think and compute through natural language explanations. This could revolutionize human-AI interaction, enabling AI to teach humans new concepts and strategies!
This project is currently in development and will be my primary research focus at Princeton University.
Improving Artificial Intelligence Through Human Insights
This project explores how AI can be understood through the lens of human psychology and neuroscience. By analyzing the strengths and weaknesses of AI compared to human intelligence, we can develop better models that are both more effective and more interpretable. Beyond solving complex problems, AI should have societal impact—improving human decision-making, education, and collaboration. Additionally, I believe humans can learn from advanced AI models if we develop appropriate frameworks to analyze and interpret their computations.
Accelerating Scientific Discoveries in Cognitive Science
The human mind is fascinating and complex. While we experience thoughts, emotions, and actions daily, formally describing, interpreting, and predicting human cognition remains a significant challenge for cognitive scientists and psychologists. Many cognitive theories are inspired by human intuition and validated through experiments and computational models. However, traditional validation methods may not always extend beyond the predefined hypothesis space.
In the era of AI, transforming research paradigms in cognitive science and psychology is crucial. LLMs possess extensive knowledge and inductive biases, potentially surpassing individual human expertise. State-of-the-art language-based reasoning models exhibit strong reasoning abilities—perhaps even beyond those of human scientists. Can we develop AI-driven pipelines to discover new phenomena, construct computational models, and generate scientific theories with minimal human bias? Investigating these possibilities alongside empirical research will be both exciting and transformative!
Publications
* denotes equal contribution; † denotes corresponding author; underscored names denote mentees
Collaborators
Mentor and Committee
- Dr. Robert C. Wilson (Supervisor), Georgia Tech
- Dr. Thomas Griffiths, Princeton University
- Dr. Anna Ivanova, Georgia Tech
- Dr. Sashank Varma, Georgia Tech
Social Cognition
- Prof. Xiaolin Zhou, East China Normal University
- Dr. Sensen Song, Central China Normal University
- Dr. Xiaoxue Gao, East China Normal University
- Dr. Yang Hu, East China Normal University
- Dr. Maarten Speekenbrink, UCL
Think Aloud
- Dr. Travis Baker, Rutgers University
- Dr. Megan Peters, UC Irvine
- Dr. Evan Russek, Princeton University
Large Language Models and Neural Networks
- Hua-Dong Xiong, Georgia Tech
- Dr. Jian-Qiao Zhu, Princeton University
Mentees
- Zhenlong Zhang, Johns Hopkins University
- Lan Pan
- Yangtong Feng, Washington University in St. Louis
Contact
Email: hanboxie1997@gatech.edu
Address: 750 Ferst Drive, Atlanta, GA 30332