Welcome: Why a Research Blog?
I will use this space to share in-progress thoughts on cognitive science, LLM evaluation, and scientific workflows. Posts can be long-form essays, short project updates, or reading notes.
Computational Cognitive Science
Georgia Institute of Technology
Cognitive Modeling · Large Language Models · Human-centered AI
Computational approaches to understanding human cognition and improving AI systems.
I am a Ph.D. student in the Psychology program at the Georgia Institute of Technology, specializing in computational cognitive science. Before joining Georgia Tech, I earned an M.A. from the University of Arizona; throughout my master's and Ph.D. studies I have been supervised by Dr. Robert Wilson. Before graduate school, I spent three years as a full-time research assistant at CBCS, Peking University. I earned my undergraduate degree in Human Resource Management from Southwestern University of Finance and Economics in China. I am currently a visiting student in Tom Griffiths' CoCoSci Lab at Princeton University.
My research interests lie broadly at the intersection of cognitive science and artificial intelligence. My thesis uses Large Language Models (LLMs) and the think-aloud protocol to understand how humans think during decision-making and learning, drawing on a variety of computational tools and tasks. I am also curious about how AI models (e.g., LLMs and agents) behave and reason, and about how human experience and data can be leveraged to improve AI models and research. This includes human-AI interaction, AI interpretability, and even artificial general intelligence (AGI).
Beyond my research, I contribute to the broader research community. I co-founded MindRL-Hub, a community that facilitates the research and application of reinforcement learning in psychology and neuroscience. I am dedicated to promoting interdisciplinary research and collaboration, as well as fostering connections among early-career researchers.
Selected themes and current directions.
The think-aloud protocol asks participants to verbalize their thoughts while they perform psychological tasks. Traditional work has mostly relied on behavioral outputs (often button presses) to infer latent cognitive processes. In many cases, candidate cognitive models are proposed and tested by researchers, which can limit the hypothesis space and introduce bias. By directly analyzing participants' verbal reports, we gain a richer and more direct view of cognition during task performance. However, most prior think-aloud research depends on manual coding by experts, which is labor-intensive, subjective, and difficult to scale.
Recent advances in LLMs make it possible to revisit this classic protocol with stronger computational tools. LLMs can help quantify, interpret, and even predict subsequent behavior from think-aloud language. Our work evaluates when and how these models can be used reliably, with the goal of building a more systematic and scalable framework for studying human thought processes.
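As a concrete illustration of what "LLM-assisted coding of think-aloud data" might look like, here is a minimal, self-contained Python sketch. The category labels and keywords are hypothetical, and a simple keyword matcher stands in for the LLM call so the example stays runnable; in an actual pipeline, a prompted model would perform the scoring step.

```python
# Toy sketch: assigning cognitive-process codes to think-aloud utterances.
# CATEGORIES and their keywords are illustrative, not a validated scheme.
CATEGORIES = {
    "explore": ["try", "new", "switch", "curious"],
    "exploit": ["stick", "best", "keep", "again"],
}

def code_utterance(utterance: str) -> str:
    """Assign a single category to one verbal report.

    A keyword count stands in for an LLM prompt here; the surrounding
    structure (transcript in, codes out) is what the real pipeline shares.
    """
    words = utterance.lower().split()
    scores = {
        label: sum(word in words for word in keywords)
        for label, keywords in CATEGORIES.items()
    }
    return max(scores, key=scores.get)

transcript = [
    "I'll stick with the best option again",
    "let me try the new one since I'm curious",
]
codes = [code_utterance(u) for u in transcript]
print(codes)  # e.g. ['exploit', 'explore']
```

Replacing the scoring step with a model call turns this into the scalable alternative to manual expert coding described above, and the same transcript-to-codes interface makes it straightforward to compare LLM codes against human annotations for reliability checks.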
Representative publications:
Human thought is central to intelligence, yet it is difficult to define, measure, and model. A core challenge in both cognitive science and AI is to characterize thought processes across tasks, identify shared principles, and generalize those principles to make useful predictions. The difficulty is that thoughts are often implicit, while language is diverse and context-dependent. As a result, verbal reports are informative but still incomplete reflections of internal cognition.
Instead of focusing only on the forward direction (how thoughts generate behavior), this project emphasizes inverse inference: given observed behavior and related measurements, can we reconstruct plausible underlying thoughts? The broader goal is to build a stronger, more general bridge between behavior and cognition. This direction also supports a human-centered understanding of machine reasoning. If the computations of complex systems (e.g., AlphaGo-like models) can be approximated by human-trained explanatory models, we may be able to describe model reasoning in natural language that is useful for teaching, interpretation, and collaboration.
This project is currently in development and will be my primary research focus at Princeton University.
Representative publications:
This project examines AI through concepts from psychology and neuroscience. By comparing strengths and weaknesses of AI and human intelligence, we can design models that are both more capable and more interpretable. Beyond technical performance, I am interested in societal value: systems that support human decision-making, education, and collaboration. I also explore how people can learn from advanced AI models when we build the right frameworks to analyze and communicate their internal computations.
Representative publications:
The human mind is deeply complex. Although thoughts, emotions, and actions are part of everyday experience, formally describing and predicting cognition remains a major scientific challenge. Many cognitive theories are grounded in human intuition and then tested through experiments and computational models. These approaches are powerful, but they can remain constrained by the original hypothesis space.
In the AI era, there is an opportunity to rethink discovery pipelines in cognitive science and psychology. LLMs bring broad knowledge and strong inductive biases, and modern reasoning models can perform at levels that sometimes rival expert intuition. A central question for my work is whether we can build AI-assisted workflows that help discover new behavioral phenomena, generate computational models, and propose testable theories while reducing avoidable human bias. I view this as complementary to, not a replacement for, careful empirical research.
Representative publications:
* denotes equal contribution; † denotes corresponding author; underline denotes a mentee. Use the topic filters to navigate.
A place for research notes, project updates, and ideas in progress.
Each blog post has its own standalone page with its own comments thread (Utterances + GitHub Issues), which keeps the main blog index lightweight and easy to browse.
Email: hanboxie1997@gatech.edu
Address: 750 Ferst Drive, Atlanta, GA 30332