Analogy-making is a hallmark of human cognition. Abstract analogical reasoning, our ability to identify structural similarities between different situations, allows us to draw on past experience and prior knowledge to navigate the unfamiliar. A University of Michigan team of researchers from CSE and the Department of Psychology has combined the latest findings in natural language processing with established research in cognitive science to explore language models’ ability to mimic this component of human reasoning and form analogies.

The team’s paper, “In-Context Analogical Reasoning with Pre-Trained Language Models,” which will appear at the 2023 Annual Meeting of the Association for Computational Linguistics (ACL), bridges computer science and cognitive psychology, offering new insight into the capabilities of AI technologies while shedding light on the linguistic foundations of human reasoning. The study was authored by recent graduate Xiaoyang (Nick) Hu; CSE PhD student Shane Storks; John R. Anderson Collegiate Professor of Psychology, Linguistics, and Cognitive Science Richard Lewis; and Joyce Chai, professor of computer science and engineering and head of the Situated Language and Embodied Dialogue (SLED) lab.
