Paper Recap: Mapping Language Models to Grounded Conceptual Spaces

This is a presentation I made and presented at an MIT class on large language models by Professor Yoon Kim. The presentation is a recap of the paper "Mapping Language Models to Grounded Conceptual Spaces" which addresses a critical limitation of text-only language models: their lack of grounding, or the ability to connect linguistic representations with real-world referents. Despite this challenge, the paper demonstrates that these models exhibit a robust conceptual understanding, enabling inference and fluent text generation. Our presentation summarizes the key findings of the paper and then introduces original experiments that explore this alignment further. Our results reveal both supporting evidence that confirms the paper’s findings and contrasting outcomes that highlight areas of disagreement.