Efficient compression and linguistic meaning in humans and machines
Dr. Noga Zaslavsky, MIT
What computational principles govern the ability to communicate about a complex world while operating with bounded resources? By integrating tools from information theory, machine learning, and cognitive science, I will argue that to achieve this ability, both humans and machines must efficiently compress their representations of the world. In support of this claim, I will present a series of studies showing that: (i) languages evolve under pressure to efficiently compress meanings into words; (ii) the same principle can give rise to human-like semantic representations in artificial neural networks trained for vision; and (iii) efficient compression may also explain how meaning is constructed in real time as interlocutors reason pragmatically about each other’s intentions and beliefs. Taken together, these results suggest that efficient compression underlies how humans communicate and reason about meaning, and may guide the development of artificial agents that can communicate and collaborate with humans.
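The efficiency principle behind point (i) is formalized in Zaslavsky's published work via the Information Bottleneck, which trades off the complexity of a lexicon, I(M;W) (how much information words carry about speaker meanings), against its accuracy, I(W;U) (how much they convey about the world). As an illustration only — the toy lexicon, the distributions, and the NumPy helper below are hypothetical choices of mine, not material from the talk — a minimal sketch of computing this tradeoff:

```python
import numpy as np

def mutual_info(joint):
    """I(X;Y) in bits, given a joint distribution p(x, y) as a 2-D array."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px @ py)[mask])))

# Toy lexicon: 4 meanings mapped softly onto 2 words, uniform need p(m).
p_m = np.full(4, 0.25)
q_w_given_m = np.array([[0.9, 0.1],    # soft encoder q(w|m)
                        [0.8, 0.2],
                        [0.2, 0.8],
                        [0.1, 0.9]])
# Each meaning is itself a distribution p(u|m) over 4 world states.
p_u_given_m = np.array([[0.7, 0.3, 0.0, 0.0],
                        [0.3, 0.7, 0.0, 0.0],
                        [0.0, 0.0, 0.7, 0.3],
                        [0.0, 0.0, 0.3, 0.7]])

joint_mw = p_m[:, None] * q_w_given_m     # p(m, w)
joint_wu = joint_mw.T @ p_u_given_m       # p(w, u) = sum_m p(m) q(w|m) p(u|m)

complexity = mutual_info(joint_mw)        # I(M;W): cost of the lexicon
accuracy = mutual_info(joint_wu)          # I(W;U): communicative benefit
beta = 1.1                                # tradeoff parameter (arbitrary here)
objective = complexity - beta * accuracy  # IB objective to be minimized
print(f"complexity = {complexity:.3f} bits, accuracy = {accuracy:.3f} bits")
```

By the data-processing inequality, accuracy can never exceed complexity; an efficient lexicon sits on the frontier where no system achieves more accuracy at the same complexity.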