| | |
| --- | --- |
| Lecture dates | July 20–24 |
| Instructor | Roger Levy (email@example.com) |
| Instructor title | Assistant Professor, Department of Linguistics, University of California at San Diego |
Over the last two decades, computational linguistics has been revolutionized by increases in computing power, the availability of large linguistic datasets, and a paradigm shift toward the view that language processing by computers is best approached through the tools of statistical inference. During roughly the same period, cognitive psychology has seen a parallel theoretical shift toward viewing major aspects of human cognition as instances of rational statistical inference, exemplified by work such as Anderson (1990) and Tenenbaum & Griffiths (2001). Together, developments in these two fields have set the stage for renewed interest in computational approaches to human language processing. Correspondingly, this course covers some of the most exciting developments in computational psycholinguistics over the past decade, focusing on the roles of probabilistic knowledge and memory in language processing and covering models, algorithms, and key empirical results from the literature.
This course is intended for graduate students and researchers in linguistics, cognitive science, psychology, computer science, and any other discipline who are interested in using computational modeling techniques, especially probabilistic modeling, to study human language processing.
There is a mailing list for the class: firstname.lastname@example.org. Sign up for the mailing list here.
| Day | Topic | Slides | Core reading | Supplemental reading |
| --- | --- | --- | --- | --- |
| 20 July | Non-probabilistic, memory-focused models of incremental comprehension | Lecture 1 | Yngve, 1960 | Miller & Chomsky, 1963; Abney & Johnson, 1991; Gibson, 1998, 2000; Morrill, 2000; Lewis & Vasishth, 2005 |
| 21 July | Probabilistic grammars and human sentence comprehension as incremental probabilistic parsing | Lecture 2 | Narayanan & Jurafsky, 1998 | Jurafsky, 1996; Crocker & Brants, 2000; Narayanan & Jurafsky, 2002 |
| 22 July | Surprisal and approximate surprisal | Lecture 3: catchup and surprisal; inference over infinite tree sets | Hale, 2001; Levy, Reali, & Griffiths, 2009 | Levy, 2008a; Smith & Levy, 2008; Demberg & Keller, 2008; Boston et al., 2008 |
| 23 July | Input uncertainty and noisy-channel Bayesian inference in word recognition & sentence comprehension | Lecture 4 | Levy, 2008b | Norris, 2006 |
| 24 July | Optimality in sentence production; Uniform Information Density | Lecture 5 | Levy & Jaeger, 2007 | Genzel & Charniak, 2002, 2003; Aylett & Turk, 2004; Keller, 2004; Piantadosi, Tily, & Gibson, 2009 |
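As brief orientation for the formal topics in the schedule above (the notation here is illustrative; the readings develop these ideas in full): the central quantity of the 22 July session, the surprisal of a word, is standardly defined (Hale, 2001; Levy, 2008a) as its negative log-probability given the preceding context,

$$s(w_i) = -\log P(w_i \mid w_1, \ldots, w_{i-1}),$$

with a word's processing difficulty predicted to be proportional to its surprisal. Similarly, the noisy-channel perspective of the 23 July session treats comprehension as Bayesian inference of the intended input $w$ from noisy perceptual evidence $I$, i.e., $P(w \mid I) \propto P(I \mid w)\,P(w)$.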