Computer processing of speech and language has advanced enormously in the last decade, with many people now using applications such as automatic translation, voice-activated search and even language-enabled personal assistants.

Yet these systems still lag far behind human capabilities, and the success they do have relies on machine-learning methods that learn from very large quantities of human-annotated data (for example, speech data with transcriptions or text labelled with syntactic parse trees).

These resource-intensive methods mean that effective technology is available for only a tiny fraction of the world's 5,000 or more languages, mainly those spoken in large, rich countries.

The talk will argue that, in order to solve this problem, we need a better understanding of how humans learn and represent language in their minds, and we need to consider how human-like learning biases can be built into computational systems.

Dr Goldwater will illustrate these ideas using examples from her own research. She will discuss why language is such a difficult problem, what we know about human language learning, and how her own work has taken inspiration from that knowledge to develop better methods for computational language learning.

Watch Dr Sharon Goldwater's Needham lecture