Third Thursday: AI for Sound: A Future Technology for Smarter Living and Travelling

DATE: 20 June 2019

LOCATION: Connected Places Catapult @ Urban Innovation Centre, 1 Sekforde Street, London, EC1R 0BE

START TIME: 6:30 pm END TIME: 8:30 pm

Imagine you are standing on a street corner in a city. Close your eyes: what do you hear?

The urban sonic experience can be cacophonous: relentless, overwhelming, and layered with so much noise that few signals reach us. For many in cities, sound is something to be shut out, through headphones, soundproofing, double-glazed windows, and retreat into spaces where it can be more easily blocked and managed. Yet the mechanical force of sound carries with it much encoded information that AI systems are now starting to make sense of, triangulating where sounds come from and classifying them as real or representational, and in doing so opening up a range of services and business opportunities in which sound could act as a trigger for response, or as a rich forecasting and foresighting tool. Building on advances in speech and image recognition, we are beginning to build computer systems to tackle this challenging task: to automatically recognise real-world sound scenes and events, and make use of them.

In this talk, we will explore some of the work going on in this rapidly expanding research area, and touch on some of the key issues for the future, including ensuring privacy around sound sensors. We will discuss some of the potential applications emerging for sound recognition, from home security and assisted living to industrial condition monitoring and assessing traffic noise. We will close with some pointers to more information about this exciting future technology.

Speaker: Mark Plumbley is Professor of Signal Processing at the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey

Mark Plumbley is Professor of Signal Processing at the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey, in Guildford, UK. He is an expert on the analysis and processing of audio and music, using a wide range of signal processing and machine learning methods. He led the first international data challenge on Detection and Classification of Acoustic Scenes and Events (DCASE 2013), and hosted the DCASE 2018 Workshop in Woking, Surrey. He currently leads the EPSRC-funded project “Making Sense of Sounds” on automatic recognition of everyday sounds, and is a co-editor of the book “Computational Analysis of Sound Scenes and Events” (Springer, 2018).
