Single-Microphone Spatial Audio Analysis

Acoustics
January 2019 - January 2023

The objective of this Ph.D. project is to challenge the widely accepted premise that single-microphone audio analysis cannot rely on spatial or wavenumber-domain signal representations. It aims to show that single-microphone spatial audio analysis is possible under the assumptions that (1) the microphone is directional, i.e., spatially selective, (2) the microphone is allowed to move, and (3) a second sensor in another modality (e.g., an accelerometer or camera) is available to infer the microphone orientation. A novel signal model for a single moving microphone deployed in a multi-source reverberant acoustic environment is introduced, based on which novel single-microphone signal processing algorithms for spatial audio analysis will be designed.

The behavior of these algorithms will be analyzed thoroughly, both through a theoretical study of the assumptions made during algorithm design and through experimental validation in simulated as well as laboratory environments. The validation will focus on smartphone and hearing-aid use cases, which will be explored in active collaboration with two companies over the course of the project.
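
To make the setup concrete, the following minimal Python sketch illustrates the kind of scenario the description refers to: a single moving, directional microphone recording a mixture of several sources, where the time-varying position and orientation modulate each source's contribution. This is an illustration only, not the signal model developed in the project; the cardioid directivity, source positions, signals, and trajectory are assumptions made for this example, and reverberation and propagation delay are omitted for brevity.

```python
# Illustrative sketch (not the project's signal model): a single cardioid
# microphone moving among two static sources in a free field.
import numpy as np

fs = 16000                      # sample rate [Hz]
t = np.arange(0, 1.0, 1 / fs)   # 1 s time axis

# Two static point sources (positions in metres) emitting sinusoids.
src_pos = np.array([[2.0, 0.0], [-1.0, 3.0]])
src_sig = np.stack([np.sin(2 * np.pi * 440 * t),
                    np.sin(2 * np.pi * 660 * t)])

# Microphone trajectory: moves along the x-axis while slowly rotating; in the
# project, the orientation would be inferred from a second sensor modality.
mic_pos = np.stack([0.5 * t, np.zeros_like(t)], axis=1)
mic_orient = 0.2 * np.pi * t    # look direction [rad], time-varying

# Cardioid directivity: gain = 0.5 * (1 + cos(angle between look direction
# and source direction)), with 1/r distance attenuation.
y = np.zeros_like(t)
for pos, sig in zip(src_pos, src_sig):
    d = pos[None, :] - mic_pos                   # source direction per sample
    r = np.linalg.norm(d, axis=1)                # source-microphone distance
    doa = np.arctan2(d[:, 1], d[:, 0])           # direction of arrival
    gain = 0.5 * (1 + np.cos(doa - mic_orient))  # cardioid pattern
    y += gain / np.maximum(r, 0.1) * sig         # single-channel mixture

print(y.shape)
```

Because the directivity gain varies with the microphone's motion and orientation, the single-channel signal y carries spatial information that a static omnidirectional recording would not, which is the premise the project builds on.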