# research

(dedicated research pages with more details coming up!)

## audio and acoustics

Sound can be studied from a number of different viewpoints. A major one is its perceptual quality—sound as someone hears it. Another is its capacity to carry geometric information. My research addresses both of these aspects, but I prefer to dwell in the brackish water between the two. This territory is related to what I call the *acoustic geometry*—a description of an acoustic scene optimized for a given processing scenario (dereverberation, separation, localization, …).

The reason I say *optimized* is to contrast it with detailed 3D descriptions of the computer vision sort. These are far too complex and unwieldy for problems in audio and acoustics, and they often don’t provide the right kind of information. Most importantly, estimating them with a couple of microphones and sources seems hopeless, but estimating acoustic geometry is, by definition, within reach (though by no means simple). To be more concrete, acoustic geometry may involve positions and properties of walls, microphones and sources, together with associated directivities and frequency responses. The representation and its complexity vary as a function of the application at hand.

Much of what I have discovered so far revolves around echoes. I draw considerable pleasure from upending the traditional wisdom that echoes are atrocious evildoers we should {*cancel*, *suppress*, *obliterate*}; I prefer instead to use them to solve the very problems whose solution they supposedly impede.

You may want to read about how to (algorithmically) hear the shape of a room or, if that’s your thing, learn about raking cocktail parties. Most of my upcoming research efforts in audio and acoustics will revolve around identifying the right description of acoustic geometry in a slew of scenarios (think hearing aids, Echo, …) and finding ways to estimate it.

## distance geometry

Working with echoes is an exercise in distance geometry. We try to divine something about the acoustic setup from a set of echo arrival times which, scaled by the speed of sound, correspond to distances.

A peculiar problem that arises in this context is that of labeling—with multiple microphones around the room, it is not straightforward to know which echoes come from the same wall.

The general version of this problem is as follows: you’re supposed to reconstruct the geometry of a point set from a few pairwise distances between the points; alas, you have lost track of which distance belongs to which pair of points. In effect, you have a pile of sticks of different lengths and you want to assemble them into a rigid structure such that the ends of all sticks are connected. This is the so-called *unassigned* (unlabeled, unsorted) distance geometry problem. By a serendipitous fluke, the same problem turns up in determining crystal structures!

Sooner or later when studying distances, objects known as Euclidean distance matrices (EDMs) emerge as a key tool. This was certainly the case in a great many of the problems we looked at. In fact, it motivated us to write a tutorial paper on EDMs.

I apply EDMs to various localization and mapping tasks, often in combination with echoes. Imagine for example that you want to run audio experiments in a room with many microphones, and your experiments require you to know the geometry of the microphone array. Not the worst way of procuring this information is to simply place the microphones where you want them, snap your fingers, and let the echoes do the dirty localization work (you might need to read this or this).
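As a toy illustration of why EDMs are such a handy tool (this is standard classical multidimensional scaling, not the echo-based procedure above), here is how a point configuration is recovered from its EDM up to a rigid motion; the five planar "microphone" positions are made up:

```python
import numpy as np

# a hypothetical microphone array: 5 random points in the plane
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 5))

# EDM of squared distances: D_ij = ||x_i - x_j||^2
G = X.T @ X
D = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G

# classical MDS: recover the geometry up to rotation/translation/reflection
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n   # geometric centering matrix
Ghat = -0.5 * J @ D @ J               # Gram matrix of the centered points
w, V = np.linalg.eigh(Ghat)           # eigenvalues in ascending order
Xhat = (V[:, -2:] * np.sqrt(w[-2:])).T  # top-2 eigenpairs -> 2D embedding

# the recovered configuration reproduces the original EDM
Gh = Xhat.T @ Xhat
Dhat = np.diag(Gh)[:, None] + np.diag(Gh)[None, :] - 2 * Gh
print(np.allclose(D, Dhat))
```

The embedding dimension (here 2) equals the rank of the centered Gram matrix, which is what makes EDMs of low-dimensional point sets so strongly structured.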

Another scenario where distance geometry plays a role could be that you’re walking around the room with a device equipped with a microphone and a source, and you want to reconstruct your trajectory from echoes. But wait, that’s exactly how omnidirectional bats navigate!

## new approaches to ill-posed inverse problems

While much of what is described above can be characterized as inverse problems, here I refer to a very classical definition appearing in geophysical or biomedical imaging. I recently started working on a new approach to stabilizing such ill-posed inverse problems with Joan Bruna, Stéphane Mallat, and Maarten de Hoop. Our approach is based on non-linear transforms, linear estimation, and learning.

We address the usual problem formulation: solve the following for $x$:

\begin{equation} \label{eq:measurements} \tag{1} y = \Gamma x + b, \end{equation}

where $\Gamma$ is an operator (not necessarily linear) between some vector spaces and $b$ is noise.

If $\Gamma$ is singular or doesn’t have a stable inverse, then the problem is ill-posed. The usual way to address ill-posedness is to search for a solution which minimizes a regularized cost functional

\begin{equation} \label{eq:minimization} \tag{2} \widehat{x} = \mathop{\mathrm{arg~min}}_{u \in {\cal X}} \ \tfrac{1}{2} \Vert y - \Gamma u \Vert^2 + \lambda h(u), \end{equation} with $h$ typically convex.
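For a linear $\Gamma$ and the quadratic regularizer $h(u) = \tfrac{1}{2}\Vert u \Vert^2$ (Tikhonov regularization), the minimization \eqref{eq:minimization} has a closed-form solution, which makes the stabilizing effect easy to see numerically. A sketch with a made-up ill-conditioned forward operator:

```python
import numpy as np

rng = np.random.default_rng(1)

# an ill-conditioned linear forward operator: random orthogonal
# factors with singular values decaying from 1 down to 1e-8
U, _ = np.linalg.qr(rng.standard_normal((50, 50)))
V, _ = np.linalg.qr(rng.standard_normal((50, 50)))
s = np.logspace(0, -8, 50)
Gamma = U @ np.diag(s) @ V.T

x = rng.standard_normal(50)
y = Gamma @ x + 1e-6 * rng.standard_normal(50)  # noisy measurements

# naive inversion amplifies noise along the small singular directions
x_naive = np.linalg.solve(Gamma, y)

# Tikhonov: argmin_u 1/2 ||y - Gamma u||^2 + lam/2 ||u||^2
lam = 1e-6
x_reg = np.linalg.solve(Gamma.T @ Gamma + lam * np.eye(50), Gamma.T @ y)

print("naive error:", np.linalg.norm(x_naive - x))
print("regularized :", np.linalg.norm(x_reg - x))
```

Even tiny measurement noise blows up through the $1/\sigma_i$ amplification in the naive inverse, while the regularized solution trades a small bias for a dramatically smaller error.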

For many $\Gamma$, solving the inverse problem can be rephrased as recovering missing spectrum from known spectrum, and the role of the regularizer is to stabilize this inversion (e.g. stabilize the high frequencies). In contrast to \eqref{eq:minimization}, we achieve stabilization not by regularizing a loss term—at least not in the usual sense where it is determined by a single realization. We rather base our approach on iterative linear estimation in the space of some non-linear feature transform $\Phi$. A particularly good choice of $\Phi$ for many inverse problems turns out to be the scattering transform—a peculiar complex-valued convolutional network.