In This Issue
Summer Bridge on Noise Control Engineering
June 15, 2021 Volume 51 Issue 2
What is the role of engineering practice, education, and standards in mitigating human-generated noise? The articles in this issue survey these aspects of the US noise landscape, and offer updates and useful resources.

Acoustic Source Localization Techniques and Their Applications

Monday, June 14, 2021

Authors: Yangfan Liu, J. Stuart Bolton, and Patricia Davies

Advances are needed to enhance the accuracy and application of acoustic source localization techniques.

Acoustic source localization technology is used to determine the location(s) of a sound source or multiple sources in an environment by processing acoustic signals measured at a number of locations.[1] Many techniques are capable of not only localizing but also estimating sound source types and strengths, information that can then be used to predict the sound field everywhere in the environment. With this information, for example, it is possible to generate a visual representation of the sound sources and sound field distribution in a region of interest and virtually recreate the sound that would be heard at any location in the environment.

In this article, we discuss techniques of acoustic source localization, challenges, and potential applications of both the source and sound field information.

Introduction

The very first application of acoustical source localization appeared more than 100 years ago, with the invention of “acoustic telescopes” to detect ships in foggy weather. An example of early sound source localization devices is shown in figure 1. The idea is that if the device is facing in the direction of a sound source, sounds collected at receivers arrive simultaneously and so reinforce each other when the mixed sound is heard.

Although more sophisticated localization techniques are available today, the phase reinforcement principle still commonly underlies modern localization devices. The need to mechanically rotate the device to search for directions of phase reinforcement led to signal processing of the sound measured at fixed microphone locations (figure 2) to recreate the effect of rotation without the need to actually move the array.

Acoustic source localization has been used not only for ship and vehicle detection but also location of dominant noise sources in machines (e.g., engines, vehicles, airplanes) to guide product noise control, target selection and interference rejection for communication devices or speech recognition processing, and condition monitoring for mechanical systems.

Furthermore, with the ability to estimate source strengths and the resulting sound field, localization approaches have been widely implemented in the acoustical design of audio devices and theater systems, noncontact measurements of vibration, and audio virtual reality systems, among other uses. With recent advances in machine learning, cloud computing, and on-chip electronics, acoustic source localization techniques are finding their future in an ever wider range of applications.

Acoustic Source Localization Techniques

There are two main types of acoustical visualization techniques: beamforming and holography. Generally speaking, beamforming focuses on finding source locations and holography on predicting sound fields.

In beamforming, sources are found by scanning all potential source locations and determining how likely it is that an actual source exists at each location, based on the strength of the combined microphone output signals.

In acoustical holography, sources are assumed to exist at all potential locations at the same time; the strength of each potential source is estimated by finding the best match to measurements. Once the source strengths are calculated, the sound field can be predicted by solving a straightforward sound propagation problem.

Examples of source localization and sound field visualization results from beamforming and acoustical holography are shown in figures 3 and 4. In figure 3, the red indicates locations where sound is generated by flow turbulence and interaction of the flow with the solid surface of an airfoil. Figure 4(a) shows sound radiating away from a refrigeration compressor; the red and blue designate the locations of positive and negative sound pressures (respectively) at one instant of time. Figure 4(b), showing the surface motion of the sidewall of a rotating tire at two different frequencies, clearly indicates the standing wave patterns that form in a rolling tire.

Acoustical Beamforming

The earliest microphone array beamforming technique, dating from the 1940s (e.g., Dolph 1946), is called delay-and-sum processing, which implements a principle similar to that of the mechanical localization devices mentioned above.

After assuming a “look” direction, the relative time differences due to sound propagation between each microphone signal and the signal at a chosen reference microphone can be determined. The likelihood of an actual source in the assumed look direction is calculated by delaying (i.e., compensating for) the relative time differences at all microphones on the array and then summing the aligned signals.

If the look direction coincides with the actual source direction, signals at all microphones will be in phase with the reference microphone signal, and thus reinforce and result in a strong total signal. When the look direction differs from the actual source direction, the signals will not be in phase, and the mutual cancellation will produce a weak output signal (Johnson and Dudgeon 1992).
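The delay-and-sum idea can be sketched numerically. Everything below — the 8-microphone line array, its spacing, the sampling rate, and the NumPy-based implementation — is an illustrative assumption rather than a setup from the article: a simulated plane wave arrives from 30°, each microphone signal is time-shifted to compensate for the propagation delay implied by a trial look direction, and the direction producing the strongest summed output is taken as the source direction.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, look_angle, fs, c=343.0):
    """Steer a line array toward look_angle (radians) by advancing each
    microphone signal to undo the plane-wave arrival delay, then summing."""
    delays = mic_positions * np.sin(look_angle) / c      # seconds, per microphone
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, tau in zip(signals, delays):
        # a compensating time advance of tau is a linear phase shift in frequency
        shifted = np.fft.irfft(np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau), n)
        out += shifted
    return out / len(signals)

# simulate a 1 kHz plane wave arriving from 30 degrees on an 8-mic line array
fs, c, f0 = 48_000, 343.0, 1000.0
mics = np.arange(8) * 0.04                 # 4 cm spacing (< half a wavelength)
t = np.arange(1024) / fs
true_angle = np.deg2rad(30.0)
arrivals = mics * np.sin(true_angle) / c   # per-microphone propagation delays
signals = np.array([np.sin(2 * np.pi * f0 * (t - tau)) for tau in arrivals])

# scan look directions and pick the one with the strongest summed output
angles = np.deg2rad(np.arange(-90, 91, 1))
power = [np.mean(delay_and_sum(signals, mics, a, fs) ** 2) for a in angles]
best = np.rad2deg(angles[int(np.argmax(power))])
```

Because the microphone spacing is below half a wavelength at 1 kHz, the output power peaks only when the look direction matches the true arrival direction, exactly as the reinforcement argument above predicts.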

Many more advanced beamforming techniques have since been developed. They mainly differ in the choice of the performance criteria to be maximized or minimized (Chiariotti et al. 2019). Mathematically, the traditional delay-and-sum technique minimizes the source strength prediction error in the assumed look direction (or location) and thus ensures that the beamforming produces the best prediction when the look direction is “correct.”

A variant of the beamforming technique, the minimum variance distortionless response method, imposes a constraint that requires it to output the exact (or distortionless) source strength when the look location is “correct.” At the same time, it minimizes the total output signal power, so the response when the look direction is “wrong” is minimized.

Another variant, the linearly constrained minimum variance beamforming method, places further constraints on the formulation: it can guarantee, for example, zero response to sources at known locations or to sound from known reflecting surfaces, thereby removing interference caused by previously known sources.

Acoustical Holography

Acoustical holography methods usually aim to predict the sound field at an arbitrary location based on acoustical measurements at a number of locations (i.e., at the array microphones), rather than identifying only a source direction. Almost all holography techniques rely on the use of a number of basic virtual sources to represent any possible sound field in the prediction region. The undetermined source strengths and other parameters are estimated by minimizing the discrepancy between the predicted sound field and the measured sound field at the microphone locations.

Acoustical holography methods differ in the choice of virtual sources used to represent the sound field and the methods used to estimate the source parameters.

Spatial Fourier Transform, Least Squares, and Spherical Waves

The first method was based on a sound field representation by plane waves, which are the general solutions of the wave equation in Cartesian coordinates. The calculation of source strengths in the earliest holography methods used a discrete spatial Fourier transform of data measured by a rectangular array with equally spaced microphones. This method was later extended to cylindrical and spherical coordinates (Williams 1999).

Since a Fourier transform is mathematically equivalent to minimizing the squared error between the predicted and measured sound pressures at the measurement locations, the source strengths in the Fourier-based holography methods can be calculated by a direct use of least squares optimization.

The least squares approach has been widely used in another category of holography methods, the equivalent source method (Ochmann 1995), which is now the most commonly used holography technique.

The beamforming method can remove
interference caused by known sources.

The first equivalent source method was developed based on a representation using a distribution of monopoles (i.e., point sources) on an imaginary surface located just behind the actual source surface. The monopole distribution representation is based on the theory of single-layer potential representations of the wave equation. That representation can be extended to include dipole distribution representations (i.e., a double-layer potential) as well as methods employing a combination of monopole and dipole distributions. The latter approach, referred to as inverse boundary element holography, is derived from the boundary integral form of the wave equation.
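A toy version of the monopole-based equivalent source method can be sketched as follows. The geometry, frequency, and grid sizes are assumptions for illustration, and the "measured" data are generated from a single monopole placed on the equivalent-source grid itself, so that the representation can match the field exactly: strengths are fitted by least squares and the field is then predicted at a new point.

```python
import numpy as np

c, f = 343.0, 1000.0
k = 2 * np.pi * f / c                      # acoustic wavenumber

def green(receivers, sources):
    """Free-space Green's function matrix: pressure at each receiver
    due to a unit monopole at each source location."""
    r = np.linalg.norm(receivers[:, None, :] - sources[None, :, :], axis=2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# equivalent sources: 5x5 monopole grid on the plane z = 0
gx, gy = np.meshgrid(np.linspace(-0.2, 0.2, 5), np.linspace(-0.2, 0.2, 5))
eq_src = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(25)])

# measurement array: 8x8 microphones on the plane z = 0.1 m
mx, my = np.meshgrid(np.linspace(-0.25, 0.25, 8), np.linspace(-0.25, 0.25, 8))
mics = np.column_stack([mx.ravel(), my.ravel(), np.full(64, 0.1)])

# "measured" pressures from a unit monopole at the center grid point
true_idx = 12
p_meas = green(mics, eq_src[[true_idx]]).ravel()

# estimate equivalent source strengths by least squares fit to the data
q, *_ = np.linalg.lstsq(green(mics, eq_src), p_meas, rcond=None)

# predict the field at a new point and compare with the exact value
field_pt = np.array([[0.05, -0.03, 0.3]])
p_pred = (green(field_pt, eq_src) @ q)[0]
p_true = green(field_pt, eq_src[[true_idx]])[0, 0]
rel_err = abs(p_pred - p_true) / abs(p_true)
```

With noisy data or a denser equivalent grid, the plain least squares step would need regularization — which is precisely where the limitations discussed below arise.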

It is also possible to use spherical waves to construct orthogonal velocity distributions on the actual source surface and use these distributions as equivalent sources in holography.

In addition to these various field representations derived from the mathematical properties of the governing equation, there are equivalent source methods involving representation bases with stronger physical meanings, such as multipole series, structural vibration modes, and acoustic radiation modes. Holography methods that employ bases with clearer physical meanings usually yield better modeling efficiency since the actual source generation mechanism is closely represented by the assumed basis functions.

Most source localization methods have been developed for anechoic environments, a condition not likely to hold in practice.

Limitations

Some source information either cannot be detected by measurements or is very sensitive to measurement noise (Nelson and Yoon 2000). Problems in estimating source information from measurements in acoustical holography may result from one or more of the following three possibilities:

  • The number of sources that need to be included in the model is larger than the number of microphones available (e.g., a model with monopoles distributed over a large source surface).
  • The resulting sound field decays very quickly when propagating away from the source, so that when the sound pressure due to these source components arrives at the array, it is close to or even smaller than the measurement noise.
  • There are too many closely located (i.e., oversampled) measurements (several measurement locations capture the same information), leading mathematically to an ill-conditioned problem in source estimation.
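The third difficulty can be demonstrated directly. Using a hypothetical monopole transfer matrix (all dimensions below are illustrative assumptions), cramming the same number of microphones into a tiny span makes the measurements nearly identical, and the condition number of the source estimation problem explodes:

```python
import numpy as np

c, f = 343.0, 1000.0
k = 2 * np.pi * f / c

def transfer(mic_x, src_x, standoff=0.2):
    """Monopole transfer matrix from line sources at z=0 to line mics at z=standoff."""
    r = np.sqrt((mic_x[:, None] - src_x[None, :]) ** 2 + standoff ** 2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

src = np.linspace(0.0, 0.35, 8)            # 8 monopole sources along a line

# well-spread array: 16 microphones over 0.5 m
spread = np.linspace(0.0, 0.5, 16)
# oversampled array: the same 16 microphones crammed into a 2 mm span,
# so neighboring microphones capture nearly identical information
clustered = 0.25 + np.linspace(0.0, 0.002, 16)

cond_spread = np.linalg.cond(transfer(spread, src))
cond_clustered = np.linalg.cond(transfer(clustered, src))
```

A large condition number means that tiny measurement noise is amplified enormously in the estimated source strengths, which is why oversampled arrays can perform worse than smaller, well-spread ones.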

Challenges of Existing Methods

Each of the many acoustic localization and visualization techniques involves, at every step, choices of parameters and calculation procedures. A different choice or a modification of any of them can change the localization and visualization performance, and the performance differences among various methods can be quite large.

Lack of Guidelines

One significant challenge in practical applications is that, at present, nothing is clearly better than “try it and see” when judging which methods are most suitable for a specific engineering application. Researchers and engineers choose method types and select and modify various model components according to their experience and their understanding of the source mechanism.

Although there are some general guidelines, such as choosing a source basis similar to the actual source in terms of both physical mechanism and geometrical dimensions, they are far from systematic. Also, most source localization methods have been developed for sources in anechoic (i.e., echo-free) environments, a condition not likely to hold in practice. Thus, another application difficulty is the removal of the influence of reflections and scattering from the actual measurement environment.

Stationarity vs. Moving or Transient Sources

Most acoustic localization methods assume stationarity, meaning that the sound field characteristics do not change over time, a requirement that follows from their frequency domain formulation. This assumption prevents localization techniques from being implemented for either transient sources (e.g., explosions) or moving sources.

Some holographic procedures are based on time-domain convolution. But time-domain source localization is an area that requires further research to find routine application in practice.

When considering the localization of moving sources (e.g., high-speed trains), visualization when the motion is known is much easier than when it is unknown. When the source trajectory is given, there are several options: remove the Doppler effect on the stationary array measurements, incorporate the sound field expression of moving sources in the holography or beamforming models, or virtually construct measurement signals that would be obtained if the array had the same motion as the source. However, localization of sources with unknown motions without additional motion capture tools is still an open issue where few practical solutions are available.

Hardware: Cost, Interference

To ensure good localization performance, most methods require many microphones—perhaps hundreds—and the cost of this measurement hardware limits the application of these techniques.

In addition, any object placed in a sound field, even a relatively small microphone, perturbs the field that is to be measured; the scattering from the array, its mounting system, and its wiring contaminates the microphone measurements and thus impairs source localization performance. The problem is made worse by the support structure that holds the microphones in position and the cabling that connects dozens or hundreds of microphones to the data acquisition system. There is thus an incentive to make sound pressure measurement methods as noninvasive as possible.

At present, MEMS (microelectromechanical systems) microphones are being widely adopted to minimize scattering effects. These microphones-on-a-chip can be very small compared to conventional condenser-type microphones. Further, their power requirements are very modest. It may be that the vibration induced by the sound pressure itself could power the microphones, eliminating the need for external power supplies. Transmission of the signal to the data acquisition system could conceivably be done wirelessly, completely eliminating the need for cables.

These steps would represent a very positive improvement. But in the end, it would be desirable to eliminate the microphones entirely in favor of a completely noninvasive measurement procedure.

Future Applications and Potential Connection with Other Techniques

Fault Diagnostics

One application of acoustic localization or visualization techniques that is likely to attract wide attention in both industry and academia is implementation in online condition monitoring and fault diagnostics of mechanical systems, such as wind turbines and various mechanical subsystems in vehicles.

A reliable online diagnostics system requires “clean” data to be obtained from each potentially faulty component without interrupting the machine operation. But it is often difficult or impossible to apply sensors directly to components in every product. It is preferable to use acoustical data measured at microphones a certain distance from the targeted mechanical component and then use source localization techniques as a virtual sensing tool to measure vibroacoustic information from each component. Acoustic measurements are usually more cost effective than alternatives such as optical measurements.

The primary stimulus of the study of sound source localization in fault diagnostics is the emergence of machine learning (ML) research and applications. There could be many valuable outcomes from investigating which ML algorithms are more applicable in fault diagnostics, what signal characteristics are more suitable for training ML models, and how to develop source localization techniques to better obtain those signal characteristics. The development of autonomous vehicles, for example, raises the demand for improved fault diagnostic tools for a variety of mechanical systems to ensure safety and driving comfort.

Autonomous vehicles raise the demand for improved fault diagnostic tools for mechanical systems to ensure safety and driving comfort.

Virtual Noise Control Design and Active Noise Control

An attractive aspect of the holographic procedure is its application in virtual noise control design. Since the holographic procedure involves the creation of a source plane, it supports, for example, the creation of a virtual stethoscope: it is possible to synthesize, and thus listen to, sound generated at any point on the source surface. The synthesized signals can be analyzed, using sound quality software, to identify regions from which objectionable sounds are radiating.

Further, the sound radiation from particular parts of a complex source can be virtually “turned off” to gauge the noise control impact from either reducing the sound generated by a particular component or applying a shield. In this way, it would be possible to both quantify the potential reduction in radiated sound power and assess the impact of changes on the perceived sound quality. Engineers would be able to accurately assess the relative benefit of different noise control solutions, thus potentially eliminating expensive prototyping stages and significantly speeding up the design and development process.

Source visualization techniques can also provide benefits to active noise control: i.e., the use of secondary noise sources to generate sound fields that cancel unwanted sounds. Currently, the increasing on-chip computing power and reductions in electronic hardware cost make it desirable to perform multichannel active control over a relatively large spatial region. However, an active control algorithm can sense the noise control performance as measured by error microphones only at ­specific locations in the target region; good noise control performance at those microphone locations does not guarantee good performance at other locations in the region.

If techniques such as acoustical holography can be combined with active noise control systems, it will be possible to provide virtual measurements of the sound field throughout the target region, and these measurements can be used in the active control algorithm. In principle, such an active noise control system will give much better global performance than traditional methods.
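The local-versus-global issue can be illustrated with a toy frequency-domain example; the monopole model and all positions are assumptions for illustration. Three secondary monopoles are driven to cancel a primary monopole's field exactly at three error microphones, yet a substantial residual remains elsewhere in the target region:

```python
import numpy as np

c, f = 343.0, 500.0
k = 2 * np.pi * f / c

def green(receivers, sources):
    """Pressure at each receiver due to unit monopoles at each source (free space)."""
    r = np.linalg.norm(receivers[:, None, :] - sources[None, :, :], axis=2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

primary = np.array([[0.0, 0.0, -1.0]])                 # unwanted noise source
secondary = np.array([[-0.3, 0.0, 0.0], [0.0, 0.3, 0.0], [0.3, 0.0, 0.0]])

# three error microphones inside the target region at z = 0.5 m
err_mics = np.array([[-0.2, 0.1, 0.5], [0.1, -0.1, 0.5], [0.2, 0.2, 0.5]])

# choose secondary source strengths that exactly cancel the primary
# field at the error microphones (3 sources, 3 constraints)
q = np.linalg.solve(green(err_mics, secondary),
                    -green(err_mics, primary).ravel())

# evaluate the residual field over a denser grid spanning the target region
gx, gy = np.meshgrid(np.linspace(-0.4, 0.4, 9), np.linspace(-0.4, 0.4, 9))
grid = np.column_stack([gx.ravel(), gy.ravel(), np.full(81, 0.5)])
residual = np.abs(green(grid, primary).ravel() + green(grid, secondary) @ q)
at_mics = np.abs(green(err_mics, primary).ravel() + green(err_mics, secondary) @ q)
```

Cancellation is essentially perfect at the error microphones but far from perfect over the rest of the grid — which is why holographic virtual measurements spread over the whole region could improve global control performance.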

Looking Ahead

In recent years, it has been shown that the small fluctuations in air density that result from the passage of sound waves can be detected with laser systems (Sonoda and Nakamiya 2014). This observation offers a very promising path forward. At present, this procedure cannot be used to make a precisely localized measurement, but perhaps through the use of multiple laser beams interacting at a point, the density and sound pressure fluctuation could be detected at an array of points, whether sequentially, by scanning, or simultaneously with a sufficient number of laser channels. And further in the future, it may be possible to measure sound pressure over an entire planar surface at once by using pulsed laser sheets (by analogy to double-pulse particle-image velocimetry).

Although these suggestions may seem like science fiction, we are sure that ultimately it will be possible to make noninvasive sound pressure measurements over large surfaces, thus providing very finely resolved data as input to holographic and beamforming procedures.

References

Chiariotti P, Martarelli M, Castellini P. 2019. Acoustic beamforming for noise source localization: Reviews, methodology and applications. Mechanical Systems and Signal Processing 120:422–48.

Dolph CL. 1946. A current distribution for broadside arrays which optimizes the relationship between beam width and side-lobe level. Proceedings of the IRE 34(6):335–48.

Geyer T, Sarradj E, Giesler J. 2012. Application of a beamforming technique to the measurement of airfoil leading edge noise. Advances in Acoustics and Vibration 2012:905461.

Johnson DH, Dudgeon DE. 1992. Array Signal Processing: Concepts and Techniques. New York: Simon & Schuster.

Nelson PA, Yoon SH. 2000. Estimation of acoustic source strength by inverse methods, part I: Conditioning of the inverse problem. Journal of Sound and Vibration 233(4):639–64.

Ochmann M. 1995. The source simulation technique for acoustic radiation problems. Acta Acustica united with Acustica 81(6):512–27.

Sonoda Y, Nakamiya T. 2014. Direct detection of sound wave by light. IEEE 3rd Global Conf on Consumer Electronics, Oct 7–10, Tokyo.

Williams EG. 1999. Fourier Acoustics: Sound Radiation and Nearfield Acoustical Holography. London: Academic Press.

 


[1]  Depending on the specific application, these methods may be denoted by different terms, e.g., sound source visualization, sound field reconstruction or visualization, or acoustic array techniques.

About the Authors: Yangfan Liu is an assistant professor and J. Stuart Bolton a professor in the Department of Mechanical Engineering at Purdue University, where Patricia Davies is also a professor and a former director of Ray W. Herrick Laboratories.