Open an interactive online version by clicking the Binder badge or download the notebook.

Pyfar Introduction#

The Python packages for Acoustics Research (pyfar) aim at providing methods and functionality for acoustics research in a unified and comprehensive framework. Pyfar is developed with a strong focus on:

  • Online documentation.

  • Automated testing.

  • Unrestricted use through the liberal MIT or CC-BY open source licenses.

The pyfar base package contains classes and functions for the acquisition, inspection, and processing of audio signals. For ease of maintainability and robustness to future changes, applied or specific features are split into sub-packages which extend the base package. These currently include:

  • sofar: SOFA file reader and writer functionality.

  • spharpy: Spherical array signal processing.

  • pyrato: Tools for room acoustic analysis.

Please refer to pyfar.org for an up-to-date list of packages and their intended use cases.

We develop all packages openly across diverse research institutions, avoiding general branding of code or functionality. Contributions in any form (e.g., reporting issues, submitting bug fixes, or implementing new features) are welcome. To get in touch, either contact us via

Example: Measurement Based Equalization Filter#

In the following, the creation of a measurement-based equalization filter serves as an example showcasing pyfar’s functionality, namely

Subsequently, the example of a head-related impulse response will briefly introduce

Audio Data#

Audio data are the basis of pyfar, and most data will be stored as Signal objects. Signals are intended for equidistantly sampled time and frequency data. Examples of this are audio recordings and impulse responses, or the corresponding spectra. In contrast, TimeData and FrequencyData objects are intended to store incomplete audio data, such as (third-)octave band data or results from numerical simulations. A more detailed introduction to pyfar audio objects can be found here.

Signals can be created in different ways. One possibility is to use the pyfar.signals module. It contains functions for creating common signals like impulses, pure tone sine signals, swept sinusoids, and noise. Let’s create an impulse with a length of 32 samples.

[2]:
impulse = pf.signals.impulse(32, delay=2)

The underlying time and frequency data of the Signal object are stored as numpy arrays. We can access the time data with the time attribute.

[3]:
impulse.time
[3]:
array([[0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])

Accessing the frequency data with the freq attribute will automatically compute the spectrum using the Fast Fourier Transform.

[4]:
impulse.freq
[4]:
array([[ 1.        +0.j        ,  0.92387953-0.38268343j,
         0.70710678-0.70710678j,  0.38268343-0.92387953j,
         0.        -1.j        , -0.38268343-0.92387953j,
        -0.70710678-0.70710678j, -0.92387953-0.38268343j,
        -1.        -0.j        , -0.92387953+0.38268343j,
        -0.70710678+0.70710678j, -0.38268343+0.92387953j,
         0.        +1.j        ,  0.38268343+0.92387953j,
         0.70710678+0.70710678j,  0.92387953+0.38268343j,
         1.        +0.j        ]])
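The output above can be verified with plain NumPy: for a real-valued signal, the freq attribute holds the single-sided spectrum, and a pure delay of d samples has unit magnitude and the linear phase exp(-j·2π·k·d/N). A minimal sketch (NumPy only, independent of pyfar):

```python
import numpy as np

n_samples, delay = 32, 2
impulse = np.zeros(n_samples)
impulse[delay] = 1.0

# single-sided spectrum, analogous to the Signal's freq attribute
spectrum = np.fft.rfft(impulse)

# a pure delay of d samples: unit magnitude, linear phase exp(-j*2*pi*k*d/N)
k = np.arange(n_samples // 2 + 1)
expected = np.exp(-2j * np.pi * k * delay / n_samples)
assert np.allclose(spectrum, expected)
```

This reproduces the 17 complex values printed above, whose phase rotates twice around the unit circle because of the two-sample delay.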

Interactive Plotting#

Let’s have a look at the time signal and spectrum using the pyfar.plot module. The plot functions are based on matplotlib, but adjusted for convenient use in acoustics (e.g., logarithmic frequency axis, magnitudes in dB). Pyfar also provides convenient interactive keyboard shortcuts. These allow switching plots, zooming in and out, moving along the x- and y-axis, and zooming and moving the range of colormaps.

To do this, you need to use an interactive matplotlib backend. In Jupyter notebooks you can, for example, use the matplotlib magic %matplotlib ipympl, which was already executed at the top of this notebook.

A list of available shortcuts is found in the documentation or by executing the following function.

All following figures can be used as interactive examples. When the plot does not respond to the shortcuts, try clicking the figure to activate these for the figure.

[5]:
plt.figure()
pf.plot.time(impulse, unit='ms')
[5]:
<Axes: xlabel='Time in ms', ylabel='Amplitude'>

Import/Export#

Signals are often stored on disk. The pyfar.io module can read and write common audio file types such as wav and mp3. In addition, it can import SOFA files and data simulated with COMSOL Multiphysics, and it allows saving and reading workspace data, including pyfar objects.

This is how to load the room impulse response (RIR) from a wav file, which we will use to create an equalization filter in the following.

[6]:
rir = pf.io.read_audio(os.path.join(
    '..', '..', 'resources', 'impulse_response_measurement.wav'))
[7]:
plt.figure()
ax = pf.plot.time(rir)

Digital Signal Processing#

The pyfar.signals.files module provides signals such as speech, castanets, drums, and guitar snippets, e.g., to have more natural signals for a quick demo of an audio system. Here, we choose a short anechoic guitar sample to simulate playback at the measured listener position by convolving it with the RIR. Because the sampling rates of the guitar sample and RIR do not match, the guitar signal is resampled first.

[8]:
guitar = pf.signals.files.guitar()
guitar = pf.dsp.resample(guitar, rir.sampling_rate)
reverberant_guitar = pf.dsp.convolve(guitar, rir)
Loading guitar data. This is only done once.
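The playback simulation above is a linear convolution of the dry signal with the RIR. A minimal NumPy sketch with toy stand-ins for the guitar sample and the RIR (illustration only; pf.dsp.convolve performs the same operation, FFT-based, directly on Signal objects):

```python
import numpy as np

# toy stand-ins for the guitar sample and the RIR (illustration only)
rng = np.random.default_rng(0)
guitar_toy = rng.standard_normal(1000)
rir_toy = np.zeros(200)
rir_toy[0], rir_toy[50] = 1.0, 0.5  # direct sound plus one reflection

# full linear convolution lengthens the signal by len(rir) - 1 samples
wet = np.convolve(guitar_toy, rir_toy)
assert wet.shape[0] == guitar_toy.shape[0] + rir_toy.shape[0] - 1
```

Each output sample is the dry sample plus a scaled copy delayed by the reflection, which is exactly the reverberant coloration we will equalize below.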

The reverberant signal can also be listened to using the Jupyter Audio widget.

[9]:
Audio(reverberant_guitar.time, rate=guitar.sampling_rate)
[9]:

Typically, it is not feasible to equalize for the exact frequency response of a room, as notches and peaks are sensitive to local variations of the receiver position. A more sensible approach is to use a smoothed magnitude spectrum for room equalization.

[10]:
# 3rd octave smoothing of the RIR
rir_smoothed = pf.dsp.smooth_fractional_octave(rir, 3)[0]

# inspect original and smoothed RTF
plt.figure()
ax = pf.plot.freq(rir, dB=True, label='Original RTF', color='grey')
pf.plot.freq(rir_smoothed, dB=True, label='Smoothed RTF')
ax.legend()
[10]:
<matplotlib.legend.Legend at 0x7f8ea9f7e020>
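Conceptually, fractional-octave smoothing averages the magnitude spectrum over a band of constant relative width around each frequency bin, so the averaging window widens towards high frequencies. A crude NumPy sketch of this idea (the function name and the simple rectangular averaging are assumptions for illustration; pyfar's algorithm differs in detail):

```python
import numpy as np

def smooth_fractional_octave(magnitude, freqs, num_fractions=3):
    """Crude 1/num_fractions-octave smoothing: average each bin over a band
    of constant relative width (a sketch, not pyfar's exact algorithm)."""
    half_width = 2 ** (1 / (2 * num_fractions))
    smoothed = magnitude.copy()
    for i, f in enumerate(freqs):
        if f > 0:
            band = (freqs >= f / half_width) & (freqs <= f * half_width)
            smoothed[i] = magnitude[band].mean()
    return smoothed

# smoothing a flat spectrum leaves it unchanged
freqs = np.linspace(0, 22050, 1025)
flat = np.ones_like(freqs)
assert np.allclose(smooth_fractional_octave(flat, freqs), 1.0)
```

Because the band grows with frequency, narrow notches at high frequencies are averaged away while the broad spectral envelope is preserved, which is what the smoothed RTF in the plot shows.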

The FIR equalization filter can be generated from the inverse magnitude spectrum in a straightforward manner. Due to the roll-off below 80 Hz and above approximately 18 kHz, regularization outside these limits is required to avoid excessive noise amplification. These frequency limits are passed to the inversion function. Outside the desired frequency range, a regularization value is chosen such that the resulting spectrum does not show roll-offs, steep slopes, ringing, or a long filter decay.

[11]:
frequency_range = [100, 18e3]
rir_inv = pf.dsp.regularized_spectrum_inversion(
    rir_smoothed, frequency_range, regu_outside=5e-2)

# inspect smoothed and inverted RTF
plt.figure()
ax = pf.plot.freq(rir_smoothed, dB=True, label='Smoothed RTF', color='grey')
pf.plot.freq(rir_inv, dB=True, label='Inverted RTF')
# indicate frequency range of inversion by dashed vertical lines
ax.axvline(frequency_range[0], color='black', linestyle='--')
ax.axvline(frequency_range[1], color='black', linestyle='--')
ax.legend()
[11]:
<matplotlib.legend.Legend at 0x7f8ea9d033d0>
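The principle behind the regularized inversion is to divide by the spectrum while a frequency-dependent regularization term limits the gain outside the passband, in the style of Kirkeby inversion: H⁻¹ = H* / (|H|² + ε). A NumPy sketch (the function name and the exact scaling of ε are assumptions; pyfar's implementation differs in detail):

```python
import numpy as np

def regularized_inversion(spectrum, freqs, freq_range, regu_outside=5e-2):
    """Kirkeby-style regularized inversion sketch: H_inv = conj(H) / (|H|^2 + eps),
    with eps = 0 inside freq_range and a constant outside (not pyfar's exact code)."""
    inside = (freqs >= freq_range[0]) & (freqs <= freq_range[1])
    eps = np.where(inside, 0.0, regu_outside * np.max(np.abs(spectrum)) ** 2)
    return np.conj(spectrum) / (np.abs(spectrum) ** 2 + eps)

# toy transfer function: unit gain in the passband, strong roll-off outside
freqs = np.linspace(0, 22050, 1025)
frequency_range = (100, 18e3)
inside = (freqs >= frequency_range[0]) & (freqs <= frequency_range[1])
H = np.where(inside, 1.0, 0.01).astype(complex)
H_inv = regularized_inversion(H, freqs, frequency_range)
assert np.allclose(H[inside] * H_inv[inside], 1.0)  # exact inversion inside
assert np.max(np.abs(H_inv[~inside])) < 1.0         # gain limited outside
```

Without the regularization term, the inverse would apply a gain of 100 (40 dB) in the roll-off regions, amplifying nothing but noise; with it, the out-of-band gain stays bounded.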

Since the smoothing returns a zero-phase filter, the phase response needs to be reconstructed to obtain a causal filter. Here, this is achieved by imposing minimum phase. To avoid amplitude distortion of the filtered signals, the filter is further normalized.

[12]:
rir_inv = pf.dsp.normalize(rir_inv)
rir_inv_min_phase = pf.dsp.minimum_phase(rir_inv)

# inspect the minimum phase RIR with logarithmic y-axis
plt.figure()
pf.plot.time(rir_inv_min_phase, dB=True)
[12]:
<Axes: xlabel='Time in s', ylabel='Amplitude in dB'>
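A minimum-phase filter keeps the given magnitude spectrum while concentrating its energy as early in time as possible. One classic construction is the homomorphic (real-cepstrum) method: fold the anti-causal part of the cepstrum of the log-magnitude onto the causal part and exponentiate. A sketch of that method (pyfar's implementation may differ):

```python
import numpy as np

def minimum_phase_from_magnitude(magnitude, n_samples):
    """Homomorphic (real-cepstrum) minimum-phase reconstruction from a
    single-sided magnitude spectrum (a sketch, not pyfar's implementation)."""
    log_mag = np.log(np.maximum(magnitude, 1e-12))
    cepstrum = np.fft.irfft(log_mag, n_samples)
    # fold the anti-causal part of the cepstrum onto the causal part
    fold = np.zeros(n_samples)
    fold[0] = 1.0
    fold[1:(n_samples + 1) // 2] = 2.0
    if n_samples % 2 == 0:
        fold[n_samples // 2] = 1.0
    min_phase_spectrum = np.exp(np.fft.rfft(fold * cepstrum, n_samples))
    return np.fft.irfft(min_phase_spectrum, n_samples)

# the reconstructed impulse response keeps the prescribed magnitude
n = 64
mag = np.exp(0.3 * np.cos(2 * np.pi * np.arange(n // 2 + 1) / n))
h = minimum_phase_from_magnitude(mag, n)
assert np.allclose(np.abs(np.fft.rfft(h)), mag)
```

For a flat magnitude this returns a single impulse at sample zero, which is why the plotted minimum-phase RIR above decays immediately from its start.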

Finally, a time window is applied to crop the RIR to a typical FIR equalization filter length of 2048 samples.

[13]:
# apply time window and inspect the result
equalization = pf.dsp.time_window(
    rir_inv_min_phase, [1900, 2048], shape='right', crop='end', unit='samples')


plt.figure()
pf.plot.time(equalization, dB=True, unit='ms')
[13]:
<Axes: xlabel='Time in ms', ylabel='Amplitude in dB'>
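The effect of a right-sided window with end cropping can be sketched in NumPy: the signal is left untouched up to the fade start, faded to zero at the fade end, and truncated there. The function name and the raised-cosine fade shape below are assumptions for illustration, not pyfar's exact implementation:

```python
import numpy as np

def fade_out(signal, fade_start, fade_stop):
    """One-sided raised-cosine fade-out between fade_start and fade_stop,
    cropped at fade_stop (a sketch of a right-sided window with end
    cropping, not pyfar's exact implementation)."""
    n_fade = fade_stop - fade_start
    window = np.ones(fade_stop)
    window[fade_start:] = 0.5 * (1 + np.cos(np.pi * np.arange(n_fade) / (n_fade - 1)))
    return signal[..., :fade_stop] * window

# window a constant signal: unity before the fade, zero at the cropped end
sig = np.ones(4096)
windowed = fade_out(sig, 1900, 2048)
assert windowed.shape[-1] == 2048
```

Fading to zero instead of hard truncation avoids the spectral ripple that an abrupt cut of the decaying filter tail would introduce.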

Filter Classes#

Pyfar implements filters as encapsulated class objects, similar to the Signal class. Filter objects support storing sets of coefficients for FIR, IIR, as well as second order section IIR (SOS) filters. They further facilitate piece-wise processing of audio signals while storing the filter state. Many digital filters with various use-cases are already available. For these filters, convenience functions exist as well, combining creation and application of a filter to a signal into a single function call.
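The benefit of storing the filter state can be illustrated with a minimal NumPy class (SimpleFIR is a hypothetical name for illustration, not part of pyfar's API): because the tail of each block is remembered, processing a signal block by block yields exactly the same result as filtering it in one pass.

```python
import numpy as np

class SimpleFIR:
    """Minimal stateful FIR filter illustrating the idea behind a filter
    object's process method (hypothetical, not pyfar's implementation)."""

    def __init__(self, coefficients):
        self.b = np.asarray(coefficients, dtype=float)  # filter order >= 1 assumed
        self.state = np.zeros(self.b.size - 1)          # filter memory

    def process(self, block):
        # prepend the remembered tail of the previous block, then convolve
        padded = np.concatenate((self.state, block))
        self.state = padded[-(self.b.size - 1):]
        return np.convolve(padded, self.b, mode='valid')


# block-wise processing matches filtering the whole signal at once
b = [0.5, 0.3, 0.2]
x = np.random.default_rng(1).standard_normal(100)
fir = SimpleFIR(b)
y_blocks = np.concatenate([fir.process(x[:40]), fir.process(x[40:])])
assert np.allclose(y_blocks, np.convolve(x, b)[:100])
```

This piece-wise processing is what makes filter objects suitable for real-time style streaming, where audio arrives in blocks.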

Here, a pyfar.FilterFIR object is created manually using the previously created equalization filter.

[14]:
eq_filter = pf.FilterFIR(equalization.time, equalization.sampling_rate)
eq_filter
[14]:
2048th order FIR filter with 1 channel @ 44100 Hz sampling rate

The equalization filter is applied to the guitar sample using the filter object’s process method.

[15]:
reverberant_guitar_equalized = eq_filter.process(reverberant_guitar)

The equalized audio signal can again be auralized using Jupyter’s Audio widget.

[16]:
Audio(reverberant_guitar_equalized.time, rate=guitar.sampling_rate)
[16]: