FRidh's blog

Discovery consists of seeing what everybody has seen and thinking what nobody has thought -- Albert Szent-Gyorgyi

Research visit to NASA Langley Research Center

In March I was at NASA Langley Research Center in Hampton, Virginia, for a four-week research visit. The goal of the visit was to exchange knowledge on auralization of aircraft and atmospheric turbulence modeling. During this period we improved a model I had been working on and demonstrated that this model can be incorporated into the NASA Auralization Framework (NAF).

I was staying with a nice family in the town of Poquoson. While I grew up near the coast, I had never lived this close to the water before. I like the strong wind and the smell of salt. Living in this part of the United States was an interesting experience, quite unlike the places I've lived before.

SONORUS update

It's been quite a while since I've written on this blog about SONORUS, or anything else for that matter. A lot has happened since then.

In October and November I was at Chalmers again for six weeks, taking some courses. Furthermore, we had the school on Auralization and Visualization, as well as a workshop around the Gothenburg test site.

After the research visit I attended NixCon in Berlin. NixCon was a conference about the Nix package manager and the ecosystem around it. Ever since I found out about Nix, it has fascinated me. Last summer I began using Nix, and in autumn I switched to NixOS, an operating system entirely managed with Nix.

In January we had another SONORUS workshop, this time in Rome. It was my first time in Rome and I was glad to finally see the Colosseum as well as the Apostolic Palace.

In March I went to the U.S. for a four-week research visit to NASA Langley Research Center. I wrote more about that in another post.

And just a couple of days ago we had another workshop, this time in Antwerp, Belgium.

The SONORUS project is almost over. In September there will be a final meeting in Munich. More about that later...

Synthesis of aircraft emission and propagation

This is a repost of a post of mine at the SONORUS blog.

In an earlier post I gave an outline of my project and mentioned that I was working on an aircraft emission model for auralization. Since then, I have developed a fully automated method to extract features from aircraft recordings.

The method is roughly as follows:

  • Backpropagate from receiver to source in the time domain, undoing the Doppler shift, atmospheric attenuation and spreading. The ground effect is ignored for now. The result is a signal that roughly corresponds to what the aircraft emits.
  • Determine the fundamental frequency. An aircraft spectrum consists mostly of noise and tones, and the tones are mostly harmonics. Knowing the fundamental frequency allows you to determine the power not only of the fundamental, but also of each harmonic.
  • Determine the power of the tones, and consider the rest of the spectrum as noise; that is the final step. A minimal sketch of the last two steps follows after this list.
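
To make the last two steps a bit more concrete, here is a minimal sketch in Python. This is not the actual implementation: the Hann window, the simple peak-picking estimate of the fundamental and the fixed bandwidth around each harmonic are all assumptions for illustration.

import numpy as np

def harmonic_features(frame, fs, f0_max=400.0, n_harmonics=20, width=5.0):
    """Estimate the fundamental frequency, the power of each harmonic,
    and the residual noise power of one frame of the emission signal."""
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window))**2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    # Crude estimate of the fundamental: the strongest peak below f0_max.
    low = (freqs > 20.0) & (freqs < f0_max)
    f0 = freqs[low][np.argmax(spectrum[low])]
    # Sum the power in a narrow band around each harmonic.
    tonal = np.zeros(len(freqs), dtype=bool)
    powers = []
    for k in range(1, n_harmonics + 1):
        band = np.abs(freqs - k * f0) < width
        powers.append(spectrum[band].sum())
        tonal |= band
    # Everything outside the harmonic bands is considered noise.
    noise = spectrum[~tonal].sum()
    return f0, np.array(powers), noise

In practice the signal is split into short frames, so that the features vary over time.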

The plan is now to determine these features for a large number of events and develop a basic emission model. But before that, it's important to ask the following question: are these features sufficient to create a realistic emission signal? Or, even better, do these features, taking into account the developed propagation model, provide a realistic signal at the receiver? Does what you hear really sound like an aircraft? In the end, that's what we're after.

Synthesis of emission

To test whether the obtained features are sufficient to produce a realistic auralization, I considered a couple of events and extracted the features for each. I synthesized an emission signal by summing all these time-varying components (about 200 tones and 30 1/3-octave bands), linearly interpolating the samples (features were obtained once per second) and smoothing the interpolated components.
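
As a rough illustration of how such a synthesis can work, the sketch below sums harmonics whose per-second features are linearly interpolated to the audio rate, with the phase obtained by integrating the instantaneous frequency. The shapes and names are made up; the actual synthesis also includes the 1/3-octave noise bands and a smoothing step.

import numpy as np

def synthesize_tones(f0_per_second, amplitudes_per_second, fs=44100):
    """Sum harmonics whose features were extracted once per second."""
    n_seconds, n_harmonics = amplitudes_per_second.shape
    t_feature = np.arange(n_seconds)                 # feature times in seconds
    t_audio = np.arange(n_seconds * fs) / float(fs)  # audio sample times
    f0 = np.interp(t_audio, t_feature, f0_per_second)
    signal = np.zeros_like(t_audio)
    for k in range(1, n_harmonics + 1):
        amplitude = np.interp(t_audio, t_feature,
                              amplitudes_per_second[:, k - 1])
        # Integrate the instantaneous frequency of harmonic k to get its phase.
        phase = 2.0 * np.pi * np.cumsum(k * f0) / fs
        signal += amplitude * np.sin(phase)
    return signal

With the roughly 200 tones mentioned above, this directly yields the tonal part of the emission signal.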

Click here to listen to a synthesis of the emission. It's an Airbus A320 taking off from Zurich Airport.

In the first part of the fragment you can hear the tonal components quite clearly, but in the latter part noise takes over. This is due to the directivity of the components; tones generated by the fan blades radiate mostly forward.

For comparison, here is the backpropagated signal. You'll notice it sounds very different. This is because the (unwanted) ground effect is still in it, and because of that, it already sounds much more like an actual aircraft fly-over!

At the receiver

So, let's now listen at a receiver at a height of 4 meters above the ground. At the closest point, the distance to the aircraft is about 180 meters. Here are two fragments: fragment A and fragment B. One is a full auralization, the other is a recording. Can you hear which is which?

(hint: look at the URLs in your browser's address bar if you want to know for sure)

There are two quite noticeable differences between the recording and the auralization. This specific auralization does not include the effects of turbulence, and the level of the blade passing frequency is lower than it should be.

Swiss Air Force demonstrations in Duebendorf

The headquarters of the Swiss Air Force is located in Duebendorf, and the airstrip of the base is not that far from where I work. Often you can see paratroopers and helicopters or hear other aircraft take off.

Occasionally, one of the demonstration teams practices. So far I've only seen the PC-7 Team, which flies Swiss-manufactured Pilatus PC-7 turboprop trainers. These aircraft can be quite noisy. Actually, it's not that the sound is very loud; it's just that it can be annoying. The propellers have a strong directivity, and therefore, as the aircraft do their maneuvers, the level goes up and down.

They don't practice that often, and their sessions generally don't last more than a couple of hours, so it doesn't really bother me too much. If I hear them and it annoys me, I just close my window or turn the music up a bit. But what astonishes me is that they do these activities above densely populated areas. Many people live and work in the area. What if an accident occurs?

I realize there aren't many places in Switzerland where they could build airstrips and do this kind of training without flying over someone's home. But there is quite a difference between flying over a densely populated area like this and flying over one or two farms. I am now curious about their risk assessment and how they eventually decided to keep doing these activities here.

To conclude, these activities can be noisy, and perhaps also dangerous. However, on a more positive note, they also provide good entertainment during your break!

Development of an emission model

On June 2nd I will present a paper titled Determining an Empirical Emission Model for the Auralization of Jet Aircraft at the Euronoise conference in Maastricht, The Netherlands. My presentation will be in the Auralisation of urban sound session. The conference is just a couple of weeks away, and I am still analysing and gathering results. Nevertheless, I thought it would be nice to give a bit of insight into what I'm working on now and what I will present at Euronoise.

Emission model for auralizations

Currently I'm developing an emission model for jet aircraft that can be used for auralizations. Existing emission models for noise prediction generally predict sound pressure levels in 1/3-octave bands. This works for noise prediction; for auralization, however, a finer resolution is needed. Indeed, one needs to be able to model individual tones and, in certain cases, also modulations.
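
To illustrate why band levels alone are not enough, consider a pure tone and noise filtered to the 1/3-octave band around it: both can have exactly the same band level, yet they sound completely different. A small sketch (the filter order and the band are arbitrary choices for illustration):

import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
t = np.arange(fs) / float(fs)              # one second of audio
tone = np.sin(2.0 * np.pi * 1000.0 * t)    # a pure 1 kHz tone

# Noise filtered to the 1/3-octave band centered at 1 kHz
# (band edges at 1000 / 2**(1/6) Hz and 1000 * 2**(1/6) Hz).
edges = [1000.0 / 2.0**(1.0/6.0) / (fs / 2.0),
         1000.0 * 2.0**(1.0/6.0) / (fs / 2.0)]
sos = butter(4, edges, btype='bandpass', output='sos')
noise = sosfilt(sos, np.random.randn(len(t)))

# Scale the noise so both signals have the same band level.
noise *= np.sqrt(np.mean(tone**2) / np.mean(noise**2))

A 1/3-octave model cannot distinguish between these two signals, while our ears certainly can.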

At Empa we have in the past conducted measurements for the sonAIR project, resulting in a large dataset that includes audio recordings at multiple sites, cockpit data and flight track data. Colleagues of mine are now using this dataset to develop a next-generation emission model for noise prediction, and I'm using it to develop an emission model for auralizations.

Analysis of an event

Let's now have a look at one specific event and how I analyse such an event. I've included the Python code I use for the analysis, so you get a better idea of how I'm working. To give you an idea of the scale: there is over 500 GB of audio recordings and several hundred MB of other data. The audio is stored in a single HDF5 file accessed with h5py, and all other data in an SQLite database. All data is handled using the amazing Blaze and pandas modules.
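
As a rough idea of what accessing such a setup can look like (the file names, the table name and the dataset layout below are hypothetical, just to illustrate the approach):

import h5py
import sqlite3
import pandas as pd

with h5py.File('recordings.hdf5', 'r') as store:
    # Only the requested slice is read from disk, which makes it
    # feasible to work with over 500 GB of audio in a single file.
    samples = store['10_004_A320/A'][:]

with sqlite3.connect('events.sqlite') as connection:
    events = pd.read_sql_query('SELECT * FROM events', connection)

In the actual analysis, a small wrapper object takes care of this, as shown below.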

In [1]:
from sonair.processing import *
%matplotlib inline

We begin by loading the data belonging to a combination of event (i.e. aircraft passage) and receiver, between a certain start and stop time. Here, start and stop are seconds relative to an event reference time. This reference time is the time at which the aircraft is closest to any of the receivers, which turned out to be quite convenient to use.

In [2]:
event = '10_004_A320'  # aircraft passage identifier
receiver = 'A'         # receiver site
start = -5.            # seconds relative to the event reference time
stop = +5.             # seconds relative to the event reference time

analysis = EventAnalysis(event, receiver, start, stop)

We now have a nice object that gives easy access to all the data. For example, we can request the atmospheric pressure (in mbar) during that event

In [3]:
analysis.event.pressure
Out[3]:
977.39999999999998

or the coordinates of the receiver (Swiss grid)

In [4]:
analysis.receiver.x, analysis.receiver.y, analysis.receiver.z
Out[4]:
(682692.67099999997, 257054.26800000001, 422.048)

Obviously, we can also listen to the recording

In [5]:
from IPython.display import Audio
Audio(data=analysis.recording_as_signal, rate=analysis.recording_as_signal.fs)
Out[5]: