If you play back this audio file, you’ll be listening to a couple of fin whales and an earthquake, and according to my dog, that’s some exciting stuff. No, seriously, I played it on my laptop the other day, and Trooper got all agitated, and started growling and barking in what I can only assume was confusion. (“where’s the fin whale?? it’s got to be around here somewhere!”)
The colorful figure at the top of the post is called a spectrogram. Time marches across to the right. Frequency increases upward. And the colors basically indicate loudness – brighter colors are louder. This particular chunk of data was recorded in the dead of winter just off the coast of British Columbia, Canada, under more than a kilometer of water. Even though the instrument is designed to measure earthquakes, it also picks up the very low, booming calls of fin and blue whales. The spectrogram shows two slightly different calls alternating – one slightly higher pitched and one slightly lower. We believe this is probably two fin whales passing near the seismometer.
You might notice that the audio clip is about 30 seconds long, but the spectrogram shows five minutes of data – that’s because the calls are down around 20 Hz, which is at the very low end of the human hearing range. (If you have tip-top hearing, you are probably sensitive to sounds between 20 Hz and 20 kHz.) I sped up the audio by a factor of 10 so that we can actually hear it – bloop… bloop… bloop…
At about 10:03am in the spectrogram, and 20 seconds into the audio recording, you can see/hear an earthquake in the background. My seismologist colleagues tell me that this isn’t the distinctive crack of a primary or secondary phase arrival from an earthquake, but possibly the rumbling caused by a tertiary, or “T phase”, arrival.
So why did Trooper have a meltdown? I guess you would too, if you thought you were suddenly surrounded by a couple of super high-pitched fin whales and an earthquake.
When I was an undergrad, I had to take a year of geophysics, and on the first day, the professor explained that geophysics meant finding things underground without digging. The rest of my lab group here at the UW are all marine geophysicists, and they routinely have to find things underground without digging, but with the added complication of having to get through a couple of kilometers of water, too.
You may remember my interview with Rob Weekly a few weeks back. This week’s interview is with my office-mate, Dax Soule. Dax studies marine geology and geophysics in the School of Oceanography at the University of Washington. Like Rob, he focuses on active source seismology. In fact, he’s looking at some of the same data as Rob, just different parts. And since we already went over a lot of the basic background in Rob’s post, I’m going to jump right into some fun tomography stuff. In a nutshell, tomography means building up a 3D picture of what’s underground by measuring how long it takes for seismic waves to travel through. In tomography, you’re not sampling the material directly, but you’re measuring how the seismic velocity varies. Seismic velocity is a measure of how quickly a seismic wave can travel through a certain type of rock, so if you know the velocity structure you can make inferences about the geological structure.
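To make the travel-time idea concrete, here’s a minimal toy inversion sketch in Python (the code in this lab is Matlab, and every number here – layer geometry, ray path lengths, velocities – is invented for illustration). Travel times are linear in slowness (1/velocity), so if you know the ray paths, you can recover the velocity structure by least squares:

```python
import numpy as np

# Toy travel-time tomography: two horizontal layers, three rays.
# Travel time t = sum over layers of (path length in layer) * slowness.
# Path lengths (km) of each ray in each layer form the matrix L;
# we recover the slowness vector s from t = L @ s by least squares.
L_km = np.array([[10.0, 0.0],   # ray 1 stays in the top layer
                 [ 6.0, 4.0],   # ray 2 dips into the bottom layer
                 [ 3.0, 9.0]])  # ray 3 spends most of its path deep

true_slowness = np.array([1/3.0, 1/5.0])  # s/km -> velocities 3 and 5 km/s
t_obs = L_km @ true_slowness              # "observed" travel times (s)

s_est, *_ = np.linalg.lstsq(L_km, t_obs, rcond=None)
v_est = 1.0 / s_est
print(v_est)  # recovers [3. 5.] km/s
```

A real marine tomography problem has huge numbers of rays, ray paths that bend depending on the velocities themselves, and regularization – but at its heart it is this same linear system.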
Imaging the earth’s crust
When you’re doing seismic tomography, it’s a bit like peeling back the layers of an onion: you have to figure out the shallow stuff before the deep. Rob’s work focused on obtaining a three-dimensional seismic velocity structure from the seafloor down to about 2.5–3 km depth. Dax is taking Rob’s results and extending them downward to include the entire depth of the crust. He’s looking between the seafloor and the Mohorovičić discontinuity, or the “Moho”, which is the boundary between the earth’s crust and the mantle.
In seafloor maps of the region, you can see that there’s a bathymetric high – a plateau where the water depth is shallower than the surrounding seafloor. A seismic reflection study a few years back (Carbotte et al., 2008) showed that this plateau likely corresponds to a thickening of the oceanic crust at this location. Dax’s tomography work will help to clarify exactly what’s going on, and will give a more detailed picture of how the crustal thickness varies in the area.
Dax explained that measuring crustal thickness variations near a mid-ocean spreading center can tell you about how that crust was produced – was it episodic or constant? And where was the source of new crust – was it on the axis of the ridge, or off-axis?
How the experiment works
Here’s a little cartoon showing the basic geometry of the seismic experiment.
A seismic source is generated near the sea surface, and the energy travels through the water column and into the sea floor. Once in the earth’s crust, the energy is converted into two types of waves: primary waves (p-waves) and secondary waves (s-waves). There are many different types of paths that these waves can take as they travel through the earth, and each type of path is known as a “phase”. In the above figure, the two phases that are shown are Pg and PmP. Rob used the Pg phase arrivals to image the upper portion of the crust. Dax looks at the PmP phases, which are the ones that bounce off the Moho.
Dax goes through the data and manually picks out the PmP phase arrival times. This part of the job is not the most exciting, but someone’s got to do it! Fortunately, all that work does pay off, and once Dax has his picks, he can start digging into the tomography part of his work.
Five easy steps
Here’s a handy-dandy summary of the steps that Dax goes through to build up a tomographic inversion.
So that’s what my office-mate does… good to know! Dax is still working through the data and the very complex inversion code, so stay tuned for a future post on what he finds deep in the crust, and what the crustal thickness can tell us about seafloor production at the Endeavour Ridge.
 Carbotte, Suzanne M., Mladen R. Nedimović, Juan Pablo Canales, Graham M. Kent, Alistair J. Harding, and Milena Marjanović. “Variable crustal structure along the Juan de Fuca Ridge: Influence of on‐axis hot spots and absolute plate motions.” Geochemistry, Geophysics, Geosystems 9, no. 8 (2008).
Now that I’ve finally descended from my little soapbox (see this post), I guess it’s time to take my own advice. I was inspired to go for it yesterday when I came across this blog post by Jessica Carilli: Why Geochemistry is Awesome.
Just last week(ish) my very first paper was published. (Huzzah!) If you feel so inclined, you can check it out for yourself here. However, even I’ll admit that it’s a bit technical for someone who really just wants to get a big picture overview. Here goes nothing!
“Source levels of fin whale 20 Hz pulses measured in the Northeast Pacific Ocean”
If you follow my blog, you probably already know that I’m pretty into fin whales. They’re amazing and huge and, unfortunately, endangered. And if we want to monitor their recovery and to avoid further risks from things like ship noise, ship strikes, and fishing activity, we need to have an idea of where they are and how they move around.
Problem is, despite being the second largest animal in the entire world, they are tough to keep track of. They spend a lot of time far from the coast and a lot of time under water. There are a few different techniques people use to study fin whales (and other whales, for that matter). These include visual surveys (from a boat or plane or land), radio and satellite tagging, and (of course) acoustics. Each of these methods has its pros and cons. Passive acoustic monitoring is good because you can deploy instruments that can monitor for long time periods (months and even years), and you don’t need to worry about conditions at a certain time of year, or whether the animals are visible at the surface.
Let’s imagine for a moment that you have a hydrophone (which is an underwater microphone, essentially) and you want to listen to some fin whales.
There you are, hydrophone off the side of the boat, and you get lucky – you record a fin whale call! Woo hoo! Just hearing it is great. But something else that is extremely useful is to be able to say how far away that whale was when it made the call (R in the picture). That’s where some basic acoustics comes in. If you measure how loud the call is at the hydrophone, and you know how loud the call was when it was generated, and you know something about the acoustic properties of the water, you can figure out how far away it was. You can think of it like this: if you’re in a big room, and you shut your eyes, and someone else in the room starts talking, you can tell roughly how far away they are just by how loud their voice sounds in your ears.
Here’s the math, dead simple, I promise:

ML = SL − TL  (EQ 1)

In that equation, ML is “measured level”, SL is “source level”, and TL is “transmission loss”. If you re-arrange the equation, solving for TL, you get:

TL = SL − ML  (EQ 2)
What’s the deal with transmission loss? Well, it accounts for the acoustic energy that is lost between the source (the whale) and the receiver (the hydrophone). And it’s dependent on the range – the further apart the source and receiver, the greater the transmission loss. Here’s the equation for transmission loss:
TL = 20 log10(R)  (EQ 3)

(R is the range between source and receiver; this is the standard spherical spreading law.)
What all this boils down to is that if you know the source level (SL) and measure the receive level (ML), you can calculate transmission loss (TL), which you can then use to calculate range. Awesome! Except for one thing… source levels of fin whale calls are not really well known. There have been very few published papers reporting fin whale source levels (e.g., Watkins et al., 1987; Charif et al., 2002; Širović et al., 2007). These results are useful, but because of the difficulties inherent in estimating source levels, relatively few calls were used in the final estimates (at most, 83).
A BIG FAT DATASET
This is where a BIG FAT DATASET really comes in handy. And that’s just what we have: a three-year ocean bottom seismometer (OBS) deployment off the coast of Vancouver Island. That’s 8 seismometers, collecting data for three years, at 128 samples every second. And we see TONS of fin whale calls, especially during the winter months.
Fin whales have a very distinctive call. It’s very low frequency, very loud, and only lasts for about a second. And typically, a fin whale will make this call about one time every 25 seconds (although it does vary by location). A couple of things make this call particularly handy for me. First of all, they are so low frequency that they get picked up on OBSs. Second, they are very similar from one call to the next, making an automatic detection algorithm relatively straightforward. That means I can write code to tell my computer what the call looks like, and it will run through the data and find instances where that call shows up. With hundreds of thousands of calls on 8 instruments, searching for the calls manually would take a hundred graduate students like a hundred years. Okay, that’s an exaggeration, but it would take a LONG time.
How do we use this data to get at source levels? The thing that makes it possible is that, in the first year of this three year dataset, we actually know where the whale is at the time of the call. This is because my office mate, Dax, did his masters work on tracking whales near the OBS network. Since we know where each call was generated, and we know where the seismometers are, we can calculate the range between source and receiver.
If you look back up to Equation 3, you can see that if we know the range, R, we can calculate the transmission loss. And with TL and ML, we can estimate source level! See, that wasn’t so bad…
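To see the arithmetic in action, here’s a toy calculation in Python (the numbers are invented, and the simple spherical-spreading form of TL is an assumption here – the paper’s actual transmission loss model may differ):

```python
import math

# Hypothetical numbers, not from the paper: a call measured at
# ML = 130 dB re 1 uPa, with the whale located R = 1000 m away.
ML = 130.0                  # measured level, dB re 1 uPa
R = 1000.0                  # range in metres (known from the whale track)
TL = 20.0 * math.log10(R)   # spherical spreading: 60 dB of loss
SL = ML + TL                # rearranged EQ 1: source level
print(SL)                   # 190.0 dB re 1 uPa @ 1 m
```

Same equations, run in the other direction: once you have source levels pinned down, a measured level plus an assumed SL gives you TL, and hence range.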
A Slight Complication
If you know a bit about acoustics or seismology, you might have seen this coming: OBSs don’t measure acoustic pressure level (ML) directly. They actually measure ground velocity. The amplitude of the ground velocity is definitely related to ML, but it depends on what the seafloor is made of at that location, and on the angle at which the incoming sound hits the seafloor. I would say that this is the most technical and complicated part of my paper, and since it’s not critical to understanding the results (and also because it would take a long time), I’ll leave it for a separate blog post at some future date. (If you’re especially curious, I encourage you to check it out in the actual paper.)
The moment you’ve all been waiting for…
Or, um, you know. Maybe not… Anyway – the results! A total of 1241 calls on 32 whale tracks were used to estimate source levels. The mean source level was estimated to be 189.9 ± 5.8 dB re 1 µPa @ 1 m (see below for an explanation of this notation). This is within the range of previous estimates, although slightly on the loud side. The most recent previously published estimates came from fin whale calls measured in the Southern Ocean (Širović et al., 2007). They found a mean source level of 189 ± 4 dB, based on a total of 83 calls.
As part of the analysis, we looked at the variation of source levels over the duration of a dive and also between tracks. We were surprised that we didn’t see any obvious trend over a dive – we expected that maybe as the whales ran out of breath, their calls would get progressively quieter, but we didn’t see any evidence of that.
Where the slop comes from
Part of reporting scientific results is keeping track of where the uncertainties come from. The biggest contributors in this analysis were: 1) uncertainties in the location of the whales at the time of the call, and 2) interference between the direct path acoustic arrival and the “echo” that bounced off the sea surface. Other potential sources of error include the estimated seafloor properties used to convert ground velocity to acoustic pressure level, the sound speed profile, and differences in the coupling between the seismic instruments and the seafloor.
The end… for now
And there you have it – that’s the gist of my paper. I would love to dig into more of the analysis of the amplitude variations along tracks, and between individuals. Maybe an even bigger “big fat dataset” would allow me to tease out additional clues…
** In my explanation above, I reported results as 189 dB re 1 µPa @ 1 m. If you don’t study acoustics, that will probably look pretty mysterious. Here’s what it means: dB is decibels, which is a measure of loudness. Decibels are measured as a logarithmic ratio of pressures:

dB = 20 log10(Pmeas / Pref)

Pmeas is the pressure you’re measuring, and Pref is a reference pressure. The reference pressure in water is 1 µPa (micro Pascal), and the “@ 1 m” means the level is referenced to a distance of 1 meter from the sound source.
 Širović, Ana, John A. Hildebrand, and Sean M. Wiggins. “Blue and fin whale call source levels and propagation range in the Southern Ocean.” The Journal of the Acoustical Society of America 122 (2007): 1208.
 Charif, Russell A., et al. “Estimated source levels of fin whale (Balaenoptera physalus) vocalizations: Adjustments for surface interference.” Marine Mammal Science 18.1 (2002): 81-98.
 Watkins, William A., et al. “The 20‐Hz signals of finback whales (Balaenoptera physalus).” The Journal of the Acoustical Society of America 82 (1987): 1901.
Get ready for the most boring animation, ever. EVER! (don’t say I didn’t warn you). I made this for a talk I gave for the MG&G lunch seminar a couple of weeks ago. I wanted to figure out a way to describe how a matched filter works, and found that I was doing lots of crazy hand gestures that weren’t helping me at all. Matlab animation to the rescue!
Let’s say you have your received time series, x(t), and a simulated version of the transmitted call, s(t). The cross correlator output is simply:

y(t) = ∫ x(τ) s(τ − t) dτ

The function y(t) has peaks that correspond to onset times of the calls in the original time series.
In the top panel, the blue line is the received time series, and the red line is the simulated signal. Buried in the blue time series is the original signal, between seconds 2 and 3. It’s hard to see it by eye, but the matched filter plucks it right out! The black line in the second panel is the output of the cross correlation. The peak aligns exactly with the start time of the signal. Miracle!
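The animation itself isn’t reproduced here, but the same demo can be sketched in a few lines of Python instead of Matlab (the signal parameters are invented, except the 24-to-15 Hz chirp band, which follows the fin-whale-like example used elsewhere on this blog):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 256                                  # sample rate (Hz)
t = np.arange(0, 1, 1/fs)                 # 1 s replica, like a fin whale call
# Linear downsweep, 24 Hz -> 15 Hz: phase = 2*pi*(24*t - 4.5*t^2)
replica = np.sin(2*np.pi*(24*t - 4.5*t**2))

# 5 s of noise with the call buried in it, starting at t = 2 s
x = rng.normal(0.0, 1.0, 5*fs)
onset = 2*fs
x[onset:onset + len(replica)] += replica

# Matched filter = cross-correlation against the replica
y = np.correlate(x, replica, mode="valid")
picked = np.argmax(np.abs(y)) / fs
print(picked)  # close to 2.0 s
```

The argmax of the correlator output lands at (or within a sample or two of) the true onset, just like the peak lining up with the signal start in the second panel.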
I’m finally getting back into the swing of things at work – lots of coding, so I’m happy. I must admit that I didn’t actually get through the wave equation derivation – I got distracted… to be continued…
Hey, here’s my drawing from yesterday. It was an inverse theory day. I think that the conference got me all fired up about inverse theory for some reason. It might not be the best idea ever to change my term project two weeks before the end of the term… but my other project is sort of boring.
I’m in the process of writing code to do a couple of things. The first is to locate a whale (or any source) in the water column using earthquake location techniques. I’m assuming that I correctly pick the direct path arrival, so that part should be easy. I’m running it several times on an array of grid points, so for each point I get a cluster of detections, and then I grab the eigenvalues and eigenvectors to get the semi-major and semi-minor axes (with orientation) of the error ellipse. As you can imagine, this takes a long time. It’s an iterative least squares problem, being solved something like 6500 times, with 50 iterations each time. And 50 is sort of low. Hooray for the brute force method! Here’s a little peek at just one of those iterations. Because the whale is not inside the network, the position estimate is not great. It’s really difficult to resolve the range, in particular, although the bearing seems better constrained.
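For a flavor of what the locator is doing, here’s a stripped-down 2-D toy in Python (constant sound speed, invented receiver geometry, noise-free direct-path arrivals, and pure grid search rather than the iterative least squares described above):

```python
import numpy as np

# Toy 2-D localization: four receivers, constant sound speed,
# direct-path arrival times only. All geometry is invented.
c = 1500.0                                        # sound speed (m/s)
rx = np.array([[0.0, 0.0], [4000.0, 0.0],
               [0.0, 4000.0], [4000.0, 4000.0]])  # receiver positions (m)

true_pos = np.array([2500.0, 1200.0])             # "whale" position (m)
t_obs = np.linalg.norm(rx - true_pos, axis=1) / c # noise-free arrival times

# Brute-force grid search over candidate positions: find the point
# whose predicted arrival times best match the observed ones
best, best_misfit = None, np.inf
for x in np.arange(0.0, 4001.0, 25.0):
    for y in np.arange(0.0, 4001.0, 25.0):
        t_pred = np.linalg.norm(rx - np.array([x, y]), axis=1) / c
        misfit = np.sum((t_pred - t_obs) ** 2)
        if misfit < best_misfit:
            best, best_misfit = (x, y), misfit

print(best)  # (2500.0, 1200.0): the on-grid source is recovered exactly
```

A real version works in 3-D with noisy picks and iterative least squares, and the shape of this misfit surface around the minimum is exactly where the error ellipses come from.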
The second bit of code I’m writing up is not actually finished yet. Or started. All I have is the math, which tends to be the tough part anyway. The whale call arrives at our seismic network via several paths. There is sometimes a direct path arrival, but often multipaths, which have interacted some number of times between the surface and the bottom. The multipath structure will change depending on where the whale is, and theoretically, it is possible to back out at least a range and depth using the multipath arrival times. Again, this is a problem that has been solved before. But it’s fun to figure it out for myself.
Some other things I’ve been thinking of trying:
Combine several range solutions from the multipath arrivals to locate the whale. This shouldn’t be very hard. It’s just like positioning a pinger on the bottom of Portsmouth Harbor!
Implement some kind of adaptive tracking algorithm… I feel some Kalman filtering coming on…
Part of research is, of course, figuring out what other people in my field have done, and are doing. Today I was looking up how other people have been looking for fin and blue whales acoustically. It turns out that the Bioacoustics Research Program within the Cornell Lab of Ornithology is a good place to start. Don’t be fooled by their name – they’re not just looking at birds. The research overview portion of their website describes research that they do on elephants and whales, too.
One of their areas of research is in looking at low frequency whale calls using the SOSUS network. SOSUS stands for SOund SUrveillance System, and was used during the Cold War to monitor submarines and surface vessels. SOSUS is part of the larger IUSS network (Integrated Undersea Surveillance System). The US Navy has been working with Bioacoustics researchers at Cornell University to look for whales since 1993.
Here’s an example from their website, showing different types of whale calls that they see. They’re displayed as spectrograms, where the x-axis is time, and the y-axis is frequency.
Because of the sample rate of the seismometers in the KECK network (128 samples per second), the data I’m working with are limited to frequencies below 64 Hz. The data shown above include minke and humpback whale calls, which are in the range of 100–300 Hz. Low compared to, say, a multibeam sonar, but much higher than what I can see in my data.
Several of the researchers at BRP went down to the Sea of Cortez to listen to fin whales, and to try to understand how human-made underwater sound affects whales. Using a variety of methods – not only acoustic – they observed that only the male whales were vocalizing. Here’s the one-sentence summary from their website:
The results of this research suggest that the long, low-frequency songs of male fin whales function to attract females to dense patches of food, where mating then occurs.
This is interesting. Why is it only the males that sing? And why do they attract females to patches of food to mate? I’ll have to read up on this…
At the risk of revealing (again) my simple-mindedness, I thought I’d summarize some basics that I have had to review in the last couple of days.
As part of my research, I’m looking at seismic data collected by a network of ocean bottom seismometers (OBSs). I’m not looking for earthquakes, though – I’m looking for whales. Fin whales, and also some blue whales, since their calls are at lower frequencies that are within the seismometers’ bandwidth. Fin whales are the focus for now – we see a lot more of them than the blue whales.
Dax Soule has spent a couple of years working with our advisor to develop code to detect whales, count them, look at the call statistics, and also to track them. Dax will be moving on to other work (seismic tomography! so cool!), and I’ve been going through his code to understand what he did. He’s got some really slick algorithms that I’m trying to incorporate into the next generation of the code that will allow us to look at similar data from other sites.
One of the new things that I added was a really basic matched filter/cross correlator. And in order to do that, I had to remember what they were and how they worked. So I made up a couple of signals: a chirp and a continuous wave (CW) pulse. They were both the same length (1 second). The chirp swept from 24 Hz down to 15 Hz, which is similar to a fin whale call. The CW pulse was just at 20 Hz.
Here they are: they’re the same length, and the amplitude of the random noise is the same. Just glancing at them, the signals look really similar. But the cross correlation results are really different!
The chirp signal has a much narrower peak in the cross correlation result. This is because, even though they are the same length, and are centered on the same frequency, the chirp has a larger bandwidth, and our ability to detect a signal improves with increasing bandwidth.
There’s something interesting happening here, though: if you squint a little, and look at the general shape of the cross correlator output, it looks like a triangle in the first figure, and like a sinc function in the second. But there’s a higher frequency signal living “inside” these bigger shapes. When we’re trying to pick a peak, that higher frequency stuff really just gets in the way. So how do we deal with this? One thing we can do to improve our picking ability is to baseband the signal using quadrature demodulation. That means that instead of looking at a 20 Hz signal, we bump it down so that it’s centered at 0 Hz. The basebanded signal just looks like an envelope over the original signal.
When you do this, the cross-correlator output looks much better, and it’s far easier to pick a peak. Here’s the chirp cross correlation before and after basebanding:
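For the curious, here’s roughly what that basebanding step looks like in Python (my code is Matlab; the mix-down-plus-low-pass below is a crude stand-in with invented parameters, not the actual implementation):

```python
import numpy as np

fs = 256
f0 = 20.0                                  # center frequency to mix down
t = np.arange(0, 4, 1/fs)
# A 1-second, 20 Hz CW pulse starting at t = 1 s
x = np.where((t >= 1) & (t < 2), np.sin(2*np.pi*f0*t), 0.0)

# Quadrature demodulation: multiply by a complex exponential at -f0
# (shifting the signal to 0 Hz), then low-pass to kill the 2*f0 term
mixed = x * np.exp(-2j*np.pi*f0*t)
kernel = np.ones(25) / 25                  # crude moving-average low-pass
baseband = np.convolve(mixed, kernel, mode="same")
envelope = 2*np.abs(baseband)              # factor 2 restores amplitude

# The envelope is ~1 inside the pulse and ~0 outside
print(envelope[int(1.5*fs)], envelope[int(3.0*fs)])
```

The magnitude of the basebanded signal is exactly the smooth envelope riding over the original wiggles, which is why picking a peak from it is so much easier.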
Anyone who knows what this is all about will realize that I’ve done a huge amount of glossing over the details. But the good news is it seems to work.
Once again, I’m doing a bit of signal processing and, as usual, I found myself needing to brush up on the basics. And one of my favorite little tutorials is called Quadrature Signals: Complex but not Complicated. It’s written by Richard Lyons, who’s also the author of what I’ve been told is a good book on signal processing. I like the writing style – it’s fun and easy to read. Well, okay, honestly it gets pretty complicated by the end. But I still like it.
Last night I did my first cross correlation since 2008. Woo hoo! And it worked! Sort of. It still needs some tweaks, and I also need to improve my input signal. But it feels good to be using the signal processing toolbox in Matlab again 🙂 Once I get it sorted out a bit more, I hope to post the relevant snippets of code up here.
I haven’t done signal processing of any sort for a while now (that’s not to say I ever did very much of it) – but I occasionally find myself needing to do some filtering or frequency spectrum analysis. And as usual, I always need to look up how I did it before. I should really write myself a little cheat sheet. But since I don’t have time for that now, here’s a quick link: FFT Tutorial. It’s from someone in the EE department at the University of Rhode Island. I had a quick look and I like it because it provides some theory, and also a Matlab example (and it’s pretty clearly written using LaTeX – yeah!). And while I’m on the topic of signal processing, here’s a link to a tutorial by Richard Lyons: “Quadrature Signals, Complex but not Complicated“. I like this one because it has a movie trivia question on page 3. And I totally knew the answer without looking.
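And since I keep wishing for a cheat sheet, here’s a minimal one – in Python (numpy) rather than Matlab, with made-up parameters; the amplitude scaling is the part I always have to look up:

```python
import numpy as np

# Amplitude spectrum of a 20 Hz tone sampled at 128 Hz
fs = 128                       # like the OBS sample rate mentioned above
t = np.arange(0, 4, 1/fs)      # 4 s of data -> 0.25 Hz frequency resolution
x = np.sin(2*np.pi*20*t)

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1/fs)
amp = 2*np.abs(X)/len(x)       # scale so a unit-amplitude sine reads ~1

peak = freqs[np.argmax(amp)]
print(peak)  # 20.0 Hz
```

Since 20 Hz sits exactly on a frequency bin here, the peak reads a clean 1.0 at 20 Hz; off-bin tones smear across neighboring bins, which is where windowing comes in.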