Matched filtering animation

Get ready for the most boring animation, ever. EVER! (don’t say I didn’t warn you). I made this for a talk I gave for the MG&G lunch seminar a couple of weeks ago. I wanted to figure out a way to describe how a matched filter works, and found that I was doing lots of crazy hand gestures that weren’t helping me at all. Matlab animation to the rescue!

Let’s say you have your received time series, s(t), and a simulated version of the transmitted call, x(t). The cross correlator output is simply:

y(t) = \int_0^t x(\tau)s(T-t+\tau)d\tau

The function y(t) has peaks that correspond to onset times of the calls in the original time series.
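If the hand gestures (and the equation) don't do it for you, here's a minimal Python/Numpy sketch of the same idea. The sample rate, noise level, and call onset time are all made up for the example, and the replica is just a generic downsweep:

import numpy as np
from scipy.signal import chirp

fs = 100.0                                 # sample rate (Hz), made up
t = np.arange(0, 10, 1/fs)                 # 10 s received time series
tc = np.arange(0, 1, 1/fs)                 # 1 s replica

x = chirp(tc, f0=24, t1=1, f1=15)          # x(t): simulated version of the call
s = 0.5 * np.random.randn(len(t))          # s(t): noise...
s[int(2*fs):int(2*fs) + len(x)] += x       # ...with the call buried at t = 2 s

y = np.correlate(s, x, mode='valid')       # cross correlator output y(t)
print(np.argmax(np.abs(y)) / fs)           # peak location ~ call onset time (~2 s)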

In the top panel, the blue line is the received time series, and the red line is the simulated signal. Buried in the blue time series is the original signal, between seconds 2 and 3. It’s hard to see it by eye, but the matched filter plucks it right out! The black line in the second panel is the output of the cross correlation. The peak aligns exactly with the start time of the signal. Miracle!

Matched filtering video from Michelle Wray on Vimeo.

I know what you’re dying to ask me, and no, Pixar still has not been in touch.

Last day of inverse theory :-(

Today was our last inverse theory class.  We didn't have time to cover the conjugate gradient method, which was too bad; I was really hoping we would get to that stuff.  One of our lab group's weekly readings was on the double-differencing technique for earthquake location.  In it, they described using a conjugate gradient-type method called LSQR.  It's supposed to let you avoid inverting a giant matrix when solving inverse problems.  From my very basic understanding, it finds the answer iteratively, making successive guesses to home in on the minimum of some objective surface.  The good news is that Matlab has a canned lsqr function, so I could just try it out and see for myself.  A canned function can also be a bad thing, though, because then you can just treat it like a black box.
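Scipy has a canned version too (scipy.sparse.linalg.lsqr), and since the rest of my snippets are in Python, here's a toy example. The little system below is completely made up; it's only there to show the call:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# A made-up overdetermined system G m = d (4 observations, 2 model parameters)
G = csr_matrix(np.array([[1.0, 0.0],
                         [1.0, 1.0],
                         [1.0, 2.0],
                         [1.0, 3.0]]))
d = np.array([6.0, 0.0, 0.0, 1.0])

# lsqr works toward the least-squares solution iteratively, without ever
# forming or inverting G^T G, which is the whole appeal for big sparse problems
m = lsqr(G, d)[0]
print(m)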


Molly Moon’s and more inverse theory

It was a long day, lots of code writing.  But it was fun, because I was working on an interesting inverse problem.  Probably not that interesting to people who know lots about inverse theory, but it’s new to me, so still very cool.  Since I’m not great at writing efficient code, it was a long 6 hours before I could see any results.  But in the end I got a really pretty picture that might tell me something about using earthquake positioning techniques to locate a whale far outside a seismic network.  Well, about 15 km outside a network that covers roughly 5km x 10km of seafloor in 2km water depth.  So pretty far.

And yes.  I am a day behind in my little drawings.  At some point I will surely skip a day.  But I’m trying to keep up for as long as I can.

Oh… what’s that you say?  You want to see the results of yesterday’s work?  Alright!  It might not make sense, and there is always a chance it’s just outright incorrect.  But at the very least, those are some dramatic colors, no?  I had to scale the color values though – basically the colors are in a log scale.  So where it says 10, it actually means 10^{10}, an 8 means 10^{8}, and so on.  So yeah, those position errors way outside the network are huge.  HUGE.

Late-night acoustic propagation modeling

The ASA conference session yesterday on acoustic propagation modeling really sparked my interest.  And since I can’t sleep right now, I decided to download Bellhop, and play around with it.  It’s been a long time since I’ve used Bellhop, so I am sort of re-learning the input file format.

I'm working with the Matlab wrapper for Bellhop.  It's not as fast as running the Fortran code directly, but it's easier to configure.  Just for fun, I grabbed the sample Munk sound speed profile and chopped it off at 2000 meters depth.  I set up the .env file to compute the transmission loss over ranges up to 20 km away.
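In case anyone wants to play along without the toolbox files handy, the Munk profile is easy to generate yourself. This is just the standard textbook formula, chopped off at 2000 m the same way:

import numpy as np

def munk_ssp(z):
    # Canonical Munk sound speed profile: c(z) in m/s for depth z in m
    eps = 0.00737
    zt = 2.0 * (z - 1300.0) / 1300.0
    return 1500.0 * (1.0 + eps * (zt - 1.0 + np.exp(-zt)))

z = np.arange(0.0, 2000.0 + 25.0, 25.0)   # depth grid, chopped off at 2000 m
c = munk_ssp(z)                           # ~1548 m/s at the surface, 1500 m/s at 1300 m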

Here are those results, for 20 Hz.

I picked this range, depth, and frequency to get an idea of the losses that might happen to a fin whale call near our ocean bottom seismometer network.  Of course, it's completely inaccurate – I don't have the right sound speed profile or bathymetry.  Also, I think this model approximates the transmission loss for an infinite CW signal, which is not a very realistic approximation of a fin whale call.

Next up:  a better sound speed profile and maybe some bathymetry information.

Matched filters – Review

At the risk of revealing (again) my simple-mindedness, I thought I’d summarize some basics that I have had to review in the last couple of days.

As part of my research, I’m looking at seismic data collected by a network of ocean bottom seismometers (OBSs).  I’m not looking for earthquakes, though – I’m looking for whales.  Fin whales, and also some blue whales, since their calls are at lower frequencies that are within the seismometers’ bandwidth.  Fin whales are the focus for now – we see a lot more of them than the blue whales.

Dax Soule has spent a couple of years working with our advisor to develop code to detect whales, count them, look at the call statistics, and also to track them.   Dax will be moving on to other work (seismic tomography! so cool!), and I’ve been going through his code to understand what he did.  He’s got some really slick algorithms that I’m trying to incorporate into the next generation of the code that will allow us to look at similar data from other sites.

One of the new things that I added was a really basic matched filter/cross correlator.  And in order to do that, I had to remember what they were and how they worked.  So I made up a couple of signals: a chirp and a continuous wave (CW) pulse.  They were both the same length (1 second).  The chirp swept from 24 Hz down to 15 Hz, which is similar to a fin whale call.  The CW pulse was just at 20 Hz.
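In code, the test signals looked something like this (the sample rate, noise level, and where I buried the signals are arbitrary):

import numpy as np
from scipy.signal import chirp

fs = 100.0                                  # sample rate (Hz), arbitrary
t = np.arange(0, 1, 1/fs)                   # both signals are 1 second long

sweep = chirp(t, f0=24, t1=1, f1=15)        # chirp sweeping 24 -> 15 Hz
cw = np.sin(2 * np.pi * 20 * t)             # CW pulse at 20 Hz

# bury each one in the same level of random noise, then cross correlate
noise = np.random.randn(10 * len(t))
rec_sweep = noise.copy(); rec_sweep[300:300 + len(t)] += sweep
rec_cw = noise.copy();    rec_cw[300:300 + len(t)] += cw

xc_sweep = np.correlate(rec_sweep, sweep, mode='valid')
xc_cw = np.correlate(rec_cw, cw, mode='valid')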

Here they are:  they’re the same length, and the amplitude of the random noise is the same. Just glancing at them, the signals look really similar.  But the cross correlation results are really different!

The chirp signal has a much narrower peak in the cross correlation result.  This is because, even though they are the same length, and are centered on the same frequency, the chirp has a larger bandwidth, and our ability to detect a signal improves with increasing bandwidth.

There’s something interesting happening here, though:  if you squint a little, and look at the general shape of the cross correlator output, it looks like a triangle in the first figure, and like a sinc function in the second.  But there’s a higher frequency signal living “inside” these bigger shapes.  When we’re trying to pick a peak, that higher frequency stuff really just gets in the way. So how do we deal with this?  One thing we can do to improve our picking ability is to baseband the signal using quadrature demodulation.  That means that instead of looking at a 20 Hz signal, we bump it down so that it’s centered at 0 Hz.  The basebanded signal just looks like an envelope over the original signal.
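Here's roughly what the basebanding step looks like in code. The filter order and cutoff are just values that seemed reasonable to me, not anything official:

import numpy as np
from scipy.signal import butter, filtfilt

def baseband(sig, fc, fs, cutoff=10.0):
    # Quadrature demodulation: mix the band at fc down to 0 Hz,
    # low-pass filter, and return the complex baseband signal
    t = np.arange(len(sig)) / fs
    mixed = sig * np.exp(-2j * np.pi * fc * t)
    b, a = butter(4, cutoff / (fs / 2.0))
    return 2.0 * (filtfilt(b, a, mixed.real) + 1j * filtfilt(b, a, mixed.imag))

# e.g. np.abs(baseband(rec_sweep, 20.0, fs)) gives the envelope of the chirp
# from the snippet above, now centered at 0 Hz instead of 20 Hz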

When you do this, the cross-correlator output looks much better, and it’s far easier to pick a peak.  Here’s the chirp cross correlation before and after basebanding:

Anyone who knows what this is all about will realize that I’ve done a huge amount of glossing over the details. But the good news is it seems to work.

Least Squares practice

I never did manage to resuscitate my old blog, but Google does cache some webpages, so I have access to my old posts that way for the time being.  I'm not too worried about most of them, but a few were pretty handy.  And since I'm interested in learning about inverse theory, this one seems appropriate.  So before it disappears forever, I will reproduce it here…

This post is actually a practice in least squares line fitting, linear algebra, Python, and Numpy, all mashed into one. In preparation for the possibility of learning geophysical inverse theory at some point in the future, I have backed up, waaay back to the beginning. Fitting a line to a set of points when there are more points than required to uniquely define the line.

I’m doing this example in 2D, and I’m basically following Gilbert Strang’s Linear Algebra, p 206-211 (section on least squares approximations).

I started with 3 points:
(0,6)
(1,0)
(2,0)

I saved these in a file called “points.txt” (just for flexibility)

I started my Python script (lspractice.py) in the normal way, importing what I thought I’d need:

#!/usr/bin/python
# Fitting a line to a set of points
# Following an example from Linear Algebra by Gilbert Strang, p 206-211
# The points are in a file called points.txt
from numpy import *
from pylab import *
import numpy as np  # the later snippets use the np. prefix, so bring it in explicitly

(Don't forget to make the script executable with chmod +x lspractice.py.)

Next, I opened the text file and filled in my x and y variables:

# Open text file (comma delimited x/y coords)
f = open('points.txt',"r")
lines = f.readlines()

xi = []
yi = []

for line in lines:
    fields = line.split(',')
    xi.append(float(fields[0]))
    yi.append(float(fields[1]))

Next – setting up the model:


# Filling these points into the equation of a line (the functional model)
# y = C + Mt
# F1: C + M(0) = 6
# F2: C + M(1) = 0
# F3: C + M(2) = 0
# unknowns are C and M.  Take partial derivatives of each of the functions wrt C and M.  These results will populate the design matrix.

dFdM = np.array(xi)
dFdb = tile(1,shape(dFdM))

A = np.array([dFdb,dFdM])
At = np.transpose(A)

b = np.array(yi)

And finally, working out a solution for C and M:


# The solution to this type of problem is At*A*xhat = At*b
# OR xhat = inverse(At*A)*(At*b)

xhat = np.dot(linalg.inv(np.dot(A, At)), np.dot(b, At))

And a little sanity check:

# Now a little graphical check
xtest = arange(min(xi)-3,max(xi)+5,1)
ytest = xhat[0] + xhat[1]*xtest

plot(xi,yi,'o')
plot(xtest,ytest,'.-r')
grid()
title('Fitting a line to 3 points using least squares')
show()
Least squares

So this is what it looks like in the end. The original points are shown as blue dots. The line described by the parameters that minimize the sum of the squared residuals is in red. Yeah, it’s embarrassingly simple, but at least it’s a baseline to go on.

Also worth noting – I have not fully figured out Python/Numpy’s matrix multiplication yet. In order to get it to work, I had to use numpy.dot(B,A) when I wanted to do A*B (I had to reverse the order of the arrays). According to the documentation this should not be the case. So I’m stumped. Maybe I’m just not getting the array dimensions right to begin with. That’s what I get for being a Matlab user I guess 🙂
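My best guess at what's going on: the A in my script is a 2x3 array (one row per parameter), so it's really the transpose of the textbook design matrix, and b is 1-D, which numpy.dot happily treats as a row or column vector as needed. With those shapes, the "reversed" calls come out right. If I build the design matrix the conventional way, one row per observation, the textbook order works as advertised:

import numpy as np

# Same three points, design matrix built with one row per observation
xi = np.array([0.0, 1.0, 2.0])
yi = np.array([6.0, 0.0, 0.0])
A = np.column_stack((np.ones_like(xi), xi))        # columns are [1, x]

# Normal equations in the usual order: xhat = inv(A^T A) (A^T b)
xhat = np.dot(np.linalg.inv(np.dot(A.T, A)), np.dot(A.T, yi))
print(xhat)                                        # [ 5. -3.]  ->  y = 5 - 3x

# or skip the normal equations entirely:
print(np.linalg.lstsq(A, yi, rcond=None)[0])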

FFT Review

I haven’t done signal processing of any sort for a while now (that’s not to say I ever did very much of it) – but I occasionally find myself needing to do some filtering or frequency spectrum analysis. And as usual, I always need to look up how I did it before. I should really write myself a little cheat sheet. But since I don’t have time for that now, here’s a quick link: FFT Tutorial. It’s from someone in the EE department at the University of Rhode Island. I had a quick look and I like it because it provides some theory, and also a Matlab example (and it’s pretty clearly written using LaTeX – yeah!). And while I’m on the topic of signal processing, here’s a link to a tutorial by Richard Lyons: “Quadrature Signals, Complex but not Complicated“. I like this one because it has a movie trivia question on page 3. And I totally knew the answer without looking.
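Until I actually write that cheat sheet, here's the bare-bones version I always end up rewriting (just numpy's rfft plus the matching frequency axis; the sample rate and signal are placeholders):

import numpy as np

fs = 100.0                            # sample rate (Hz), placeholder
x = np.random.randn(1024)             # stand-in for a real time series
X = np.fft.rfft(x)                    # one-sided FFT of a real signal
f = np.fft.rfftfreq(len(x), d=1/fs)   # matching frequency axis in Hz
amp = np.abs(X) / len(x)              # amplitude spectrum (up to a scaling convention)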

Interpolation using Scipy/Numpy

As part of a little script I'm writing, I need to do some simple linear interpolation. The Matlab equivalent of what I'm doing is called 'interp1'. So far I've come across two ways to do 1-d, linear interpolation: 'interp' in numpy, and 'interp1d' in scipy.interpolate. I'm not completely sure of the difference. Is one faster? Is one older? I've googled around a bit, but still haven't found a clear answer. For now, I've gone with 'interp1d', even though it's the less Matlab-like of the two: you first build an interpolator object from your x and y vectors, then call that object with your new x-values to generate the new y-values.

And here is an example, chopped out of my code:

from numpy import r_
from scipy.interpolate import interp1d

depthvector1 = r_[0:nadirdepth:DepthIncrement]   # nadirdepth and DepthIncrement come from earlier in the script
interpfun1 = interp1d(depth1, ssp1)              # build the interpolator from my x and y vectors
sspvector1 = interpfun1(depthvector1)            # find the new y's for this x
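For comparison, here's what the numpy 'interp' version of the same thing would look like: no interpolator object, and the new x-values go first (the depths and sound speeds below are made up just to have something runnable):

import numpy as np

depth1 = np.array([0.0, 500.0, 1000.0, 2000.0])     # made-up depths (m)
ssp1 = np.array([1500.0, 1488.0, 1482.0, 1492.0])   # made-up sound speeds (m/s)

depthvector1 = np.arange(0.0, 2000.0, 10.0)
sspvector1 = np.interp(depthvector1, depth1, ssp1)  # new x first, then x, then y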

“Repmat” function in Numpy/Scipy?

I’m such a Matlab user. I just want to repeat an array or matrix X times, which I can do easily in Matlab using ‘repmat’. I’m sure this is super easy, and I’m just not seeing something obvious. Any tips? Is it something like concatenating copies of an array with itself? vstack?

Hey – I just found what I was looking for: I can do what I need to do here using the ’tile’ function in numpy. It works like this:

from numpy import *

x = array([1,2]) # create a simple array
bigx = tile(x,(2,1))

And the output is:
array([[1, 2],
       [1, 2]])

Awesome! Now I can get rid of those pesky nested loops!

Also something interesting that I just discovered while messing with this in iPython: even if ‘x’ in this example is not created as an array (say it’s a list or a tuple) – it doesn’t matter – numpy still understands it, and then outputs an array object! Amazing. 🙂