Simple Fourier Transform demo


(Go here to access the interactive visualization.)

This little project has kept me entertained over the holidays! I thought it would be useful to come up with a really simple interactive web visualization to illustrate what a Fourier Transform is. And of course, it’s also been a great way for me to get some javascript practice.

It’s a really simple example, and it could definitely be improved – with the ability to add more signals, for example, and a nicer design.

If you want to check out the code, it’s up on my GitHub page. I decided to try out Plotly for the graphs, and used Bootstrap for the layout.
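The idea the visualization illustrates can be sketched in a few lines of Python (my own minimal numpy reconstruction, not the javascript behind the demo – the sample rate and the 5 Hz and 40 Hz components are arbitrary illustrative choices): build a signal out of two sine waves, take the FFT, and the component frequencies pop out as peaks.

```python
import numpy as np

# Build a 1 s signal sampled at 1 kHz from two sine waves
# (5 Hz and 40 Hz are arbitrary illustrative choices)
fs = 1000
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# The FFT decomposes the signal into its frequency components;
# scale so the (non-DC) peaks read as the sine amplitudes
spectrum = np.abs(np.fft.rfft(signal)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The two largest peaks sit exactly at the two input frequencies
peaks = sorted(freqs[np.argsort(spectrum)[-2:]].tolist())
print(peaks)  # → [5.0, 40.0]
```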

Wave interference

I’m TAing a marine GIS/ocean mapping course this quarter, and will be teaching a lecture on some aspects of multibeam sonars and the data that you get from them. Which is fun since I haven’t really thought much about multibeam sonars since I worked at a sonar company about 6 years ago.

I was trying to figure out how to explain how a longer array lets you get a narrower beam. It’s all about interference patterns, right? So I wrote a little script in Python (you can access the script directly on my GitHub page).

Let’s say you have two elements spaced a half wavelength apart. You get something like this, with one main lobe:


Cool – there’s just one main lobe of higher amplitude. But then if you pull those elements just a bit further apart – I’m showing a 5 wavelength separation here – you can see a completely different pattern:


So: the wider separation produced more beams, and those beams were narrower. Interesting… What if we wanted just one main beam that we could steer around? (Ahem. Maybe a little like a multibeam sonar?) The next picture shows a single beam produced by a line of 20 elements, all spaced half a wavelength apart. This time we’re zooming out a bit – showing a 30 m x 30 m area. This simulation also shows what it would look like with a 200 kHz signal, which is a pretty common frequency for a shallow-water multibeam sonar.

Beam pattern example
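To make the geometry concrete, here’s a little numpy sketch of the same idea (my own reconstruction, not the script from my GitHub page): sum the phase-shifted contributions from each element and look at the magnitude versus angle. The element counts and spacings match the three cases above.

```python
import numpy as np

def array_factor(n_elems, spacing_wl, theta):
    """Normalized far-field array factor for a line of point sources.
    spacing_wl is the element spacing in wavelengths; theta is the
    angle from broadside, in radians."""
    n = np.arange(n_elems)[:, None]
    phase = 2 * np.pi * spacing_wl * n * np.sin(theta)
    return np.abs(np.exp(1j * phase).sum(axis=0)) / n_elems

theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)

two_close = array_factor(2, 0.5, theta)   # two elements, half a wavelength
two_far = array_factor(2, 5.0, theta)     # two elements, 5 wavelengths
line = array_factor(20, 0.5, theta)       # 20 elements, half a wavelength

def count_lobes(af, threshold=0.5):
    """Count local maxima above the threshold (i.e. main-lobe-sized peaks)."""
    interior = af[1:-1]
    peaks = (interior > af[:-2]) & (interior > af[2:]) & (interior > threshold)
    return int(peaks.sum())

print(count_lobes(two_close), count_lobes(two_far), count_lobes(line))  # → 1 9 1
```

Pulling the two elements apart multiplies the lobes; adding more half-wavelength-spaced elements narrows the single main lobe instead.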

Adventures in javascript


After about two frenzied weeks of muddling through javascript and D3 (and html and css) Helena and I managed to wrap up what I think is a pretty neat D3 visualization of oceanographic data for the class we’re taking.

Check it out here – just click on the map to see the temperature and salinity profiles. And if you’d like to see what’s going on under the hood (or re-create it from scratch), the whole thing is up on our GitHub page (along with a fairly detailed readme file describing our process).

Helena has done some neat D3 stuff before – check it out on her site here.


I’m back! Maybe?

Let’s see, it looks like my last post was in November 2014. What can I say? Sometimes life gets crazy for a bit. It’s still crazy now, but I’m excited about some things that are happening. For example: I’m nearing the end of my PhD! I’m tentatively planning to defend in the next few months. Also, I’m taking a really fun (and very intense) data visualization class. So far I’ve had a chance to play around with Tableau and Trifacta Wrangler. But the one with the biggest learning curve, and perhaps the biggest reward (i.e. awesome interactive web graphics), is D3. I’m hoping to post some stuff as I learn…

For today, I’m not making my own comic. But Helena sent me a link to this xkcd comic that captures my experience with git so far.


Learning HTML, CSS, and PHP

I can’t tell you how often I find myself wishing I could design my own web pages.  Or even tweak things like WordPress templates.  It’s a skill that I’ve wanted to work on for a long time now.

I actually started digging into this almost a year ago, when my sister and her husband visited, although things have been busy and I have abandoned it for some time now. Christian is really into web programming, and is also quite patient when it comes to sharing his knowledge. He suggested checking out a couple of books to get started:

1.  CSS: The Missing Manual

2.  PHP and MySQL for Dynamic Web Sites

3.  JavaScript and jQuery

I’ve started with the first two, just working through the examples.  I don’t think of myself as a total tech dummy, but I have to admit – I have very little knowledge of the inner workings of websites.  It’s actually a lot of fun to go through these books and see what’s going on in the background – so complicated, but so logical.

A couple of other things to mention:  Although it would be pretty awesome, I can’t actually afford the Adobe Design and Web premium (even though it’s on massive discount for students – I will have to get it before I graduate!)  In the meantime, I’m happy to explore what my options are in the land of open source.  I found what seems to be a great alternative to Dreamweaver – it’s called Aptana, and it’s totally free!  It also directly links up to Firefox’s Firebug debugger, which is handy.  Of course, never having used Dreamweaver, it’s tough for me to give a fair comparison.  But it’s working just fine for me now.

Latex and JASA – more details

I had some questions about the details of getting the JASATeX files in the right place on your computer. If you download and unzip the file, you will see instructions for either Unix/Linux or for a MiKTeX install. I think the instructions are quite clear, but when I did it the first time, I copied the folders into the right directories and it didn’t work. I finally realized I needed to update the TeX filename databases, which can be done by executing the following on the command line:

sudo texhash

And presto, it worked!
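For the record, on a Linux TeX Live install the whole sequence looked something like this (the directory and file names here are illustrative – the instructions in the zip give the exact destinations for your setup):

```shell
# Copy the unzipped class file into the local texmf tree
# (illustrative paths -- check the instructions for your distribution)
sudo mkdir -p /usr/local/texlive/texmf-local/tex/latex/jasatex
sudo cp jasatex.cls /usr/local/texlive/texmf-local/tex/latex/jasatex/

# Rebuild TeX's filename databases so the new files are found
sudo texhash
```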

Whale call – surface bounce interference

It feels like I’ve been doing nothing but estimating source levels lately (exciting times!). I started to outline it in the research section of this site (See Source Levels) – although that was ages ago and is now a bit outdated.

Measuring source levels of marine mammal vocalizations is complicated, and even more so when using ocean bottom seismometers. I’m trying to look at some of these complications and sources of error and uncertainty.

One of these is the interference between the direct path arrival and the surface bounce. Because they take different paths to reach the receiver, they arrive at slightly different times. These offsets result in interference patterns – sometimes constructive and sometimes destructive.

R_1 and R_2 are the direct path and the surface bounce. D_s and D_r are the source depth and receiver depth. H is the horizontal distance between the source and the receiver.

I wrote some code to model these effects, and I’ll just start out with a little video clip that shows what I mean by constructive and destructive interference. I’m plotting the RMS (root-mean-square) amplitude of the received signal. You can see that in addition to the interference pattern, the amplitude of the input signals is decreasing – this is to account for transmission losses along the travel paths, modeled simply using a spherical spreading assumption (scaled by range).

Next, I wanted to see how this effect might manifest itself in our particular setup.  I ran the code using an approximate receiver depth of 2200 meters, a source depth varying between 0 and 100 meters, and a horizontal range of 0 to 2200 meters.  I chose the source depths based on what I think are likely depths from which a fin whale might call.  The horizontal ranges are restricted such that the incidence angles will be small(ish) – a constraint imposed in part to reduce ambiguity with later multipath arrivals, and in part because of the physics of converting ground motion back to an acoustic pressure level (details I won’t go into here).  The surface bounce is given a 180° phase flip (and no loss of amplitude) since the surface is treated as a perfect pressure-release boundary.
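Here’s a minimal sketch of this kind of two-path model (my own simplified reconstruction, not the actual code – the 20 Hz call frequency and 1500 m/s sound speed are my assumptions): the surface bounce is modeled as coming from an image source above the surface, with a sign flip and 1/R spreading on each path.

```python
import numpy as np

def rms_received(H, Ds, Dr, freq=20.0, c=1500.0):
    """RMS amplitude at the receiver for a unit-amplitude CW source,
    summing the direct path and the surface bounce. Spherical spreading
    (1/R) on each path; the surface is a perfect pressure-release
    boundary, so the bounce gets a 180-degree phase flip and no loss."""
    R1 = np.sqrt(H**2 + (Dr - Ds)**2)   # direct path length
    R2 = np.sqrt(H**2 + (Dr + Ds)**2)   # bounce path, via the image source
    k = 2 * np.pi * freq / c            # acoustic wavenumber
    # Complex pressure: the minus sign is the surface phase flip
    p = np.exp(1j * k * R1) / R1 - np.exp(1j * k * R2) / R2
    return np.abs(p) / np.sqrt(2)       # RMS of a sinusoid

# Roughly the setup described above: receiver at 2200 m, a 30 m deep
# source, horizontal ranges out to 2200 m
H = np.linspace(1, 2200, 500)
amp = rms_received(H, Ds=30.0, Dr=2200.0)
```

Sweeping the source depth from 0 to 100 m and plotting the amplitude against horizontal range gives the same kind of banded constructive/destructive pattern.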

Here are the results of that model:


Interference patterns for a series of horizontal ranges and source depths.


Wow, that is a lot of possible variability!  This has been just a quick little experiment, and there’s a significant possibility that I’m still doing something wonky in my code, but based on looking at examples from the literature (for example, Charif et al., 2002), it seems to be in the right ballpark.  Very interesting – this will definitely affect how I interpret my source level estimates.

Line-wrapping in Emacs

It’s been a while, but I’m back to Org-Mode again.  This time I’m using it as a convenient and simple way to draft my paper on source levels (exporting to LaTeX).  And of course, since it’s been a while, I’m constantly having to look things up that I’m certain I knew before.  Like line-wrapping.

Line wrapping that splits lines at the spaces between words (instead of at the screen width, regardless of where you are in a word) is much easier on the eyes.  To turn it on:

M-x visual-line-mode

Hey, presto!

And if you don’t want to have to type that in every time, just put the following line in your .emacs file:

(global-visual-line-mode 1) ; 1 for on, 0 for off.

AND… because I love to see what Google Images will kick back, the exact search terms “line wrapping in emacs 23” gave me this:

To a certain friend of mine (you know who you are):  I tried using Bing, but there was nothing nearly as exciting or as totally unrelated as this.  Sorry.  Next time!

Matched filtering animation

Get ready for the most boring animation, ever. EVER! (don’t say I didn’t warn you). I made this for a talk I gave for the MG&G lunch seminar a couple of weeks ago. I wanted to figure out a way to describe how a matched filter works, and found that I was doing lots of crazy hand gestures that weren’t helping me at all. Matlab animation to the rescue!

Let’s say you have your received time series, s(t), and a simulated version of the transmitted call, x(t). The cross correlator output is simply:

y(t) = \int_0^{T} x(\tau)\,s(t+\tau)\,d\tau

where T is the duration of the simulated call x(t).

The function y(t) has peaks that correspond to onset times of the calls in the original time series.
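If you want to play with this yourself, here’s a minimal Python/numpy version of the same idea (a sketch, not the Matlab code from the animation – the chirp parameters, noise level, and 2 s onset time are all illustrative choices): bury a chirp in noise and cross-correlate.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                              # sample rate in Hz (illustrative)
t = np.arange(0, 1, 1 / fs)

# Simulated transmitted call x(t): a 1 s chirp sweeping 20 -> 400 Hz
x = np.sin(2 * np.pi * (20 * t + 190 * t**2))

# Received time series s(t): 5 s of noise with the call starting at 2 s
s = rng.normal(0.0, 1.0, 5 * fs)
onset = 2 * fs
s[onset:onset + len(x)] += x

# Cross-correlate: y[k] = sum_j x[j] * s[k + j], over every full overlap
y = np.correlate(s, x, mode="valid")

# The peak of y lands at the onset sample of the buried call
print(np.argmax(y) / fs)               # onset time of the call, in seconds
```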

In the top panel, the blue line is the received time series, and the red line is the simulated signal. Buried in the blue time series is the original signal, between seconds 2 and 3. It’s hard to see it by eye, but the matched filter plucks it right out! The black line in the second panel is the output of the cross correlation. The peak aligns exactly with the start time of the signal. Miracle!

Matched filtering video from Michelle Wray on Vimeo.

I know what you’re dying to ask me, and no, Pixar still has not been in touch.

Trying out iOS apps

So yeah, you’d think I had loads of spare time, right? Well, I don’t. I should have been reading that paper for our lab group meeting. But instead, I spent the last couple of hours making this incredibly sophisticated app. Yes! If you push that button, it says “hello world”. I know, I know. Contender for Best App 2011.   You can’t tell from the picture, but when you push the button, the “hello world” pops up.

Yay for tutorials found on the internet!


Alas, John did not think this was the best app ever.  He has high standards, you see.