For a while there, it seemed like I was going to be a graduate student forever…then I graduated and got a job. And then I decided to go *back* to grad school (oh, Michelle…). But then, in December 2016, I successfully defended my dissertation – Huzzah! (I like to tell my nieces I made it to grade 22, how funny/terrifying!) Then about a month and a half after wrapping up my PhD, I started working (remotely) for JASCO Applied Sciences. (oh, and a couple of months after that I had a baby.)

JASCO is a company that does consulting and research for assessing and mitigating underwater noise. They sort of do it all – they design and build super cool underwater acoustic sensors, and they install those sensors and collect data all around the world, often in remote and dangerous locations. They measure sounds produced by marine animals like whales, dolphins, seals, fish, crabs… basically if it makes a sound and lives underwater, JASCO is probably gonna record it at some point. They also record sounds from noise sources like ships and seismic experiments. Then once all those data are collected, JASCO scientists crunch through them – signal processing, acoustic propagation modeling, interpretation, whatever needs to be done to understand what’s happening in the world of underwater sound.

My job at JASCO is a blend of things – data analysis and visualization, but also education and outreach. Here I am at my home-office, working on a comic about marine noise, and how we can measure it:

## Family

First non-science-y post in a while…

## Simple Fourier Transform demo

(Go here to access the interactive visualization: https://michellejw.github.io/FTviz/)

This little project has kept me entertained over the holidays! I thought it would be useful to come up with a really simple interactive web visualization to illustrate what a Fourier Transform is. And of course, it’s also been a great way for me to get some JavaScript practice.
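The demo itself is in JavaScript, but the core idea fits in a few lines of Python. Here’s a rough sketch (the frequencies and amplitudes are made up just for illustration): build a signal out of two sine waves, then use the FFT to recover them.

```python
import numpy as np

fs = 1000                    # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)  # one second of samples

# A signal built from two sine waves: 50 Hz and 120 Hz
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The FFT turns the time-domain signal into its frequency content
spectrum = np.abs(np.fft.rfft(signal)) / (len(t) / 2)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)

# The spectrum peaks at exactly the frequencies we put in
peaks = freqs[spectrum > 0.25]
print(peaks)  # the 50 Hz and 120 Hz components
```

That’s really the whole trick the visualization is trying to show: mix some sine waves together, and the Fourier Transform tells you which ones went in.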

It’s a really simple example, and could definitely be improved – with the ability to add more signals, for example, and a nicer design.

If you want to check out the code, it’s up on my GitHub page. I decided to try out Plotly for the graphs, and used Bootstrap for the layout.

## Wave interference

I’m TAing a marine GIS/ocean mapping course this quarter, and will be teaching a lecture on some aspects of multibeam sonars and the data that you get from them. Which is fun since I haven’t really thought much about multibeam sonars since I worked at a sonar company about 6 years ago.

I was trying to figure out how to explain how a longer array means you can get a narrower beam. It’s all about interference patterns, right? So I wrote a little script in Python (you can access the script directly on my GitHub page).

Let’s say you have two elements spaced a half wavelength apart. You get something like this, with one main lobe:

Cool – there’s just one main lobe of higher amplitude. But then if you pull those elements just a bit further apart – I’m showing a 5 wavelength separation here – you can see a completely different pattern:

So: the wider separation made more beams, and those beams were narrower. Interesting…. What if we wanted to have just one main beam that we could maybe steer around? (ahem. maybe a little like a multibeam sonar??) The next picture shows a single beam produced by a line of 20 elements, all spaced a half wavelength apart from each other. This time we’re zooming out a bit – showing 30 m x 30 m. This simulation also shows what it would look like with a 200 kHz signal – which is a pretty common frequency for a shallow water multibeam sonar.
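If you’d rather poke at the numbers than squint at the figures, here’s a minimal NumPy sketch of the same interference idea. (This isn’t the actual script from my GitHub page – just an illustration, with the element counts and half-wavelength spacing matching the examples above.)

```python
import numpy as np

def array_factor(n_elements, spacing_wavelengths, angles_rad):
    # Far-field response of a line of omnidirectional elements.
    # Spacing is in wavelengths; the angle is measured from broadside.
    # Each element's signal is phase-shifted by k*d*sin(theta)
    # relative to its neighbor, and the contributions interfere.
    k = 2 * np.pi  # wavenumber, using the wavelength as the length unit
    element_idx = np.arange(n_elements)[:, None]
    phases = np.exp(1j * k * spacing_wavelengths * element_idx * np.sin(angles_rad))
    return np.abs(phases.sum(axis=0)) / n_elements

angles = np.radians(np.linspace(-90, 90, 1801))  # 0.1 degree steps

def beamwidth_deg(pattern):
    # Total angular width where the response stays above half power (-3 dB)
    return 0.1 * np.count_nonzero(pattern > 1 / np.sqrt(2))

two = array_factor(2, 0.5, angles)      # two elements, half-wavelength apart
twenty = array_factor(20, 0.5, angles)  # twenty elements, half-wavelength apart

print(beamwidth_deg(two))     # one broad lobe, roughly 60 degrees wide
print(beamwidth_deg(twenty))  # one narrow lobe, roughly 5 degrees wide
```

Same story as the pictures: more elements (a longer array) squeezes the main lobe down into a nice narrow beam.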

## D3 visualization: oceanographic data

After about two frenzied weeks of muddling through JavaScript and D3 (and HTML and CSS), Helena and I managed to wrap up what I think is a pretty neat D3 visualization of oceanographic data for the class we’re taking.

Check it out here: http://cse512-16s.github.io/a3-michellejw-hmvantol/ – just click on the map to see the temperature and salinity profiles. And if you’d like to see what’s going on under the hood (or re-create it from scratch), the whole thing is up on our GitHub page (along with a fairly detailed readme file describing our process).

Helena has done some neat D3 stuff before – check it out on her site here.

## I’m back! Maybe?

Let’s see, it looks like my last post was in November 2014. What can I say? Sometimes life gets crazy for a bit. It’s still crazy now, but I’m excited about some things that are happening. For example: I’m nearing the end of my PhD! I’m tentatively planning to defend in the next few months. Also, I’m taking a really fun (and very intense) data visualization class. So far I’ve had a chance to play around with Tableau and Trifacta Wrangler. But the tool with the biggest learning curve – and perhaps the biggest reward (i.e. awesome interactive web graphics) – is D3. I’m hoping to post some stuff as I learn…

For today, I’m not making my own comic. But Helena sent me a link to this xkcd comic that captures my experience with git so far.

## Making a comic: Whale Scout edition

I’ve been working on a little comic for WhaleScout.org. It’s taken me a while, but it’s finally done, and will be up on the Whale Scout website soon! Woo!

I used the iPad app Procreate, which I’ve been a fan of for a while now. Years, maybe? I loved it from the get-go, and it really just keeps getting better. I’ve tried a lot of different drawing/painting apps, and this one is by far the best. Highly recommend! As for ancillary tools: I used an iPad 3, a Bamboo stylus, and Premiere Pro for some minor editing. I might as well also reveal that I got my best work done while sitting on the couch with Modern Family playing in the background. Don’t question the method, people!

One of the features in the Procreate app allows you to export a video that shows the progression of your artwork. I’m actually not sure anyone other than me will be even vaguely interested in watching it, but here it is anyway.

Making a comic: Whale Scout! from Michelle Wray on Vimeo. Music: Something Elated (Broke For Free) / CC BY 3.0

## Guest post: Missing when you’re shooting at trouble

I am so very pleased to introduce my very first guest blogger: Emily T. Griffiths. I worked with Emily on the CalCurCEAS cruise earlier this month. We had adventures and misadventures and cookies. Lots of cookies. Emily’s background is in zoology and animal biology (undergrad) and marine biology (master’s). She worked as a research technician at the American Museum of Natural History and as a research analyst at Cornell’s Lab of Ornithology. For the last year, she’s been working with NOAA’s Southwest Fisheries Science Center as a wildlife research biologist. And on top of that she is super awesome to hang out with. Just sayin’.

In the last post Michelle went over how we use towed hydrophone arrays to localize and track vocalizing marine mammals as the survey vessel moseys down the transect line. Historically, visual surveys have been the meat and potatoes of marine mammal survey estimates. However, adding an acoustic component to marine mammal surveys can greatly improve the chances of detecting species presence. Both methods have their strengths and weaknesses. The visual team can only see an animal if it comes up for a breath, while acoustics can only detect an animal if it decides to clear its throat. That’s why the combination of both is key. In the case of sperm whales, for example, when you combine visual and acoustic efforts on surveys, the science team has a 92% chance of finding an animal if it’s in the study area. Not too shabby.

But here’s the thing: hydrophone arrays break, like, all of the time. The arrays I’ve been sent out with on CalCurCEAS are the current result of more than ten years of research and development, but they are still a work-in-progress. We are constantly making changes and improvements, or fixing issues that seemingly arise just because too much time has passed since the last issue. I’m on this ship with most of the tools I would need so that, if an array did stop working, I could crack open that sucker in a pinch and attempt to fix it.

This is the reality when you’re depending on high-tech equipment to support your research: it’ll break so often that you’ll become convinced it’s because you looked at it funny. Or because it’s afraid of water. Or it just wanted to be held. All of these considerations about inanimate machinery parade through your mind as you’re running through your fifth deck test because the damn thing works in the lab but not in the open air.

Mechanical frustrations are not limited to hydrophone arrays. Oh no. Fresh out of college, I got a job in an academic research-based microscopy lab. I was charged with learning the ins and outs of the new environmental scanning electron microscope, or ESEM, and the thing was a lemon. ESEMs were cutting edge because the specimen wasn’t destroyed in the imaging process, a hot ticket item when you want to look at the finer details of fossilized rodent teeth. Which we totally did. However, ‘cutting edge’ is frequently synonymous with ‘high-functioning prototype’. I spent my first six months crawling around that machine, guided by the manufacturer’s technician because I had a smaller frame and fingers. A machine based on infant technology can’t really be returned; many are made to order. And if you did manage to return it, the replacement could have the same issues, or woefully different ones. This is a clear ‘the devil you know’ situation, and though the ESEM was never perfect, after a time not only could I mostly fix it by myself, I also got to take some really amazing pictures in the name of research. Like of the dinosaur bone I broke, but that’s a different story.

Later in my career, I spent a year assisting in field tests of an autonomous biological recording device designed to be half the weight and cost of the current leader in the market. Rather than discuss the boring troubleshooting details, here’s a list of notes without context from my notebook to represent the plethora of issues we were dealing with. These notes are presented in the order that I find funniest with my [comments]:

• Good ideas save issues. [Brilliant.]

• The LED indicators → What are they? [I was having a slow day, let’s hope.]

• Red hat fifth model, under-ware. [I have no actual idea what this note means.]

• High power rifles are not common. [Useful.]

• Analysis plan → bunch of SD cards in the mail, now what? [I’m a highly trained scientist.]

• A doodle, after several failed field tests, of a tombstone inscribed with “She tried.” *

Towing a hydrophone array behind a ship is a silly idea for many reasons, outside of pulling a delicate instrument underwater at 10 knots while powering it with electricity and expecting not to die. Ships are incredibly noisy, and they interfere with our data collection. On CalCurCEAS we are deploying working prototypes of autonomous free-floating recorders known as DASBRs (Drifting Acoustic Spar Buoy Recorders). It’s very exciting technology. These devices have a higher chance of recording animals that are ship-shy (and therefore little known), can record in deep waters (current bottom-mounted devices are limited to the continental shelf), and allow us researchers to get a better understanding of ambient noise levels in the ocean.

I’ve been working on the DASBRs for about a year now, and though we’ve certainly had successful deployments, we’ve also had our fair share of “learning experiences.” Which is expected when you’re trying to do something new. In the field you make do with what you’ve got because you can’t anticipate everything, you can only try. Sometimes this means spending hours trying to figure out why an instrument can’t hold a charge (only to realize the plug-in charger has an on/off switch). Sometimes this means using a clamp and a thumbtack to replace a screw in a hole so small, narrow and deep it’s like throwing a dart and hitting a bull’s eye 6 thousand miles away (I am a leaf on the wind) **. Sometimes this means duct taping tennis balls to the bottom of a chair to prevent the chair from moving around because you’re bored and annoyed at the ocean swell pushing you and your chair around.

Even though it’s frequently discouraging and deadening, the embarrassing truth is that troubleshooting is one of my favorite parts of the job. Yes, you miss more than you hit when trying to find the origin of an issue, and by the time you hit you’re so exhausted that it’s more elating to chuck the equipment aside for the day than it is to have actually solved the problem. Besides, you know, in your heart of hearts, that it isn’t solved. It’s just lurking. However, I still love a good puzzle and learning new things. Sometimes I learn things that make my job easier. Sometimes I learn that I’m really crappy at troubleshooting. It’s all valuable and puts me in a better position the next time something breaks. Because it’s not a question of if, only when.

* To my credit, I read Breakfast of Champions after making this doodle, and was simultaneously thrilled that I had organically made a joke that Vonnegut had also made, but disappointed that I couldn’t really claim it as my own.

** I know, I know. This is totally shoehorned in here.  I may not be the Wash, but I’ll tell you what, it’s shiny.

## Hydrophone arrays, FTW!

We do a lot of things out here on the CalCurCEAS cruise – we play cribbage, we eat cookies, we ride the stationary bicycle – but mostly we do these two things, A LOT:

1) Look for whales and dolphins.
2) Listen for whales and dolphins.

Part 1, the “Look” part, is within the realm of the visual team, who stand bravely in wind and cold and rain (and sometimes gorgeous, balmy sunshine) and scan the surface of the ocean for animals that might pop up to say hi and/or take a quick breath of air.

Part 2, the “Listen” part, exists because, well, it’s pretty difficult to see what animals are doing once they dip down beneath the waves. And we all know that’s where the fun stuff happens (synchronized swimming practice? underwater tea parties? you never know!).

Since I’m on the acoustics team, I thought I’d give a little overview of how our operation works.  We eavesdrop on the animals using a pair of hydrophones, which are basically underwater microphones. If we just used one hydrophone, it would be okay, we could hear the animals. But adding a second allows us to not only hear them, but figure out where they are.

The pair of hydrophones is built into the end of a cable that is spooled onto a winch on the back deck. Every morning we head out there, don our life jackets, hard hats, and boots, and reel out the cable until the hydrophones are 300 meters behind us. The hydrophones are spaced one meter apart, and once the vessel is up to speed, they are roughly at the same depth.

The upper part of Figure 2 (blue) shows a cartoon schematic of the hydrophone setup. Here’s what it all means:

$H_1$ and $H_2$ – hydrophones #1 and #2

Known quantities:

$d_h$ – distance between $H_1$ and $H_2$. For our array this distance is 1 meter

$c_w$ – measured sound speed in water (approximately 1500 m/s, but depends on temperature and salinity)

Measured/derived quantities:

$\Delta t$ – time delay between when the signal arrives at $H_1$ and $H_2$

$d'$ – distance associated with the time delay $\Delta t$, derived using $c_w$

Unknown quantity:

$\theta$ – angle between the incoming ray path and the array baseline. This is what we’re solving for!

OK, that seems complicated. But feel free to ignore the math, of course. The basic idea is that depending on where the sound comes from, it can arrive at the hydrophones at different times. For example, in the image above, it hits hydrophone 1 first, and after some amount of time, it hits hydrophone 2. The signals from each of the hydrophones are sent upstairs to the acoustics lab (see bottom part of Figure 2). The call shows up at slightly different times on each of the hydrophone channels, and we can measure that time delay $\Delta t$ very precisely.
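As a rough illustration of how a measurement like that can work (this is a toy sketch, not the software we actually run in the lab – the sample rate, click shape, and delay are all invented), you can cross-correlate the two channels and look for the lag where they line up best:

```python
import numpy as np

fs = 48000  # sample rate in Hz (illustrative)
rng = np.random.default_rng(0)

# Fake a short 5 kHz click buried in noise on two channels,
# arriving 20 samples later on hydrophone 2 than on hydrophone 1
click = np.hanning(64) * np.sin(2 * np.pi * 5000 * np.arange(64) / fs)
h1 = rng.normal(0, 0.05, 2048)
h2 = rng.normal(0, 0.05, 2048)
h1[500:564] += click
h2[520:584] += click

# Cross-correlate the channels; the peak sits at the lag (in samples)
# where the two recordings line up best
corr = np.correlate(h2, h1, mode="full")
lag = corr.argmax() - (len(h1) - 1)
delta_t = lag / fs  # the time delay we're after, in seconds

print(lag)  # → 20
```

At 48 kHz sampling, a 20-sample lag is about 0.4 milliseconds – which is why a one-meter array can resolve such tiny arrival-time differences at all.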

Using the time delay $\Delta t$ and the measured sound speed $c_w$, we can obtain the distance $d'$ using:

$d' = c_w * \Delta t$

So now we’ve got a right triangle where we know the hypotenuse and one other side, and you know what that means – trigonometry time!! Everyone’s favorite time! We finally have what we need to solve for the angle $\theta$.

$\theta = \arccos\left(\frac{d'}{d_h}\right)$
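Stringing those two formulas together makes for a tiny Python sketch (the function name and numbers are mine, just for illustration):

```python
import numpy as np

c_w = 1500.0  # sound speed in water (m/s)
d_h = 1.0     # distance between the hydrophones (m)

def bearing_from_delay(delta_t):
    # Angle (in degrees) between the incoming ray path and the array,
    # from d' = c_w * delta_t and theta = arccos(d' / d_h)
    d_prime = c_w * delta_t
    # clip guards against tiny floating-point overshoot past 1.0
    return np.degrees(np.arccos(np.clip(d_prime / d_h, -1.0, 1.0)))

# No time delay means the sound arrived from broadside (90 degrees):
print(bearing_from_delay(0.0))
# The largest possible delay, d_h / c_w, means it arrived end-on (0 degrees):
print(bearing_from_delay(d_h / c_w))
```

Any delay in between maps to an angle somewhere between end-on and broadside.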

We now know what angle the dolphin is at relative to the array. Huzzah! But wait. There are just a couple of little details that you need to remember (see Figure 3). First: you don’t know how far away the dolphin is. Second: there’s this pesky thing called “left-right ambiguity” *.

From the perspective of the array, there’s no difference between an animal calling from an angle $\theta$ to the left and an animal calling from an angle $\theta$ to the right.

These are fundamental limitations of the method, but we can (sort of) overcome them. As the vessel moves along, and we estimate angles at different locations, we end up with a location where most of the bearing lines intersect. If the vessel is traveling in a straight line, we can get a good idea of range – how far the animal is from the trackline. We just won’t know which side of the trackline it’s on. But if the vessel makes a turn, the new bearings estimated after the turn will resolve which side of the line it’s on!

At this point you might be wondering, Michelle, what assumptions are you making when you do these calculations? So here they are:

Assumptions**:

• The array is horizontal
• The animals are calling at the surface
• The animals are staying in approximately the same location for the duration of the measurements

So there you have it. That’s how we locate animals underwater with a towed, 2-element hydrophone array.

* Yin, one of the amazing visual observers on the cruise, thinks “Left-right ambiguity” would be a great name for a band, and I agree.

** assumptions are made to be broken

## Whales, dolphins, and seabirds, oh my!

Hi all! I’m on a ship called the R/V Ocean Starr, and we’re out on the 2014 California Current Cetacean and Ecosystem Assessment Survey, or CalCurCEAS for short. We are collecting data that will allow NOAA scientists to estimate the abundance of whales and dolphins off the west coast of the U.S. They’ll also be able to use these data to better understand what affects the distribution of marine mammals – where do they like to hang out, and why? We’re gathering these data using two primary methods – visual and acoustic – and are also conducting photo-ID and biopsy sampling of species of special interest.

In addition to the marine mammal portion of the survey, we’re looking at the pelagic ocean ecosystem in the region.  This means taking measurements of various properties of the water, doing net tows, and using acoustic backscatter to look at biomass in the upper part of the water column. There are also two observers onboard to survey seabirds.

I’m out here with an awesome science team and a great crew. There are two other acousticians besides me: Emily T. Griffiths and Amy Van Cise. Emily has been onboard for the first two legs. She makes sure we stay on track and don’t get into (too much) trouble. Amy is a PhD student at Scripps Institution of Oceanography studying pilot whales, and she’s here for the third leg of the cruise, just like me. The three of us all love ice cream and get along famously.

I have one or two shiny new blog posts that I’m hoping to share soon (with comics! woo!), and I might even have a couple of surprise guest posts! Stay tuned…