Hydrophone arrays, FTW!

October 16, 2014 — 5 Comments
Figure 1. Photo of Amy and Emily pulling in the array at the end of the day.

We do a lot of things out here on the CalCurCEAS cruise – we play cribbage, we eat cookies, we ride the stationary bicycle – but mostly we do these two things, A LOT:

1) Look for whales and dolphins.
2) Listen for whales and dolphins.

Part 1, the “Look” part, is within the realm of the visual team, who stand bravely in wind and cold and rain (and sometimes gorgeous, balmy sunshine) and scan the surface of the ocean for animals that might pop up to say hi and/or take a quick breath of air.

Part 2, the “Listen” part, exists because, well, it’s pretty difficult to see what animals are doing once they dip down beneath the waves. And we all know that’s where the fun stuff happens (synchronized swimming practice? underwater tea parties? you never know!).

Since I’m on the acoustics team, I thought I’d give a little overview of how our operation works. We eavesdrop on the animals using a pair of hydrophones, which are basically underwater microphones. If we just used one hydrophone, that would be okay; we could hear the animals. But adding a second allows us not only to hear them, but to figure out where they are.

Figure 2. Top: Array geometry, showing how we can tell which direction a sound is coming from. (The poor creature pictured is my sad attempt at a striped dolphin, if you were wondering) Bottom: Signals measured on each of the hydrophones.

The pair of hydrophones is built into the end of a cable that is spooled onto a winch on the back deck. Every morning we head out there, don our life jackets, hard hats, and boots, and reel out the cable until the hydrophones are 300 meters behind us. The hydrophones are spaced one meter apart, and once the vessel is up to speed, they are roughly at the same depth.

The upper part of Figure 2 (blue) shows a cartoon schematic of the hydrophone setup. Here’s what it all means:

H_1 and H_2 – hydrophones #1 and #2

Known quantities:

d_h – distance between H_1 and H_2. For our array this distance is 1 meter

c_w – measured sound speed in water (approximately 1500 m/s, but depends on temperature and salinity)

Measured/derived quantities:

\Delta t – time delay between when the signal arrives at H_1 and H_2

d' – distance associated with the time delay \Delta t, derived using c_w

Unknown quantity:

\theta – angle between the incoming ray path and the array baseline. This is what we’re solving for!

OK, that seems complicated. But feel free to ignore the math, of course. The basic idea is that depending on where the sound comes from, it can arrive at the hydrophones at different times. For example, in the image above, it hits hydrophone 1 first, and after some amount of time, it hits hydrophone 2. The signals from each of the hydrophones are sent upstairs to the acoustics lab (see bottom part of Figure 2). The call shows up at slightly different times on each of the hydrophone channels, and we can measure that time delay \Delta t very precisely.
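Our software measures that delay for us, but if you’re curious how a computer might find \Delta t, cross-correlation is the usual trick: slide one channel past the other and find the shift where the two signals line up best. Here’s a minimal sketch in Python (a toy illustration, not our actual shipboard software):

```python
import numpy as np

def time_delay(sig1, sig2, fs):
    """Estimate how much later the signal arrives on sig2 than on sig1
    (in seconds) by finding the peak of their cross-correlation."""
    corr = np.correlate(sig2, sig1, mode="full")
    lag = np.argmax(corr) - (len(sig1) - 1)   # best-fit shift in samples
    return lag / fs

# Toy example: the same "click" arrives 12 samples later on channel 2
fs = 48000                        # sample rate (Hz)
click = np.hanning(32)            # stand-in for a dolphin click
h1 = np.zeros(1000); h1[100:132] = click
h2 = np.zeros(1000); h2[112:144] = click
print(time_delay(h1, h2, fs))     # 12 / 48000 = 0.00025 s
```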

Using the time delay \Delta t and the measured sound speed c_w, we can obtain distance d' using:

d' = c_w * \Delta t

So now we’ve got a right triangle where we know the hypotenuse and one other side, and you know what that means – trigonometry time!! Everyone’s favorite time! We finally have what we need to solve for the angle \theta.

\theta = \arccos\left(\frac{d'}{d_h}\right)
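To make that concrete, here’s the whole calculation as a few lines of Python. The time delay is a made-up number just for illustration, and I’m using a flat 1500 m/s for the sound speed:

```python
import numpy as np

d_h = 1.0        # hydrophone spacing (m)
c_w = 1500.0     # nominal sound speed in seawater (m/s)
dt  = 4.2e-4     # measured time delay (s); made up for this example

d_prime = c_w * dt                 # extra distance traveled to H_2 (m)
# d_prime must be <= d_h, or the measured delay is unphysical
theta = np.degrees(np.arccos(d_prime / d_h))
print(f"{theta:.1f} degrees off the array axis")   # ~51 degrees
```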

We now know what angle the dolphin is at relative to the array. Huzzah! But wait. There are just a couple of little details that you need to remember (see Figure 3). First: you don’t know how far away the dolphin is. Second: there’s this pesky thing called “left-right ambiguity” *.


From the perspective of the array, there’s no difference between an animal calling from an angle \theta to the left and an animal calling from an angle \theta to the right.

Figure 3. Left-right and range ambiguity.

These are fundamental limitations of the method, but we can (sort of) overcome them. As the vessel moves along, and we estimate angles at different locations, we end up with a location where most of the bearing lines intersect. If the vessel is traveling in a straight line, we can get a good idea of range – how far the animal is from the trackline. We just won’t know which side of the trackline it’s on. But if the vessel makes a turn, the new bearings estimated after the turn will resolve which side of the line it’s on!


Figure 4. Bearing estimates (red lines) taken at different locations along the track line. Probable location is where most of the lines intersect.
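If you’d like to see the crossing-bearings idea in code, here’s a small sketch: a pretend ship measures perfect bearings to a stationary animal from three points along the trackline, and a least-squares fit recovers the position where the bearing lines intersect. (Toy numbers, and no left-right ambiguity handling here; flipping the sign of every bearing would give the mirror-image position, which is exactly why the ship needs to turn.)

```python
import numpy as np

# Ship positions along a straight trackline (meters); animal is fixed
ship = np.array([[0.0, 0.0], [500.0, 0.0], [1000.0, 0.0]])
animal = np.array([800.0, 300.0])        # true position (unknown to us)

# Perfect bearing (line-of-sight angle) from each ship position
rays = animal - ship
angles = np.arctan2(rays[:, 1], rays[:, 0])

# Each bearing defines a line through ship position p with unit
# direction u. Find the point minimizing its squared distance to all
# lines: solve (sum of projectors) x = (sum of projected points).
A = np.zeros((2, 2)); b = np.zeros(2)
for p, ang in zip(ship, angles):
    u = np.array([np.cos(ang), np.sin(ang)])
    P = np.eye(2) - np.outer(u, u)       # projects onto the line's normal
    A += P; b += P @ p
print(np.linalg.solve(A, b))             # -> [800. 300.]
```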

At this point you might be wondering, Michelle, what assumptions are you making when you do these calculations? So here they are:

Assumptions**:

  • The array is horizontal
  • The animals are calling at the surface
  • The animals are staying in approximately the same location for the duration of the measurements

So there you have it. That’s how we locate animals underwater with a towed, 2-element hydrophone array.

 

* Yin, one of the amazing visual observers on the cruise, thinks “Left-right ambiguity” would be a great name for a band, and I agree.

** assumptions are made to be broken

[Photo: the R/V Ocean Starr]

Hi all! I’m on a ship called the R/V Ocean Starr, and we’re out on the 2014 California Current Cetacean and Ecosystem Assessment Survey, or CalCurCEAS for short. We are collecting data that will allow NOAA scientists to estimate the abundance of whales and dolphins off the west coast of the U.S. They’ll also be able to use these data to better understand what affects the distribution of marine mammals – where do they like to hang out, and why? We’re gathering these data using two primary methods, visual and acoustic, and we’re also conducting photo-ID and biopsy sampling of species of special interest.

In addition to the marine mammal portion of the survey, we’re looking at the pelagic ocean ecosystem in the region.  This means taking measurements of various properties of the water, doing net tows, and using acoustic backscatter to look at biomass in the upper part of the water column. There are also two observers onboard to survey seabirds.

I’m out here with an awesome science team and a great crew. There are two other acousticians besides me: Emily T. Griffiths and Amy Van Cise. Emily has been onboard for the first two legs. She makes sure we stay on track and don’t get into (too much) trouble. Amy is a PhD student at Scripps Institution of Oceanography studying pilot whales, and she’s here for the third leg of the cruise, just like me. The three of us all love ice cream and get along famously.

I have one or two shiny new blog posts that I’m hoping to share soon (with comics! woo!), and I might even have a couple of surprise guest posts! Stay tuned…

A couple of people have expressed interest, so here is a little tutorial describing how I created the spectrogram animation that you can find here: http://www.michw.com/2014/02/spectrogram-animation/

You should know a few things:

1) I’m not an AE expert, you guys!

2) There are a lot of ways to accomplish what I’m about to explain; this is just one way, and it may not even be the best way.

3) If anything isn’t clear, then please let me know so I can try to fix it!

4) Many thanks to Christian Petropolis, who showed me how to do this in the first place!

What you’ll need

First, let’s start with ingredients, the things you’re going to need right from the get-go to be able to start this tutorial:

– A spectrogram! Here’s mine. Nothing special – just a standard Matlab-generated spectrogram in png format. A jpeg or tif or even a Photoshop file would probably be just fine as well – any image that you can import into After Effects. (If you’d like to generate one yourself, see the sketch just after this list.)
[Image: FinCalls-and-earthquake-NC89 spectrogram]

– Your audio file! Again, mine is just a wav file generated by Matlab. Make sure it’s the audio that matches your spectrogram!

– Adobe After Effects (sorry – I know it’s expensive – I managed to get a good deal on the Adobe Creative Cloud through my university bookstore; if you’re a student or teacher you can probably find a way to get it relatively inexpensively)
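By the way, if you want to cook up the first two ingredients yourself and you’d rather not use Matlab, here’s a rough Python equivalent (the filename is hypothetical; any wav you have will do):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("my_recording.wav")   # hypothetical filename
if audio.ndim > 1:
    audio = audio[:, 0]                        # keep a single channel

f, t, Sxx = spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")  # dB scale
plt.xlabel("Time (s)"); plt.ylabel("Frequency (Hz)")
plt.savefig("spectrogram.png", dpi=150, bbox_inches="tight")
```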

Let’s get down to business

1) Open After Effects! Select File -> New -> New Project.

2) Select File -> Import -> File…

Then select both the spectrogram image file and the accompanying audio file.

3) Select both files from the Project tab and drag them onto the “Composition” icon:

[Screenshot: startComposition]

 

4) A window will open; select “Single Composition”, and set the duration to approximately the length of your audio file (don’t worry, you can change this later if it’s not right)

[Screenshot: newComp]

 

5) In the Composition tab, right-click and rename the layers if you like:

[Screenshot: layernames]

 

6) Select the image layer (in my case, it’s called “Spectro”). Use Shortcut keys (cmd-C/ctrl-C and cmd-V/ctrl-V) or the Edit pull-down menu to make two duplicates.

7) Rename the two new layers to “GlowLayer” and “DarkLayer” (or whatever you like, that’s what I’m going to call them). Make sure “GlowLayer” is on top.

[Screenshot: newlayers]

8) Click on the “GlowLayer” to make it active, select the “rectangle tool” icon, then draw a thin rectangle on the spectrogram. This will be the glowing bar that slides across as you play the sound.

[Screenshot: threesteps]

 

9) With the “GlowLayer” still selected, Click the pulldown menu Effect -> Stylize -> Glow. The Glow effect controls should pop up in their own tab. Here’s how I set it up, but do whatever works for your image. The only things I changed were Glow Threshold (22%) and Glow Intensity (1.8).

[Screenshot: GlowSettings]

10) Select Layer (pulldown) -> New -> Shape Layer, and drag that new layer to just below “DarkLayer”. I renamed my new Shape layer “darkRectangle”.

[Screenshot: darkRectangle]

11) With the “darkRectangle” layer selected, click the “Rectangle tool” button. Draw a rectangle on your spectrogram like this (see image). Click “Fill Options” and choose “solid color” and ~60% opacity, then click “Fill color” and choose a medium gray.

[Screenshot: RectProperties]

12) Select “DarkLayer”, choose the “Rectangle tool”, and draw another rectangle the same way as in step 11. The difference is that this time, because of the layer type, it’s going to make a mask, not a rectangle.

13) Click the check box to “Invert mask”. You should see the dark colored rectangle show through at this point.

[Screenshot: DarkMask]

14) Now for the fun part. Make sure the “scrubber” is at time = zero. The scrubber is the little yellow triangle on the timeline that lets you move forward and backward in time.

15) Click the triangle icon beside the Mask layer under “DarkLayer” to see more options.

16) Click the little clock icon beside “Mask Path”, and you should see your first “keyframe” show up (yellow diamond). If you’re unfamiliar with After Effects, you should know that it’s *all about keyframes*. Seriously.

[Screenshot: firstkeyframe]

 

17)  Drag your keyframe scrubber to the end of your audio segment.  The audio segment is shown by a green bar on my timeline.

18) Click on the mask layer to activate it, if it’s not already active (my mask layer is called “Mask 1” in the image above). Click the right arrow on your keyboard until the left side of the mask rectangle is lined up with the right edge of the spectrogram image. (Hint: use shift+arrow to move in larger increments.) Now if you move your scrubber back and forth you should see the “shadow” move across your spectrogram.

[Screenshot: MoveMask]

19) Now we’re going to animate our “GlowLayer” in much the same way. Select the GlowLayer, and again open up the “Mask 1” options.

20) Move the scrubber to zero time again. Add a Mask Path keyframe.

21) Move the scrubber to the end of the audio segment.

22) Make sure “Mask 1” under the “GlowLayer” is selected, and move the glow bar over using the right arrow key. Stop when the right side of the rectangle mask is lined up with the right edge of the spectrogram.

23) Congratulations! You’ve animated your spectrogram. Now if you’d like to preview it, you can use the RAM preview. Just hit the little triangle on the far right of the preview panel, and there you go. It’ll run through it once to render the video and audio, then you can behold your masterpiece. A little warning: if your computer is a bit slow, this preview can be jumpy. But once you export it, it should be just fine.

[Screenshot: RamPreview]

24) Exporting! File -> Export -> Add to Render Queue. Choose your render settings and output location, then click the “Render” button. (The defaults are probably a decent place to start if you’re not sure.)

I finally posted my latest video a couple of weeks ago. You can check it out here:

Sea Level Rise in Pohnpei from Michelle Wray on Vimeo.

This one was a lot of fun to make, and I learned a lot along the way. Unfortunately, there’s really not one single place to learn all of this, so I ended up watching tons of YouTube tutorials. At some point, I’d love to compile my own little set of tutorials to show others how to do this, but for now, here’s the overview:

1) Research the topic (interviews and lots of web searches mostly)

2) Storyboard: what order makes sense? What should I cover? This is a very rough draft and consists of words and some simple sketches

3) Write the script: Each paragraph is one, or maybe two, “scenes”. My scripts have been about 1.5-2 pages long in MS Word (12 pt font)

4) Record script audio (I use a Blue Yeti mic)

5) Sketch out scenes in more detail, like this:

[Image: Pohnpei sketch]

6) Adobe Illustrator: Draw things out using Illustrator + Wacom tablet (or mouse/trackpad). Save each section of the image as a separate layer. If I want to do the “drawing in” effect later, I reverse the order of elements in each layer.

7) Adobe After Effects: Import the Illustrator file, along with the audio. Use the trim paths effect to make it look like it’s being drawn in. Sync audio and animation using keyframes.

8) Adobe Premiere Pro: Import scenes created in steps 6 & 7, put them in the right order, and add a soundtrack on top of it all.

That’s the gist of it anyway. Hope to give more details on specific sections at some point. For now, feel free to ask questions in the comments below.

Spectrogram animation

February 22, 2014 — Leave a comment

As some of you may know, I’ve been learning about video editing lately. For the most part, it revolves around the online introductory oceanography class I’m helping develop, but there are a couple of other things I’m playing around with too.

My family is visiting from Canada, and my brother-in-law, Christian, has been showing me some cool tricks in Adobe After Effects. I often use a spectrogram to help describe how we “look at” sounds. Spectrograms are pretty straightforward if you’ve seen them before, but can be a bit strange the first time you see one.

In the past, I’ve shown a spectrogram and then included a separate sound file, describing in words how they fit together. But I’ve finally figured out how to pull it all together in After Effects, and it’s surprisingly easy.

This video is pretty simple and is not much more than a couple of masks and a glow effect, with (of course) strategically placed keyframes. Here it is – enjoy!
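(And if you’d rather script the whole thing instead of using After Effects, the same effect can be faked with code. Here’s a rough sketch in Python/matplotlib that sweeps a highlight bar across a spectrogram image; it’s not what I did for the video above, just an alternative. The filename and timing numbers are hypothetical, and you’d still need to add the audio track afterward in an editor.)

```python
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from matplotlib import animation

img = mpimg.imread("spectrogram.png")       # hypothetical filename
duration, fps = 10.0, 30                    # audio length (s) and frame rate

fig, ax = plt.subplots()
ax.imshow(img); ax.axis("off")
bar = ax.axvline(0, color="yellow", lw=3, alpha=0.7)   # the sliding bar

def update(frame):
    x = img.shape[1] * frame / (duration * fps)  # current position in pixels
    bar.set_xdata([x, x])
    return (bar,)

anim = animation.FuncAnimation(fig, update, frames=int(duration * fps))
anim.save("sweep.mp4", fps=fps)             # requires ffmpeg on your system
```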

Argo Floats!!

January 26, 2014 — 2 Comments

If you have ever sat there and wondered to yourself, “What are Argo Floats? And why are they the coolest things ever?” – Look no further, my friend. I have just the (science-comics) video for you.  It’s based on an interview I did a while back with Rick Rupan, who runs the Argo Floats program at the University of Washington.

This format is a bit experimental for me, but I LOVE this style, and wanted to give it a whirl. I sort of just jumped in, and because I was (am!) so very clueless, I asked basically everyone I know (and some people who I don’t know) to give me feedback. The suggestions started rolling in, and I tried to incorporate as many as I could.

I really hope you enjoy this, and if I’m lucky, you’ll even learn something along the way :-) I do plan to make more of these, so if you have any feedback for me after watching this, it would be a great help.

Okay, enough babbling. Go ahead and watch the video already!

 

 

 

Doodle videos, heck yeah!

January 23, 2014 — 3 Comments

My latest project is figuring out how to make a doodle video. Or a speed drawing animation. I don’t actually know what they’re called, but you’ll get the idea if you check out the videos below. I started off attempting to use my iPhone. Setting the iPhone up to record was… awkward:

completely professional.

I did my video this way, posted it to YouTube, and got a bunch of feedback from my awesome Facebook and Twitter friends. I tried to take that feedback into account as best I could and eventually moved on to my Nikon D5100 (DSLR). I even set it up on an actual tripod (Thanks, Andrew Shao!).

[Image: slrvideo]

Recording on my SLR was pretty easy too, once I got it adjusted. I did have some issues with focusing – not that it was hard, I just kept forgetting to check. Oops. Editing for the SLR video was done in iMovie, and the audio was recorded in Hindenburg Journalist. Here’s a 30-second sample from this experiment. I hope you agree that it has improved, at least marginally, from my first attempt!

Next, I decided to try doing the video using Camtasia. Luckily they have a free trial, so I gave that a whirl. Camtasia is for screen recording and video editing, and was really easy to jump into and start using right away. I did the drawings in Adobe Photoshop using my Wacom Intuos4 tablet/stylus. Here’s the 30-second sample using the “all-digital” method:

Dear reader, if you have been patient and kind enough to actually read all this, and more importantly, watch the two videos, please feel free to give me feedback in the comments (below) or in email or on Twitter or Facebook!

According to John, the best thing about Sharknado is that they drive around in a Landcruiser, quite a lot like this one:


John’s 1987 Toyota Landcruiser

If you know John at all, you know that he LOVES that truck, and spends large chunks of his free time fixing it, taking it apart, restoring it, replacing parts, etc.

Here’s a sample of his Sharknado commentary:

[Image: sharknado_landcruiser]

The commentary continued even after the Landcruiser was obliterated, even if only to berate all other vehicles that turned up in the movie. Especially Hummers.

[Image: sharknado_comic2]

All in all, despite the filmmaker’s poor decision to destroy the world’s best truck, Sharknado was as epic as we ever could have hoped for in our wildest dreams. We encourage you all to enjoy it this holiday season.

[Image: sharknado_poster]

It started with “drunk” pelicans. While studying aquatic sciences at the University of California, Santa Barbara, Liz Tobin noticed that there was something wrong with the fish-gulping birds. They exhibited unusual behavior: getting hit by cars, and sometimes simply falling out of the sky. The pelicans suffered from seizures induced by domoic acid poisoning, a particularly dangerous affliction when it hit mid-flight. Biologists traced the source of the poisoning to rapid springtime growth (a bloom) of Pseudo-nitzschia, a tiny, single-celled organism distantly related to the giant kelps thriving in nearby coastal waters. Small fish and shellfish consume these algae, and although they are not harmed, they store the toxins produced by the algal cells and pass them on to their predators, including seabirds and marine mammals.

Scientists refer to these incidents as harmful algal blooms, or HABs. They are not restricted to the waters of southern California, though, and Pseudo-nitzschia cells are not the only culprits; a number of other algal species can cause trouble for local marine ecosystems. HABs occur in coastal waters around the globe, with larger and more frequent blooms being linked to a warming sea surface [1] and to increased nutrient runoff from land [2], a common by-product of animal and plant agriculture. Local economies suffer due to tourism losses, and local residents can’t harvest shellfish from their beaches.

After wrapping up a bachelor’s degree at UCSB, Liz moved on to graduate school at the University of Washington so that she could study harmful algal species in Puget Sound. One of the species she’s interested in is Alexandrium catenella, another single-celled marine alga, which produces a suite of deadly neurotoxins causing paralytic shellfish poisoning, or PSP. Alexandrium blooms can become so intense that they give the water a reddish-brown tinge, often called a “red tide” (although they don’t always cause water discoloration).

[Image: Oysters]

If you eat a shellfish that has accumulated Alexandrium toxins, a series of unpleasant events unfolds. First, your fingers and toes will tingle and your lips will feel numb.  Next, you may become nauseous and unsteady on your feet.  If the toxins hit you at full force, your diaphragm will become paralyzed, which means that you are unable to pull oxygen into your lungs.  Without prolonged artificial respiration, a severe case of PSP can lead to death in a matter of hours.

In recent years, poisoning cases in Washington State have been rare, but they do happen. The Washington State Department of Health aims to prevent outbreaks by regularly testing shellfish along Puget Sound beaches, and commercially harvested shellfish are tested exhaustively before hitting the markets. Unfortunately, it’s tough to predict the severity, location, and timing of an Alexandrium bloom. That’s where Liz’s work fits in: she’d like to improve our ability to forecast where blooms will happen so that public health officials, fisheries managers, and the shellfish industry have more time to react.

[Image: LifeCycle]

Alexandrium cells have a comfort zone, she explains. They grow and divide furiously when the days are long and the surface waters are warm. Once the spring bloom ramps up, zooplankton, fish, and shellfish feed on the algae, eventually decimating Alexandrium populations by early to mid summer. Fewer Alexandrium cells mean less food for their predators, which begin to die off or find food elsewhere. By late August, with newly lowered predation pressure and plenty of sunlight, a second bloom typically occurs. As the season draws to a close, Alexandrium cells feel the effects of shorter days and less sunlight. They struggle to grow and divide, and at some point they cut their losses and transit down to the sediment to wait out the winter. Settled into the muddy seafloor, they shift to survival mode, also known as the “cyst” stage. The exact conditions that send them into their cyst stage are poorly understood, as are the details of how they wake up and make their way to the surface come springtime.

Liz is particularly interested in how the Alexandrium cells swim, even though they are mostly at the whim of the currents.  Tides rush in and out twice each day, racing through narrows and washing languidly over mud flats.  Rivers pour into Puget Sound with irregular pulses of freshwater from rain and springtime snow melt. Alexandrium algal cells propel themselves through the water using a small whip-like appendage called a flagellum, but because of their tiny size – only about a third of the width of a single human hair – they can’t overcome even the weakest currents. However, if Liz knows their vertical swimming capabilities, she can better understand how quickly Alexandrium cells migrate from the seafloor to the surface (where blooms occur) and back again.
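(To get a feel for the numbers: at dinoflagellate-scale swimming speeds, the seafloor-to-surface commute is on the order of a day. The figures below are my own back-of-the-envelope assumptions, not numbers from Liz’s work.)

```python
# Back-of-the-envelope only; both numbers below are assumptions.
swim_speed = 0.9   # m/hour (~250 micrometers/s, a typical dinoflagellate scale)
depth = 20.0       # m, an assumed water depth
print(f"{depth / swim_speed:.0f} hours to swim to the surface")  # ~22 hours
```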

A lot of researchers, Liz included, study Alexandrium in the lab.  They blend solutions of water and various chemicals to imitate seawater, and observe how the cells behave and react to a variety of environmental triggers.  One of the labs she works in is a tiny bunker-like space on the second floor of the Ocean Teaching Building.  Two long black tables running along either side of the room are covered by a hodge-podge of beakers, elaborate video recording setups, and circuit boards sprouting nests of red and white wires.  At the back of the room, a giant insulated door opens to a fully programmable walk-in refrigerator, where algal cells can be subjected to controlled shifts in temperature and light.

Even at their best, however, lab experiments can’t possibly recreate the many complexities that exist in the natural world.  Liz knew that if she could monitor the emergence of the Alexandrium cells from the sediments she could combine those observations with detailed fluid flow models to better predict where the cells would eventually concentrate.  The only problem was that there were no “off-the-shelf” instruments that were capable of doing what she wanted.  So she set out to build her own.

[Image: soldering]

When she starts describing the instrument she’s working on, I interrupt. Pushing my notebook and pen across the table, I ask her to sketch her design for me. She draws a tube a few inches in diameter, sitting upright on the seabed. On one side of the tube, a camera in a waterproof housing points into a tiny window to capture any Alexandrium cells swimming past. The end of a plankton trap sits on top, ready to catch any upward-swimming cells so they can later be compared with the video. She pauses her drawing to look up at me. “It’s just like what we would do in the lab,” she says, “but instead of a camera looking at a tank, I have a camera looking into a chamber on the seafloor.”

As of mid-July 2013, Liz’s prototype of her seafloor camera was nearly ready to be deployed for testing.  If it works, she’ll try to capture Alexandrium cells emerging from the seafloor just prior to the late-summer bloom.  Her vision for the future, she tells me, is a network of these instruments deployed across Puget Sound, monitoring cyst emergence in real time.  Instead of only knowing about blooms once they are in full swing, they could be predicted days or weeks in advance, avoiding unnecessary closures and potentially saving lives.

[1] Climate change and harmful algal blooms, NCCOS http://www.cop.noaa.gov/stressors/extremeevents/hab/current/CC_habs.aspx

[2] Anderson, Donald M., Patricia M. Glibert, and Joann M. Burkholder. “Harmful algal blooms and eutrophication: nutrient sources, composition, and consequences.” Estuaries 25, no. 4 (2002): 704-726.
http://www.whoi.edu/fileserver.do?id=47044&pt=2&p=28251

 

Teaching and tweeting

September 26, 2013 — Leave a comment

[Image: MOR tweet]

It’s been a while since I’ve been a TA in the traditional sense – you know, sitting in class, running and grading labs, answering questions from inquisitive minds (or at least referring them to smarter people than myself)… So I’m pretty excited to be back at it this quarter. Yesterday was our very first class and today is our first lab – huzzah!

I decided yesterday, somewhat on a whim, to take the class to Twitter – I’ll be tweeting class-related goodies using the hashtag #Ocean410TA, in case you want to follow along – whether you’re in the class or not. I’d be stoked to hear from people who are just curious about what we’re doing. The class is a senior-level (4th year for you Canadian friends) marine geology and geophysics class – we basically learn about how the ocean basins form, and why they look like they do. Underwater earthquakes and volcanoes! The coolest.

Has anyone out there tweeted a class before? Any advice or thoughts?

And in case you’re wondering why scientists should tweet, check out this blog post on the AGU website and another one on the Deep Sea News site.