Guest post: Missing when you’re shooting at trouble


I am so very pleased to introduce my very first guest blogger: Emily T. Griffiths. I worked with Emily on the CalCurCEAS cruise earlier this month. We had adventures and misadventures and cookies. Lots of cookies. Emily’s background is in zoology and animal biology (undergrad) and marine biology (master’s). She worked as a research technician at the American Museum of Natural History and as a research analyst at Cornell’s Lab of Ornithology. For the last year, she’s been working with NOAA’s Southwest Fisheries Science Center as a wildlife research biologist. And on top of that she is super awesome to hang out with. Just sayin’.

In the last post Michelle went over how we use towed hydrophone arrays to localize and track vocalizing marine mammals as the survey vessel moseys down the transect line. Historically, visual surveys have been the meat and potatoes of marine mammal abundance estimates. However, adding an acoustic component to marine mammal surveys can greatly improve the chances of detecting species presence. Both methods have their strengths and weaknesses. The visual team can only see an animal if it comes up for a breath, while acoustics can only detect an animal if it decides to clear its throat. That’s why the combination of the two is key. In the case of sperm whales, for example, combining visual and acoustic efforts gives the science team a 92% chance of finding an animal if it is in the study area. Not too shabby.

But here’s the thing: hydrophone arrays break, like, all of the time. The arrays I’ve been sent out with on CalCurCEAS are the current result of more than ten years of research and development, but they are still a work in progress. We are constantly making changes and improvements, or fixing issues that seemingly arise just because too much time has passed since the last issue. I’m on this ship with most of the tools I would need so that if an array did stop working I could, in a pinch, crack open that sucker and attempt to fix it.

This is the reality when you’re depending on high-tech equipment to support your research: it’ll break so often that you’ll become convinced it’s because you looked at it funny. Or because it’s afraid of water. Or it just wanted to be held. All of these considerations about inanimate machinery parade through your mind as you’re running through your fifth deck test because the damn thing works in the lab but not in the open air.

Mechanical frustrations are not limited to hydrophone arrays. Oh no. Fresh out of college, I got a job in an academic research-based microscopy lab. I was charged with learning the ins and outs of the new environmental scanning electron microscope, or ESEM, and the thing was a lemon. ESEMs were cutting edge because the specimen wasn’t destroyed in the imaging process, a hot-ticket feature when you want to look at the finer details of fossilized rodent teeth. Which we totally did. However, ‘cutting edge’ is frequently synonymous with ‘high-functioning prototype’. I spent my first six months crawling around that machine, guided by the manufacturer’s technician because I had a smaller frame and fingers. A machine based on infant technology cannot really be returned; many are made to order. And if you did manage to return it, the replacement could have the same issues, or woefully different ones. This is a clear ‘the devil you know’ situation, and though the ESEM was never perfect, after a time not only could I mostly fix it by myself, I also got to take some really amazing pictures in the name of research. Like of the dinosaur bone I broke, but that’s a different story.

Later in my career, I spent a year assisting in field tests of an autonomous biological recording device designed to be half the weight and cost of the current leader in the market. Rather than discuss the boring troubleshooting details, here’s a list of notes without context from my notebook to represent the plethora of issues we were dealing with. These notes are presented in the order that I find funniest with my [comments]:

  • We need access to all. [All your base are belong to us.]

  • Good ideas save issues. [Brilliant.]

  • The LED indicators → What are they? [I was having a slow day, let’s hope.]

  • Buy batteries, ask Anne how. [Do I not know how to buy batteries?]

  • Adam = Rob

  • Red hat fifth model, under-ware. [I have no actual idea what this note means.]

  • High power rifles are not common. [Useful.]

  • Analysis plan → bunch of SD cards in the mail, now what? [I’m a highly trained scientist.]

  • A doodle, after several failed field tests, of a tombstone inscribed with “She tried.” *

Towing a hydrophone array behind a ship is a silly idea for many reasons, beyond pulling a delicate instrument underwater at 10 knots while powering it with electricity and expecting not to die. Ships are incredibly noisy, and they interfere with our data collection. On CalCurCEAS we are deploying working prototypes of autonomous free-floating recorders known as DASBRs (Drifting Acoustic Spar Buoy Recorders). It’s very exciting technology. These devices have a higher chance of recording animals that are ship-shy (and therefore little known), can record in deep waters (current bottom-mounted devices are limited to the continental shelf), and allow us researchers to get a better understanding of ambient noise levels in the ocean.

DASBR glamor shot, on deck, immediately prior to deployment.

I’ve been working on the DASBRs for about a year now, and though we’ve certainly had successful deployments, we’ve also had our fair share of “learning experiences.” Which is expected when you’re trying to do something new. In the field you make do with what you’ve got, because you can’t anticipate everything; you can only try. Sometimes this means spending hours trying to figure out why an instrument can’t hold a charge (only to discover that the plug-in charger has an on/off switch). Sometimes this means using a clamp and a thumbtack to replace a screw in a hole so small, narrow, and deep that it’s like throwing a dart and hitting a bull’s eye six thousand miles away (I am a leaf on the wind) **. Sometimes this means duct-taping tennis balls to the bottom of a chair to keep it from sliding, because you’re bored and annoyed at the ocean swell pushing you and your chair around.


Even though it’s frequently discouraging and deadening, the embarrassing truth is that troubleshooting is one of my favorite parts of the job. Yes, you miss more than you hit when trying to find the origin of an issue, and by the time you hit you’re so exhausted that chucking the equipment aside for the day is more elating than knowing the problem is solved. Besides, you know, in your heart of hearts, that it isn’t. It’s just lurking. However, I still love a good puzzle and learning new things. Sometimes I learn things that make my job easier. Sometimes I learn that I’m really crappy at troubleshooting. It’s all valuable, and it puts me in a better position the next time something breaks. Because it’s not a question of if, only when.

* To my credit, I read Breakfast of Champions after making this doodle, and was simultaneously thrilled that I had organically made a joke that Vonnegut had also made, but disappointed that I couldn’t really claim it as my own.

** I know, I know. This is totally shoehorned in here. I may not be Wash, but I’ll tell you what, it’s shiny.

Hydrophone arrays, FTW!

Figure 1. Photo of Amy and Emily deploying the array off the back deck of the R/V Ocean Starr.
Figure 1. Photo of Amy and Emily pulling in the array at the end of the day.

We do a lot of things out here on the CalCurCEAS cruise – we play cribbage, we eat cookies, we ride the stationary bicycle – but mostly we do these two things, A LOT:

1) Look for whales and dolphins.
2) Listen for whales and dolphins.

Part 1, the “Look” part, is within the realm of the visual team, who stand bravely in wind and cold and rain (and sometimes gorgeous, balmy sunshine) and scan the surface of the ocean for animals that might pop up to say hi and/or take a quick breath of air.

Part 2, the “Listen” part, exists because, well, it’s pretty difficult to see what animals are doing once they dip down beneath the waves. And we all know that’s where the fun stuff happens (synchronized swimming practice? underwater tea parties? you never know!).

Since I’m on the acoustics team, I thought I’d give a little overview of how our operation works.  We eavesdrop on the animals using a pair of hydrophones, which are basically underwater microphones. If we used just one hydrophone, we could still hear the animals; adding a second allows us to not only hear them, but also figure out where they are.

Figure 2. Top: Array geometry, showing how we can tell which direction a sound is coming from. (The poor creature pictured is my sad attempt at a striped dolphin, if you were wondering) Bottom: Signals measured on each of the hydrophones.

The two hydrophones are built into the end of a cable that is spooled onto a winch on the back deck. Every morning we head out there, don our life jackets, hard hats, and boots, and reel out the cable until the hydrophones are 300 meters behind us. The hydrophones are spaced one meter apart, and once the vessel is up to speed, they are roughly at the same depth.

The upper part of Figure 2 (blue) shows a cartoon schematic of the hydrophone setup. Here’s what it all means:

H_1 and H_2 – hydrophones #1 and #2

Known quantities:

d_h – distance between H_1 and H_2. For our array this distance is 1 meter

c_w – measured sound speed in water (approximately 1500 m/s, but depends on temperature and salinity)

Measured/derived quantities:

\Delta t – time delay between when the signal arrives at H_1 and H_2

d' – distance associated with the time delay \Delta t, derived using c_w

Unknown quantity:

\theta – angle between the incoming ray path and the array baseline. This is what we’re solving for!
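A quick aside on c_w: we measure it, but if you want a feel for how temperature, salinity, and depth enter, Medwin’s (1975) simplified formula is a common approximation. This little sketch is my own illustration (not necessarily what the survey software uses):

```python
def sound_speed(T, S, z):
    """Approximate sound speed in seawater (m/s) via Medwin's (1975) formula.

    T: temperature (deg C), S: salinity (ppt), z: depth (m).
    Valid roughly for 0 <= T <= 35, 0 <= S <= 45, 0 <= z <= 1000.
    """
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35) + 0.016 * z)

# e.g. plausible near-surface conditions off the California coast:
print(round(sound_speed(12.0, 33.5, 5.0), 1))  # → 1495.2
```

So “approximately 1500 m/s” is a good rule of thumb, but the real value shifts by a few meters per second with the water conditions, which is why we measure it.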

OK, that seems complicated. But feel free to ignore the math, of course. The basic idea is that depending on where the sound comes from, it arrives at the two hydrophones at different times. For example, in the image above, it hits hydrophone 1 first, and after some amount of time, it hits hydrophone 2. The signals from each of the hydrophones are sent upstairs to the acoustics lab (see bottom part of Figure 2). The call shows up at slightly different times on each of the hydrophone channels, and we can measure that time delay \Delta t very precisely.
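How do you measure \Delta t precisely? The standard trick is cross-correlation: slide one channel against the other and find the lag where they line up best. Here’s a minimal numpy sketch (the function name and toy signals are mine; the actual software on the ship is surely fancier):

```python
import numpy as np

def time_delay(ch1, ch2, fs):
    """Estimate the delay of ch1 relative to ch2 (in seconds) by cross-correlation.

    fs is the sampling rate in Hz. A positive result means the call
    arrived on channel 2 first.
    """
    corr = np.correlate(ch1, ch2, mode="full")
    lag = np.argmax(corr) - (len(ch2) - 1)   # lag in samples
    return lag / fs

# toy example: an impulse that hits channel 2 five samples before channel 1
fs = 48000
ch2 = np.zeros(100); ch2[10] = 1.0
ch1 = np.zeros(100); ch1[15] = 1.0
print(time_delay(ch1, ch2, fs) * 1e6)  # delay in microseconds
```

The time resolution is set by the sampling rate (interpolating around the correlation peak can do even better), which is how the delay can be measured “very precisely.”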

Using the time delay \Delta t and the measured sound speed c_w, we can obtain distance d' using:

d' = c_w * \Delta t

So now we’ve got a right triangle where we know the hypotenuse and one other side, and you know what that means – trigonometry time!! Everyone’s favorite time! We finally have what we need to solve for the angle \theta.

\theta = acos( \frac{d'}{d_h})
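Putting the two equations together in code, the whole bearing calculation fits in a few lines (a sketch with the quantities defined above; defaults match our 1 m array and the nominal 1500 m/s sound speed):

```python
import math

def bearing(delta_t, d_h=1.0, c_w=1500.0):
    """Angle theta (degrees) between the incoming ray path and the array baseline.

    delta_t: time delay between the hydrophones (s)
    d_h:     hydrophone spacing (m)
    c_w:     sound speed in water (m/s)
    """
    d_prime = c_w * delta_t              # d' = c_w * Δt
    ratio = d_prime / d_h
    if not -1.0 <= ratio <= 1.0:
        raise ValueError("Δt too large for this geometry (d' cannot exceed d_h)")
    return math.degrees(math.acos(ratio))

print(bearing(0.0))          # sound from directly abeam: Δt = 0 → 90°
print(bearing(1.0 / 1500))   # sound arriving along the baseline → 0°
```

Note the physical limit built in: the path-length difference d' can never exceed the hydrophone spacing d_h, so any measured delay implying otherwise is an error.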

We now know what angle the dolphin is at relative to the array. Huzzah! But wait. There are just a couple of little details that you need to remember (see Figure 3). First: you don’t know how far away the dolphin is. Second: there’s this pesky thing called “left-right ambiguity” *.


From the perspective of the array, there’s no difference between an animal calling from an angle \theta to the left and an animal calling from an angle \theta to the right.

Figure 3. Left-right and range ambiguity.

These are fundamental limitations of the method, but we can (sort of) overcome them. As the vessel moves along, and we estimate angles at different locations, we end up with a location where most of the bearing lines intersect. If the vessel is traveling in a straight line, we can get a good idea of range – how far the animal is from the trackline. We just won’t know which side of the trackline it’s on. But if the vessel makes a turn, the new bearings estimated after the turn will resolve which side of the line it’s on!

Figure 4. Bearing estimates (red lines) taken at different locations along the track line. Probable location is where most of the lines intersect.
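The bearing-crossing idea is easy to sketch in code: treat each bearing as a ray from the ship’s position and solve for where two rays meet. In 2-D that’s a small linear system (the coordinates and angles below are made up for illustration):

```python
import numpy as np

def cross_fix(p1, theta1, p2, theta2):
    """Intersect two bearing lines.

    p1, p2: ship positions (x, y) in meters; theta1, theta2: bearings in
    radians measured from the +x (trackline) direction. Parallel bearings
    have no crossing, and np.linalg.solve will raise in that case.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # p1 + t*d1 == p2 + s*d2  ->  solve the 2x2 system for (t, s)
    A = np.column_stack([d1, -d2])
    b = np.asarray(p2, float) - np.asarray(p1, float)
    t, _ = np.linalg.solve(A, b)
    return np.asarray(p1, float) + t * d1

# two bearings taken 100 m apart along the trackline
fix = cross_fix((0, 0), np.radians(45), (100, 0), np.radians(135))
print(np.round(fix, 1))   # the animal sits where the two bearing lines cross
```

In practice you have many noisy bearings, so you look for the region where most of the lines pile up rather than a single clean intersection, but the geometry is the same.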

At this point you might be wondering, Michelle, what assumptions are you making when you do these calculations? So here they are:


  • The array is horizontal
  • The animals are calling at the surface
  • The animals are staying in approximately the same location for the duration of the measurements

So there you have it. That’s how we locate animals underwater with a towed, 2-element hydrophone array.


* Yin, one of the amazing visual observers on the cruise, thinks “Left-right ambiguity” would be a great name for a band, and I agree.

** assumptions are made to be broken

Whales, dolphins, and seabirds, oh my!


Hi all! I’m on a ship called the R/V Ocean Starr, and we’re out on the 2014 California Current Cetacean and Ecosystem Assessment Survey, or CalCurCEAS for short. We are collecting data that will allow NOAA scientists to estimate the abundance of whales and dolphins off the west coast of the U.S. They’ll also be able to use these data to better understand what affects the distribution of marine mammals – where do they like to hang out, and why? We’re gathering these data using two primary methods, visual and acoustic, and we’re also conducting photo-ID and biopsy sampling of species of special interest.

In addition to the marine mammal portion of the survey, we’re looking at the pelagic ocean ecosystem in the region.  This means taking measurements of various properties of the water, doing net tows, and using acoustic backscatter to look at biomass in the upper part of the water column. There are also two observers onboard to survey seabirds.

I’m out here with an awesome science team and a great crew. There are two other acousticians besides me: Emily T. Griffiths and Amy Van Cise. Emily has been onboard for the first two legs. She makes sure we stay on track and don’t get into (too much) trouble. Amy is a PhD student at Scripps Institution of Oceanography studying pilot whales, and she’s here for the third leg of the cruise, just like me. The three of us all love ice cream and get along famously.

I have one or two shiny new blog posts that I’m hoping to share soon (with comics! woo!), and I might even have a couple of surprise guest posts! Stay tuned…

Tutorial: Animate your spectrogram!

A couple of people have expressed interest, so here’s a little tutorial describing how I created the spectrogram animation that you can find here:

You should know a few things:

1) I’m not an After Effects expert, you guys!

2) There are a lot of ways to accomplish what I’m about to explain; this is just one way, and it may not even be the best way.

3) If anything isn’t clear, then please let me know so I can try to fix it!

4) Many thanks to Christian Petropolis, who showed me how to do this in the first place!

What you’ll need

First, let’s start with ingredients, the things you’re going to need right from the get-go to be able to start this tutorial:

– A spectrogram! Here’s mine. Nothing special – just a standard Matlab-generated spectrogram in png format. Probably a jpeg or tif or even a Photoshop file would be just fine as well. Any image that you can import to After Effects.

– Your audio file! Again, mine is just a wav file generated by Matlab. Make sure it’s the audio that matches your spectrogram!

– Adobe After Effects (sorry – I know it’s expensive – I managed to get a good deal on the Adobe Creative Cloud through my university bookstore; if you’re a student or teacher, you can probably find a way to get it relatively inexpensively)

Let’s get down to business

1) Open After Effects! Select File -> New -> New Project.

2) Select File -> Import -> File…

Then select both the spectrogram image file and the accompanying audio file

3) Select both files from the Project tab and drag them onto the “Composition” icon:



4) A window will open; select “Single Composition”, and set the duration at approximately the length of your audio file (don’t worry, you can change this later if it’s not right)



5) In the Composition tab, right-click and rename the layers if you like:



6) Select the image layer (in my case, it’s called “Spectro”). Use Shortcut keys (cmd-C/ctrl-C and cmd-V/ctrl-V) or the Edit pull-down menu to make two duplicates.

7) Rename the two new layers to “GlowLayer” and “DarkLayer” (or whatever you like, that’s what I’m going to call them). Make sure “GlowLayer” is on top.


8) Click on the “GlowLayer” to make it active, select the “rectangle tool” icon, then draw a thin rectangle on the spectrogram. This will be the glowing bar that slides across as you play the sound.



9) With the “GlowLayer” still selected, Click the pulldown menu Effect -> Stylize -> Glow. The Glow effect controls should pop up in their own tab. Here’s how I set it up, but do whatever works for your image. The only things I changed were Glow Threshold (22%) and Glow Intensity (1.8).


10) Select Layer (pulldown) -> New -> Shape Layer, and drag that new layer to just below “DarkLayer”. I renamed my new Shape layer “darkRectangle”.


11) With “darkRectangle” layer selected, click the “Rectangle tool” button. Draw a Rectangle on your spectrogram like this (see image). Click the “Fill Options” and choose “solid color” and ~60% opacity, then click  “Fill color”, and choose a medium gray.


12) Select “DarkLayer”, choose the “Rectangle tool”, and draw another rectangle the same way as in step 11. The difference is that this time, because of the layer type, it’s going to make a mask, not a rectangle.

13) Click the check box to “Invert mask”. You should see the dark colored rectangle show through at this point.


14) Now for the fun part. Make sure the “scrubber” is at time = zero. The scrubber is the little yellow triangle on the timeline that lets you move forward and backward in time.

15) Click the triangle icon beside the Mask layer under “DarkLayer” to see more options.

16) Click the little clock icon beside “Mask Path”, and you should see your first “keyframe” show up (yellow diamond). If you’re unfamiliar with After Effects, you should know that it’s *all about keyframes*. Seriously.



17)  Drag your keyframe scrubber to the end of your audio segment.  The audio segment is shown by a green bar on my timeline.

18) Click on the mask layer to activate it, if it’s not already active (my mask layer is called “Mask 1” in the image above). Click the right arrow on your keyboard until the left side of the mask rectangle is lined up with the right edge of the spectrogram image. (hint: use shift+arrow to move in larger increments). Now if you move your scrubber back and forth you should see the “shadow” move across your spectrogram.


19) Now we’re going to animate our “GlowLayer” in much the same way. Select the GlowLayer, and again open up “Mask 1” options.

20) Move the scrubber to zero time again. Add a Mask Path keyframe.

21) Move the scrubber to the end of the audio segment.

22) Make sure “Mask 1” under the “GlowLayer” is selected, and move the glow bar over using the right arrow key. Stop when the right side of the rectangle mask is lined up with the right edge of the spectrogram.

23) Congratulations! You’ve animated your spectrogram. Now if you’d like to preview it, you can use the RAM preview. Just hit the little triangle on the far right of the preview panel, and there you go. It’ll run through it quickly once to render the video and audio, then you can behold your masterpiece. A little warning: if your computer is a bit slow, this preview can be jumpy. But once you export it, it should be just fine.


24) Exporting! File -> Export -> Add to Render Queue. Choose your render settings and output location, then click the “Render” button. (The defaults are probably a decent place to start if you’re not sure.)
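If you don’t have After Effects, a rough version of the same sliding-bar effect can be scripted in Python with matplotlib’s animation module. This is entirely my own sketch, not part of the tutorial above (and you’d still pair the saved video with your audio in an editor):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                      # render off-screen (no display needed)
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

def bar_time(frame, n_frames, duration):
    """Time (s) at which the sweeping bar sits for a given frame."""
    return frame / max(n_frames - 1, 1) * duration

def animate_spectrogram(signal, fs, n_frames=100):
    """Draw a spectrogram and sweep a vertical bar across it, start to end."""
    duration = len(signal) / fs
    fig, ax = plt.subplots()
    ax.specgram(signal, Fs=fs)             # matplotlib's built-in spectrogram
    bar = ax.axvline(0.0, color="yellow", lw=3, alpha=0.7)

    def update(frame):
        x = bar_time(frame, n_frames, duration)
        bar.set_xdata([x, x])
        return (bar,)

    return FuncAnimation(fig, update, frames=n_frames, blit=True)

# toy example: a one-second chirp sampled at 8 kHz
fs = 8000
t = np.linspace(0, 1, fs, endpoint=False)
anim = animate_spectrogram(np.sin(2 * np.pi * (200 + 800 * t) * t), fs)
# anim.save("spectrogram.mp4", fps=25)    # uncomment if ffmpeg is installed
```

It won’t have the nice glow effect, but it gets the basic “where are we in the sound” bar across the image.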

Sea level rise in Pohnpei

I finally posted my latest video a couple of weeks ago. You can check it out here:

Sea Level Rise in Pohnpei from Michelle Wray on Vimeo.

This one was a lot of fun to make, and I learned a lot along the way. Unfortunately, there’s really not one single place to learn all of this, so I ended up watching tons of YouTube tutorials. At some point, I’d love to compile my own little set of tutorials to show others how to do this, but for now, here’s the overview:

1) Research the topic (interviews and lots of web searches mostly)

2) Storyboard: what order makes sense? What should I cover? This is a very rough draft and consists of words and some simple sketches

3) Write the script: Each paragraph is one, or maybe two “scenes”. My scripts have been about 1.5-2 pages long on MS Word (12 pt font)

4) Record script audio (I use a Blue Yeti mic)

5) Sketch out scenes in more detail, like this:

Pohnpei sketch

6) Adobe Illustrator: Draw things out using Illustrator + Wacom tablet (or mouse/trackpad). Save each section of the image as a separate layer. If I want to do the “drawing-in” effect later, I reverse the order of elements in each layer.

7) Adobe After Effects: Import the Illustrator file, along with the audio. Use the trim paths effect to make it look like it’s being drawn in. Sync audio and animation using keyframes.

8) Adobe Premiere Pro: Import scenes created in steps 6 & 7, put them in the right order, and add a soundtrack on top of it all.

That’s the gist of it anyway. Hope to give more details on specific sections at some point. For now, feel free to ask questions in the comments below.

Spectrogram animation

As some of you may know, I’ve been learning about video editing lately. For the most part, it revolves around the online introductory oceanography class I’m helping develop, but there are a couple of other things I’m playing around with too.

My family is visiting from Canada, and my brother-in-law, Christian, has been showing me some cool tricks in Adobe After Effects. I often use a spectrogram to help describe how we “look at” sounds. It is pretty straightforward if you’ve seen them before but can be a bit strange the first time you see one.

In the past, I’ve shown a spectrogram and then included a separate sound file to show how they work, describing how they fit together in words. But I’ve finally figured out how to pull it all together in After Effects, and it’s surprisingly easy.

This video is pretty simple and is not much more than a couple of masks and a glow effect, with (of course) strategically placed keyframes. Here it is – enjoy!

Argo Floats!!

If you have ever sat there and wondered to yourself, “What are Argo Floats? And why are they the coolest things ever?” – Look no further, my friend. I have just the (science-comics) video for you.  It’s based on an interview I did a while back with Rick Rupan, who runs the Argo Floats program at the University of Washington.

This format is a bit experimental for me, but I LOVE this style, and wanted to give it a whirl. I sort of just jumped in, and because I was (am!) so very clueless, I asked basically everyone I know (and some people who I don’t know) to give me feedback. The suggestions started rolling in, and I tried to incorporate as many as I could.

I really hope you enjoy this, and if I’m lucky, you’ll even learn something along the way 🙂 I do plan to make more of these, so if you have any feedback for me after watching this, it would be a great help.

Okay, enough babbling. Go ahead and watch the video already!




Doodle videos, heck yeah!

My latest project is figuring out how to make a doodle video. Or a speed drawing animation. I don’t actually know what they’re called, but you’ll get the idea if you check out the videos below. I started off attempting to use my iPhone. Setting the iPhone up to record was… awkward:

completely professional.

I did my video this way, posted it to YouTube, and got a bunch of feedback from my awesome Facebook and Twitter friends. I tried to take that feedback into account as best I could and eventually moved on to my Nikon D5100 (dSLR).  I even set it up on an actual tripod (Thanks, Andrew Shao!).


Recording on my SLR was pretty easy too, once I got it adjusted. I did have some issues with focusing – not that it was hard, I just kept forgetting to check. Oops. Editing for the SLR video was done in iMovie, and the audio was recorded in Hindenburg Journalist. Here’s a 30-second sample from this experiment. I hope you agree that it has improved, at least marginally, from my first attempt!

Next, I decided to try doing the video using Camtasia. Luckily they have a free trial, so I gave that a whirl. Camtasia is for screen recording and video editing, and was really easy to jump into and start using right away. I did the drawings in Adobe Photoshop using my Wacom Intuos4 tablet/stylus. Here’s the 30-second sample using the “all-digital” method:

Dear reader, if you have been patient and kind enough to actually read all this, and more importantly, watch the two videos, please feel free to give me feedback in the comments (below) or in email or on Twitter or Facebook!

The best thing about Sharknado…

According to John, the best thing about Sharknado is that they drive around in a Landcruiser, quite a lot like this one:

John’s 1987 Toyota Landcruiser

If you know John at all, you know that he LOVES that truck, and spends large chunks of his free time fixing it, taking it apart, restoring it, replacing parts, etc.

Here’s a sample of his Sharknado commentary:


The commentary continued even after the Landcruiser was obliterated, if only to berate all the other vehicles that turned up in the movie. Especially Hummers.


All in all, despite the filmmaker’s poor decision to destroy the world’s best truck, Sharknado was as epic as we ever could have hoped for in our wildest dreams. We encourage you all to enjoy it this holiday season.


Harmful algal bloom forecasting: Chasing toxin producers

It started with “drunk” pelicans. While studying aquatic sciences at the University of California, Santa Barbara, Liz Tobin noticed that there was something wrong with the fish-gulping birds.  They exhibited unusual behavior, getting hit by cars and sometimes simply falling out of the sky. The pelicans suffered from seizures induced by domoic acid poisoning, a particularly dangerous affliction when it hits mid-flight.  Biologists traced the source of the poisoning to the rapid springtime growth (bloom) of Pseudo-nitzschia, a tiny, single-celled organism distantly related to the giant kelps thriving in nearby coastal waters.  Small fish and shellfish consume these algae, and although they are not harmed, they store the toxins produced by the algal cells and pass them on to their predators, including seabirds and marine mammals.

Scientists refer to these incidents as harmful algal blooms, or HABs.  They are not restricted to the waters of southern California, though.  And Pseudo-nitzschia cells are not the only culprits.  There are a number of other algal species that can cause trouble for local marine ecosystems. HABs occur in coastal waters around the globe, with larger and more frequent blooms being linked to a warming sea surface [1] and to increased nutrient runoff from land [2], a common by-product of animal and plant agriculture.  Local economies suffer from tourism losses, and local residents can’t harvest shellfish from their beaches.

After wrapping up a bachelor’s degree at UCSB, Liz moved on to graduate school at the University of Washington so that she could study harmful algal species in Puget Sound.  One of the species she’s interested in is Alexandrium catenella, another single-celled marine alga that produces a suite of deadly neurotoxins causing paralytic shellfish poisoning, or PSP.  Alexandrium blooms can become so intense that they give the water a reddish-brown tinge, often called a “red tide” (although they don’t always cause water discoloration).


If you eat a shellfish that has accumulated Alexandrium toxins, a series of unpleasant events unfolds. First, your fingers and toes will tingle and your lips will feel numb.  Next, you may become nauseous and unsteady on your feet.  If the toxins hit you at full force, your diaphragm will become paralyzed, which means that you are unable to pull oxygen into your lungs.  Without prolonged artificial respiration, a severe case of PSP can lead to death in a matter of hours.

In recent years, poisoning cases in Washington State have been rare, but they do happen. The Washington Department of Health aims to prevent outbreaks by regularly testing shellfish along Puget Sound beaches.  Commercially harvested shellfish are tested exhaustively before hitting the markets. Unfortunately, it’s tough to predict the severity, location, and timing of an Alexandrium bloom. That’s where Liz’s work fits in: she’d like to improve our ability to forecast where blooms will happen so that public health officials, fisheries managers, and the shellfish industry have more time to react.


Alexandrium cells have a comfort zone, she explains.  They grow and divide furiously when the days are long and the surface waters are warm.  Once the spring bloom ramps up, zooplankton, fish, and shellfish feed on the algae, eventually decimating Alexandrium populations by early to mid summer.  Fewer Alexandrium cells mean less food for their predators, which begin to die off or find food elsewhere.  By late August, with newly lowered predation pressure and plenty of sunlight, a second bloom typically occurs.  As the season draws to a close, Alexandrium cells feel the effects of shorter days and less sunlight.  They struggle to grow and divide, and at some point cut their losses and transit down to the sediment to wait out the winter.  Settled into the muddy seafloor, they shift to survival mode, also known as the “cyst” stage.  The exact conditions that send them into their cyst stage are poorly understood, as are the details of how they wake up and make their way to the surface come springtime.

Liz is particularly interested in how the Alexandrium cells swim, even though they are mostly at the whim of the currents.  Tides rush in and out twice each day, racing through narrows and washing languidly over mud flats.  Rivers pour into Puget Sound with irregular pulses of freshwater from rain and springtime snow melt. Alexandrium algal cells propel themselves through the water using a small whip-like appendage called a flagellum, but because of their tiny size – only about a third of the width of a single human hair – they can’t overcome even the weakest currents. However, if Liz knows their vertical swimming capabilities, she can better understand how quickly Alexandrium cells migrate from the seafloor to the surface (where blooms occur) and back again.

A lot of researchers, Liz included, study Alexandrium in the lab.  They blend solutions of water and various chemicals to imitate seawater, and observe how the cells behave and react to a variety of environmental triggers.  One of the labs she works in is a tiny bunker-like space on the second floor of the Ocean Teaching Building.  Two long black tables running along either side of the room are covered by a hodge-podge of beakers, elaborate video recording setups, and circuit boards sprouting nests of red and white wires.  At the back of the room, a giant insulated door opens to a fully programmable walk-in refrigerator, where algal cells can be subjected to controlled shifts in temperature and light.

Even at their best, however, lab experiments can’t possibly recreate the many complexities that exist in the natural world.  Liz knew that if she could monitor the emergence of the Alexandrium cells from the sediments she could combine those observations with detailed fluid flow models to better predict where the cells would eventually concentrate.  The only problem was that there were no “off-the-shelf” instruments that were capable of doing what she wanted.  So she set out to build her own.


When she starts describing the instrument she’s working on, I interrupt.  Pushing my notebook and pen across the table, I ask her to sketch her design for me.  She draws a tube a few inches in diameter, sitting upright on the seabed.  On one side of the tube a camera in waterproof housing points into a tiny window to capture any Alexandrium cells swimming past.  The end of a plankton trap sits on top, ready to catch any upward-swimming cells so they can be later compared with the video. She pauses her drawing to look up at me.  “It’s just like what we would do in the lab,” she says, “but instead of a camera looking at a tank, I have a camera looking into a chamber on the seafloor”.

As of mid-July 2013, Liz’s prototype of her seafloor camera was nearly ready to be deployed for testing.  If it works, she’ll try to capture Alexandrium cells emerging from the seafloor just prior to the late-summer bloom.  Her vision for the future, she tells me, is a network of these instruments deployed across Puget Sound, monitoring cyst emergence in real time.  Instead of only knowing about blooms once they are in full swing, they could be predicted days or weeks in advance, avoiding unnecessary closures and potentially saving lives.

[1] Climate change and harmful algal blooms, NCCOS

[2] Anderson, Donald M., Patricia M. Glibert, and Joann M. Burkholder. “Harmful algal blooms and eutrophication: nutrient sources, composition, and consequences.” Estuaries 25, no. 4 (2002): 704-726.