Wave interference

I’m TAing a marine GIS/ocean mapping course this quarter, and will be teaching a lecture on some aspects of multibeam sonars and the data that you get from them. That’s fun, since I haven’t really thought much about multibeam sonars since I worked at a sonar company about six years ago.

I was trying to figure out how to explain why a longer array gives you a narrower beam. It’s all about interference patterns, right? So I wrote a little script in Python (you can access the script directly on my Github page).

Let’s say you have two elements spaced a half wavelength apart. You get something like this, with one main lobe:

[Figure: interference pattern for two elements spaced half a wavelength apart (close-up view)]

Cool – there’s just one main lobe of higher amplitude. But then if you pull those elements just a bit further apart – I’m showing a 5 wavelength separation here – you can see a completely different pattern:

[Figure: interference pattern for two elements spaced five wavelengths apart (close-up view)]

So: the wider separation made more beams, and those beams were narrower. Interesting… What if we wanted to have just one main beam that we could maybe steer around? (Ahem. Maybe a little like a multibeam sonar??) The next picture shows a single beam produced by a line of 20 elements, spaced half a wavelength apart. This time we’re zooming out a bit, showing a 30 m x 30 m area. The simulation also uses a 200 kHz signal, which is a pretty common frequency for a shallow water multibeam sonar.

[Figure: beam pattern example for a 20-element array at half-wavelength spacing, 200 kHz]
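The actual script is on my Github page, but here’s a minimal sketch of the kind of simulation I mean (the grid size and geometry numbers are just examples): sum the field from each element over a grid of points and look at the amplitude.

[sourcecode language="python"]
import numpy as np
import matplotlib.pyplot as plt

c = 1500.0              # sound speed in water (m/s)
f = 200e3               # 200 kHz signal
lam = c / f             # wavelength (m)

n_elements = 20
spacing = lam / 2.0     # half-wavelength element spacing

# A 30 m x 30 m grid of field points, with the array at the top center
x, y = np.meshgrid(np.arange(-15, 15, 0.05), np.arange(0.1, 30, 0.05))

# Sum the (monochromatic) field from each element
field = np.zeros(x.shape, dtype=complex)
for i in range(n_elements):
    xi = (i - (n_elements - 1) / 2.0) * spacing   # element position
    r = np.sqrt((x - xi) ** 2 + y ** 2)
    field += np.exp(2j * np.pi * r / lam) / r     # phase plus spherical spreading

plt.imshow(np.abs(field), extent=[-15, 15, 30, 0.1])
plt.xlabel('Across-track (m)')
plt.ylabel('Range (m)')
plt.show()
[/sourcecode]

With 20 elements at half-wavelength spacing you get the single narrow main lobe; set n_elements to 2 and play with the spacing to reproduce the two-element patterns above.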

Interpolation using Scipy/Numpy

As part of a little script I’m writing, I need to do some simple linear interpolation. The Matlab equivalent of what I’m doing is called ‘interp1’. So far I’ve come across two ways to do 1-D linear interpolation. One is ‘interp’ in NumPy; the other is ‘interp1d’ in scipy.interpolate. I’m not completely sure of the difference. Is one faster? Is one older? I’ve googled around a bit but still haven’t found a clear answer. For now, I’ve implemented ‘interp1d’, which is the less Matlab-like of the two: you first define an interpolator object from your x and y vectors, then call that object with your new x-values to generate the new y-values.

And here is an example, chopped out of my code:

[sourcecode language="python"]
from numpy import r_
from scipy.interpolate import interp1d

# depth1, ssp1, nadirdepth, and DepthIncrement are defined earlier in my script
depthvector1 = r_[0:nadirdepth:DepthIncrement]  # depths to evaluate, surface to nadir
interpfun1 = interp1d(depth1, ssp1)    # build interpolator from my x and y vectors
sspvector1 = interpfun1(depthvector1)  # find the new y's for this x
[/sourcecode]
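For comparison, NumPy’s ‘interp’ does the same thing in a single call, which is the form that looks most like Matlab’s interp1 (note that it expects the x-coordinates to be increasing):

[sourcecode language="python"]
from numpy import interp

# Same result in one call, assuming depth1 is monotonically increasing
sspvector1 = interp(depthvector1, depth1, ssp1)
[/sourcecode]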

Reverse axis direction in Pylab

I have been trying to remember how to do this FOREVER! I’m trying to plot a sound speed profile with depth along the y-axis and sound speed along the x-axis. But if I don’t reverse the y-axis, depth increases upward, which isn’t very intuitive. In Matlab, it’s pretty easy. But how do you do it in Python/Pylab? Here’s what the plot looked like before reversing the y-axis.

Here’s the code I used to reverse the axis:

[sourcecode language="python"]
from pylab import *

### SKIPPING SOME CODE HERE

# Plot sound speed profiles (depth on the y-axis)
plot(sspvector1, depthvector1)
plot(sspvector2, depthvector2, 'r')

# Reverse the y-axis so that depth increases downward
ax = gca()
ax.set_ylim(ax.get_ylim()[::-1])

title('Sound speed profiles')
xlabel('Sound speed (m/s)')
ylabel('Depth (m)')
grid()
show()
[/sourcecode]

And here’s the result:
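One more note on this: depending on your matplotlib version, the axes object may also have an invert_yaxis() method that does the same thing in one call:

[sourcecode language="python"]
ax = gca()
ax.invert_yaxis()  # equivalent to the set_ylim trick, if your version has it
[/sourcecode]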

Tkinter and GUI building

I’m back to ray tracing again. And since I’m trying to re-create and expand on the Matlab version of this code, I am finding myself having to learn lots of little details – and big details. One of these is GUI building. The standard GUI toolkit that comes with Python is Tkinter. I’ve used it a bit before, but now I’m having to think about doing more complicated things with it. The Matlab version of my code used an interactive GUI. In Matlab, there’s a utility called GUIDE that gives you a GUI to build a GUI 🙂 You can put buttons, sliders, graphs, etc., into your GUI window, and it will automatically generate a skeleton script that you can fill with commands telling Matlab what to do. So now I’m wondering if there’s something I can use with Tkinter that does a similar thing. I’m sure there is, and I just need to find it…
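In the meantime, hand-coding a small Tkinter window isn’t too bad. Here’s a minimal sketch (Python 2-style import, and the button and callback are just placeholders):

[sourcecode language="python"]
import Tkinter as tk   # renamed to 'tkinter' (lowercase) in Python 3

def on_run():
    # Placeholder callback, like the stubs GUIDE generates for you in Matlab
    print 'Run button pressed'

root = tk.Tk()
root.title('Ray tracing')
button = tk.Button(root, text='Run', command=on_run)
button.pack(padx=20, pady=20)
root.mainloop()
[/sourcecode]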

Multibeam files and the Unix file command

My friend Kurt is trying to compile a collection of multibeam sonar file formats to eventually submit to the author of the Unix file command, and I’m trying to help him get the s7k data format added. He pointed me to this post, where he talks about the formats that he has, and the ones he’s looking for. He also gave me this link to an LDEO page where sonar data format documents for various sonars are made available. Cool! All of this reminds me of how I wanted to install GMT and MB-System on my computer… I wonder if my little netbook would be able to handle it.

Installing BitString

I’m installing BitString right now. It seems like what I need. The first sentence on the BitString Google Code website is: “bitstring is a pure Python module designed to help make the creation, manipulation and analysis of binary data as simple and natural as possible.” Perfect. The first thing I did was download the manual (pdf) and the zipped installation file for my version of Python (2.6).

Unzipping zipped files in Ubuntu is really easy – just right-click on the file in the File Browser and choose to unzip it using Archive Manager. But I wanted to know how to do it using the command line. Here is an example:

[sourcecode language="bash"]
unzip bitstring-1.1.3.zip
[/sourcecode]

I installed the contents of the file into the appropriate Python module locations on my computer by going to the directory of the unzipped files and typing the following at the command line:

[sourcecode language="bash"]
sudo python setup.py install
[/sourcecode]

Now I’m ready to dig in! Luckily, a friend of mine at work helped me out with some pointers on how to read in the s7k binary file. He explained the commands he’d used when creating his Matlab scripts, and how the structure was defined. He’s not a Python user, so he couldn’t give me any specific Python tips, but it was enough for me to get started. So now I have to figure out the BitString part. The reason I’m not going with struct or array is that they seem to be meant to work with whole bytes, and are clunky when it comes to parsing out individual bits. BitString is designed with more flexibility.

Fortunately BitString seems pretty straightforward. I had a very quick look through the manual, and if I understand correctly, I will start by converting the s7k file to a BitString object, then just read through it bit by bit (or byte by byte).
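Something like this is what I have in mind. (This is only a sketch: I haven’t verified that the filename keyword and the little-endian read tokens behave exactly this way in version 1.1.3, and the field names and sizes below are placeholders rather than the real record layout from the DFD.)

[sourcecode language="python"]
from bitstring import BitString

# Sketch only: the field names and sizes below are placeholders, not the
# actual s7k record structure -- the real layout comes from the DFD.
s = BitString(filename='mydata.s7k')

record_type = s.read('uintle:16')   # hypothetical little-endian field
record_size = s.read('uintle:32')   # hypothetical field
print record_type, record_size
[/sourcecode]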

New project: use Python to read multibeam data

Am I getting in over my head before I’ve learned the basics?  Most likely.  But I find that if I set my goals high, I learn lots of unexpected things along the way.  This latest goal is probably not going to be completed from start to finish in any kind of linear fashion, and I will probably drop it and come back to it several times before its completion.

I’m hoping to figure out how to read the s7k multibeam data format. Not a simple challenge for someone like me who can barely piece together a print statement. This isn’t like reading an ASCII text file. I started reading the Data Format Definition document (DFD), and came upon some pretty daunting tasks right away, including reading headers, different number formats, and lots of bits-and-bytes stuff. Scary, but sort of exciting. When I took a computer science class way back in undergrad, I remember learning all the really basic stuff, but since I didn’t have a real application for it, it was sort of meaningless to me and therefore did not stick in my brain. But now it’s fun! (I’m a nerd.) Hopefully it’ll stick this time 🙂

These links might be helpful:

Reading and Writing data using Python’s input and output functionality

Understanding Big and Little Endian

The Learning Python book says (p. 901): “If you are processing image files, packed data created by other programs whose content you must extract, or some device data streams, chances are good that you will want to deal with it using bytes and binary-mode files.”
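For example, the struct module is where endianness shows up most directly in Python. Here’s a tiny made-up illustration of the same four bytes read both ways:

[sourcecode language="python"]
import struct

data = '\x01\x00\x00\x00'            # four raw bytes (a Python 2 str)
print struct.unpack('<I', data)[0]   # little-endian unsigned int: 1
print struct.unpack('>I', data)[0]   # big-endian unsigned int: 16777216
[/sourcecode]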

Smiling seafloor

One of the most frustrating things that I run into when processing multibeam echosounder data is a smiling seafloor. When the sound speed profile is incorrect, the computed seafloor becomes curved either up or down. This is because the effect of an incorrect profile is more pronounced at the longer ranges (outer beams) than at nadir, so the data are either smiling or frowning. I should also note that the smiling/frowning can also arise from an incorrect beam steering angle, which can be caused by having the wrong sound speed reading at the sonar head.
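Here’s a toy model of the steering-angle version of the error, just to show the shape of it (flat seafloor, made-up numbers): the beams are steered assuming one sound speed at the head while the true value is different, and the resulting depth error grows toward the outer beams.

[sourcecode language="python"]
from numpy import arcsin, array, cos, radians, sin

c_true = 1500.0      # actual sound speed at the sonar head (m/s)
c_assumed = 1490.0   # (wrong) value used for beam steering (m/s)
depth = 50.0         # true depth of a flat seafloor (m)

for deg in array([0, 15, 30, 45, 60]):
    theta_steer = radians(deg)
    # Snell's law: the phase delays actually steer the beam to a
    # slightly different angle than intended
    theta_actual = arcsin((c_true / c_assumed) * sin(theta_steer))
    slant_range = depth / cos(theta_actual)          # range actually measured
    depth_computed = slant_range * cos(theta_steer)  # depth we'd compute
    print deg, round(depth_computed - depth, 2)      # error grows off-nadir
[/sourcecode]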

I spend far too much time messing around with sound speed profiles during post-processing. In the ocean, the sound speed profile can change rapidly – both spatially and temporally. If the profile changes while you’re surveying, your data will be wrong. An interesting paper published by some of my favorite people can be found here:

IHO paper: Estimation of Sounding Uncertainty from Measurements of Water Mass Variability

So a lot of the time I’ll end up with a profile that’s not quite right and have to tweak it in my processing software, which takes a really long time. I read an interesting paper recently by some folks at the Acoustic Remote Sensing Group at Delft University of Technology:

Underwater Acoustic Measurements Conference 2009: An efficient method for reducing the sound speed induced errors in multibeam echosounder bathymetric measurements

This paper discusses using the overlapping area between parallel survey lines to obtain an estimate of the sound speed throughout the water column. The authors assume that in shallow water the sound speed profile can be approximated by a single value. The differences between overlapping lines are minimized using a Gauss-Newton method.

I’d like to look into this in more detail, but also maybe look at approaching the problem in a slightly different way: what if we looked at the overlap between crosslines instead of parallel lines? The sound speed smiley error doesn’t show up in the along-track direction. So if you did a beam-by-beam analysis of one line versus the perpendicular line, you would see the sound speed profile pop out. If you averaged the difference in depth between each of the beams and an interpolated position on the crossline, you would end up with an average representation of the smile… Okay, I might need to think about this a bit more. But I’m imagining somehow running your entire survey, then running a crossline across the whole thing, and computing profiles over the entire area. And then maybe doing a whole checkerboard-grid-type survey and doing a full-blown analysis of spatial and temporal sound speed variation. I may be getting ahead of myself a bit. But I think it’s worth pondering…
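Just to pin the crossline idea down a bit, here’s a rough sketch of the beam-by-beam comparison (all the names are hypothetical, and I’m assuming the soundings have already been reduced to across-track position and depth):

[sourcecode language="python"]
from numpy import interp

def smile_profile(beam_pos, beam_depth, cross_pos, cross_depth):
    """Depth difference of each beam against the crossline, interpolated
    at the beam's across-track position. Averaging this over many pings
    would give the average shape of the smile."""
    cross_at_beams = interp(beam_pos, cross_pos, cross_depth)
    return beam_depth - cross_at_beams
[/sourcecode]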