Back to work

I guess all vacations have to come to an end.  Back to work!  Back to database stuff – which is pretty fun.  Also planning out what I need to do in the next two months before the DCL workshop – less fun, but very important.

Building a Datascope database

Exciting times, my friends!  That’s right.  Hold on to your hats, it’s database building time. (Mom, you don’t have to read this one, I won’t be offended).

I have recently downloaded a pretty big dataset from Neptune Canada ocean bottom seismometers.  After a bit of discussion with (and guidance from) my advisor and with Kate S., I’ve decided to bite the bullet and just put the data into a Datascope database.  With the help of the internets, and a few man pages (for non-unix readers – man pages are basically help files), I’ve managed to hack together a sort of test database.  And since I forget such things easily, I am going to try to document exactly what I’ve done so far.

[I should probably add in at this point that Datascope is a relational database system that is part of Antelope, and these instructions assume you have already installed Antelope and Datascope, and are just interested in starting up a brand new database from some fresh data.]

As usual, when I’m doing something that is WAY over my head, I like to make a little test directory:  it’s called dbsandbox.  In a Terminal window, I navigated to my new sandbox directory, and typed the command:

dbbuild dbtest

Where dbtest is the name of my database. This brings up a GUI display – btw, you can also just create a configuration file and run a batch command; I just like GUIs, particularly if I have no idea what I’m doing.

I used the IRIS entry for my first station to get the basic information. I had to guess at a few fields (the serial numbers and the datalogger type), but the form took everything without crashing. I got a message telling me which records were added, and now I had an empty database! Woo hoo!

The next step was actually adding data.  This is done using the following command:

miniseed2db -v ../pathtodata/* dbtest

Where dbtest is still the name of my database, pathtodata is (duh) the path to my data, and the -v is to request verbose output (tell me what’s happening, please!).

So there it is. I know I made a mistake somewhere because I added all channels together. But I can sort that out later – details, right? To test my shiny new database:

dbe dbtest

and

dbpick dbtest

And hey, presto! There it is. Now I just need to figure out an efficient way to add all of that miniseed data, which is organized into hundreds of folders by year and Julian day.

** Update:  Will just gave me a hint – if I use miniseed2db /path/* where path is the upper-most folder, it will look into all subfolders and grab any miniseed files. It seems to be working!
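For the record, a folder-by-folder loop would have worked too. Here’s a sketch, assuming a hypothetical layout like data/&lt;year&gt;/&lt;jday&gt;/ (your actual folder names will differ) – the echo makes it a dry run that just prints each command, so you can check the paths before letting miniseed2db loose:

```shell
# Dry run: print one miniseed2db command per year/Julian-day folder.
# Hypothetical layout: data/<year>/<jday>/<miniseed files>.
# Delete the `echo` to actually load the data.
for dir in data/*/*/; do
  [ -d "$dir" ] || continue   # skip if the glob matched nothing
  echo miniseed2db -v "${dir}"* dbtest
done
```

The shell expands the trailing * in each folder, so miniseed2db gets an explicit list of files rather than a directory name.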

SQLite configuration file

I just created a simple configuration file for SQLite. It’s in my home directory, and is called .sqliterc. It contains the following lines:
[sourcecode language=”bash”]
# SQLite configuration file (~/.sqliterc)
# The sqlite3 shell reads this at startup. Lines starting with '#'
# are comments; trailing comments after a dot-command can confuse
# its argument parsing, so each comment gets its own line.

# repeat every command
.echo ON
# print column names
.header ON
# change the default separator to tab
.separator "\t"
# print the word "Null" rather than leaving empty fields
.nullvalue "Null"
[/sourcecode]

Now, every time I start SQLite, these commands will be run.
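A quick way to sanity-check the settings is to point the shell at an rc file explicitly with -init (a sketch, assuming the sqlite3 command-line shell is installed; demo.sqliterc is just a throwaway file name):

```shell
# Write a throwaway rc file and run one query against an in-memory
# database to confirm the header and null settings take effect.
command -v sqlite3 >/dev/null 2>&1 || { echo "sqlite3 not installed"; exit 0; }
printf '.header ON\n.separator "\\t"\n.nullvalue "Null"\n' > demo.sqliterc
sqlite3 -init demo.sqliterc :memory: 'SELECT 1 AS a, NULL AS b;'
```

You should see a column-name line followed by the row, with the NULL printed as the word Null instead of an empty field.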

* Thanks to Kurt, who showed me how to do this!