I never did manage to resuscitate my old blog, but Google does cache some webpages, so I have access to my old posts that way for the time being. I’m not too worried about most of them, but a few were pretty handy. And since I’m interested in learning about inverse theory, this seems appropriate. So before it disappears forever, I’ll reproduce it here…

This post is actually a practice in least squares line fitting, linear algebra, Python, and Numpy, all mashed into one. In preparation for the possibility of learning geophysical inverse theory at some point in the future, I have backed up, waaay back to the beginning: fitting a line to a set of points when there are more points than are needed to uniquely define the line.

I’m doing this example in 2D, and I’m basically following Gilbert Strang’s Linear Algebra, p 206-211 (section on least squares approximations).

I started with 3 points:

`(0,6)`

`(1,0)`

`(2,0)`

I saved these in a file called “points.txt” (just for flexibility).

I started my Python script (lspractice.py) in the normal way, importing what I thought I’d need:

```python
#!/usr/bin/python
# Fitting a line to a set of points
# Following an example from Linear Algebra by Gilbert Strang, p 206-211
# The points are in a file called points.txt
import numpy as np
from numpy import *
from pylab import *
```

(Don’t forget to make the script executable using `chmod +x lspractice.py`.)

Next, I opened the text file and filled in my x and y variables:

```python
# Open text file (comma-delimited x/y coords)
xi = []
yi = []
with open('points.txt', 'r') as f:
    for line in f:
        fields = line.split(',')
        xi.append(float(fields[0]))
        yi.append(float(fields[1]))
```

Next – setting up the model:

```python
# Fill these points into the equation of a line (the functional model):
# y = C + Mt
# F1: C + M(0) = 6
# F2: C + M(1) = 0
# F3: C + M(2) = 0
# The unknowns are C and M. Take the partial derivative of each function
# with respect to C and M; these results populate the design matrix.
dFdM = np.array(xi)
dFdC = np.tile(1, np.shape(dFdM))
A = np.array([dFdC, dFdM])
At = np.transpose(A)
b = np.array(yi)
```
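For three points this is small enough to check by hand. Here is a quick sketch of what the normal equations actually look like for F1–F3 (the column order `[C, M]` is my choice, not anything from Strang):

```python
import numpy as np

# Rows are F1, F2, F3; columns are the partials wrt C and M.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

AtA = np.dot(A.T, A)   # normal-equations matrix
Atb = np.dot(A.T, b)   # right-hand side

print(AtA)   # [[3. 3.]
             #  [3. 5.]]
print(Atb)   # [6. 0.]
# So the normal equations read: 3C + 3M = 6 and 3C + 5M = 0
```

Subtracting the first equation from the second gives 2M = -6, so M = -3 and C = 5, which is what the script below should recover.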

And finally, working out a solution for C and M:

```python
# The solution to this type of problem is At*A*xhat = At*b,
# or equivalently xhat = inverse(At*A)*(At*b)
xhat = np.dot(np.linalg.inv(np.dot(A, At)), np.dot(b, At))
```
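As a cross-check, NumPy can also solve the same least squares problem directly with `np.linalg.lstsq`, which avoids forming the inverse explicitly (a sketch; the design matrix here is built with one row per observation, columns `[C, M]`):

```python
import numpy as np

xi = [0.0, 1.0, 2.0]
yi = [6.0, 0.0, 0.0]

# Design matrix: one row per point, columns for C and M (y = C + M*x)
A = np.column_stack([np.ones(len(xi)), xi])

xhat, residuals, rank, sv = np.linalg.lstsq(A, np.array(yi), rcond=None)
print(xhat)   # [ 5. -3.]  i.e. the line y = 5 - 3x
```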

And a little sanity check:

```python
# Now a little graphical check
xtest = np.arange(min(xi) - 3, max(xi) + 5, 1)
ytest = xhat[0] + xhat[1] * xtest
plot(xi, yi, 'o')
plot(xtest, ytest, '.-r')
grid()
title('Fitting a line to 3 points using least squares')
show()
```

So this is what it looks like in the end. The original points are shown as blue dots. The line described by the parameters that minimize the sum of the squared residuals is in red. Yeah, it’s embarrassingly simple, but at least it’s a baseline to go on.

Also worth noting – I have not fully figured out Python/Numpy’s matrix multiplication yet. In order to get it to work, I had to use numpy.dot(B,A) when I wanted to do A*B (I had to reverse the order of the arrays). According to the documentation this should not be the case. So I’m stumped. Maybe I’m just not getting the array dimensions right to begin with. That’s what I get for being a Matlab user I guess 🙂
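My best guess at what happened (a sketch, not a definitive diagnosis): `np.array([dFdb, dFdM])` stacks its arguments as *rows*, so `A` came out 2x3 — one row per unknown — which is the transpose of the conventional design matrix (one row per observation). With `A` transposed, every product has to be written in reverse. If the design matrix is built the conventional way, the textbook order works verbatim:

```python
import numpy as np

xi = [0.0, 1.0, 2.0]
yi = [6.0, 0.0, 0.0]

# np.array([...]) stacks its arguments as ROWS, so transpose to get the
# conventional design matrix: one row per observation, columns [C, M].
A = np.array([np.ones(len(xi)), np.array(xi)]).T   # shape (3, 2)
At = np.transpose(A)
b = np.array(yi)

# Now the textbook formula works as documented: xhat = inv(At*A)*(At*b)
xhat = np.dot(np.linalg.inv(np.dot(At, A)), np.dot(At, b))
print(xhat)   # [ 5. -3.]
```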