A =
[1 -2 -1 0 1]
[0 0 -1 1 1]
[-1 2 0 2 2]
[0 0 1 2 5]
-
Suppose each column is a vector. The column space of A is the span of its column vectors, and a basis for the column space is a set of linearly independent vectors that spans it.
Step 1: Reduce A to reduced row echelon form (for example with the calculator at http://www.math.odu.edu/~bogacki/cgi-bin...).
The reduced row echelon form is:
[1 -2 0 0 2]
[0 0 1 0 1]
[0 0 0 1 2]
[0 0 0 0 0]
For simplicity, we take the columns containing the leading entries (the pivot columns) as the linearly independent columns; these are columns 1, 3 and 4. We then take columns 1, 3 and 4 of the ORIGINAL matrix A as a basis for the column space.
Therefore, a basis for the column space is: { (1,0,-1,0) , (-1,-1,0,1) , (0,1,2,2) }
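If you want to check this step on a computer, here is a minimal Python sketch. Using the sympy library is my own assumption; the online calculator linked above gives the same result.

from sympy import Matrix

# The original 4x5 matrix A
A = Matrix([
    [ 1, -2, -1, 0, 1],
    [ 0,  0, -1, 1, 1],
    [-1,  2,  0, 2, 2],
    [ 0,  0,  1, 2, 5],
])

# rref() returns the reduced row echelon form and the pivot column indices
R, pivots = A.rref()
print(R)          # rows: [1 -2 0 0 2], [0 0 1 0 1], [0 0 0 1 2], [0 0 0 0 0]
print(pivots)     # (0, 2, 3), i.e. columns 1, 3 and 4 counting from 1

# A basis for the column space: the pivot columns of the ORIGINAL matrix A
basis = [A.col(j) for j in pivots]
print(basis)      # (1,0,-1,0), (-1,-1,0,1), (0,1,2,2)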
The next step is to orthogonalise this basis using the Gram-Schmidt process:
Given the basis {b1, b2, b3}
Let u1 = b1
Let u2 = b2 - [(b2•u1)/(u1•u1)]u1
Let u3 = b3 - [(b3•u1)/(u1•u1)]u1 - [(b3•u2)/(u2•u2)]u2
And {u1, u2, u3} is an orthogonal basis
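Here is a small plain-Python sketch of the same Gram-Schmidt computation, applied to the basis found above (no libraries needed; the helper names are just my own choices):

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(vectors):
    # Returns an orthogonal (not normalised) basis.
    ortho = []
    for b in vectors:
        u = list(b)
        # subtract the projection of b onto each previously found u
        for v in ortho:
            c = dot(b, v) / dot(v, v)
            u = [ui - c * vi for ui, vi in zip(u, v)]
        ortho.append(u)
    return ortho

b1 = (1, 0, -1, 0)
b2 = (-1, -1, 0, 1)
b3 = (0, 1, 2, 2)

u1, u2, u3 = gram_schmidt([b1, b2, b3])
print(u1, u2, u3)                              # u1 = (1,0,-1,0), u2 = (-1/2,-1,-1/2,1), u3 = (1,1,1,2)
print(dot(u1, u2), dot(u1, u3), dot(u2, u3))   # all pairwise dot products are 0

If you also need an orthonormal basis, divide each u by its length at the end.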