Let a and b be non-zero scalars. Prove that if {x, y} is a linearly independent set of vectors, then so is {ax + by, ax - by}.
-
Suppose c_1 and c_2 are scalars with
c_1 (ax + by) + c_2 (ax - by) = 0.
Regrouping this as a multiple of x plus a multiple of y, we deduce that
(c_1 a + c_2 a) x + (c_1 b - c_2 b) y = 0.
Since x and y are linearly independent, both of these coefficients must be 0, and we get the system
(1) c_1 a + c_2 a = 0,
(2) c_1 b - c_2 b = 0.
Since a is nonzero, we can divide both sides of (1) by it to deduce (3) c_1 + c_2 = 0, and since b is nonzero, we can divide both sides of (2) by it to deduce (4) c_1 - c_2 = 0. Adding (3) to (4) gives 2 c_1 = 0, hence c_1 = 0; plugging this into either (3) or (4) then shows that c_2 = 0 also.
So we have deduced that if c_1 and c_2 are scalars with c_1 (ax + by) + c_2 (ax - by) = 0, then c_1 = c_2 = 0. This shows from the definition that {ax + by, ax - by} is linearly independent.
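If you would like a quick sanity check of the algebra (optional, and not part of the proof), here is a short sympy sketch that solves the system (1)-(2) symbolically, with a and b declared nonzero:

    from sympy import symbols, solve, Eq

    # a, b are arbitrary nonzero scalars; c1, c2 are the unknown coefficients.
    a, b = symbols('a b', nonzero=True)
    c1, c2 = symbols('c1 c2')

    # Equations (1) and (2): the coefficients of x and y must both vanish.
    system = [Eq(c1*a + c2*a, 0), Eq(c1*b - c2*b, 0)]

    # Expected output: {c1: 0, c2: 0}, i.e. only the trivial solution.
    print(solve(system, [c1, c2]))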
There are simpler ways to do this exercise, but I thought I would do it in a way that might let you see how it generalizes.
It is in fact true that whenever a, b, c, d are numbers such that the 2x2 matrix M with first row a, b and second row c, d is invertible, and {x, y} is linearly independent, the set {ax + by, cx + dy} is also linearly independent. The proof is similar: assuming one has scalars c_1 and c_2 with c_1 (ax + by) + c_2 (cx + dy) = 0, one deduces that (c_1 a + c_2 c) x + (c_1 b + c_2 d) y = 0, and from linear independence of {x, y} one deduces the system of equations a c_1 + c c_2 = 0 and b c_1 + d c_2 = 0. This system of scalar equations is equivalent to the matrix equation M^T v = 0, where M^T denotes the transpose of the matrix M mentioned earlier, and v denotes the 2x1 column vector with entries c_1, c_2. Since M is invertible, so is M^T, and we deduce from M^T v = 0 that v = 0, i.e., that c_1 = c_2 = 0. If you take c = a and d = -b here, you get your problem back, and the condition that M be invertible is precisely the condition that det(M) = -2ab be nonzero, i.e., that a and b both be nonzero.
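Again purely as an optional check, here is a short sympy sketch of the determinant computation in the previous paragraph (the symbols mirror a, b, c, d above):

    from sympy import Matrix, symbols

    a, b, c, d = symbols('a b c d')

    # M is the 2x2 coefficient matrix from the general statement.
    M = Matrix([[a, b], [c, d]])

    # The system a*c1 + c*c2 = 0, b*c1 + d*c2 = 0 has coefficient matrix M^T.
    print(M.T)      # Matrix([[a, c], [b, d]])
    print(M.det())  # a*d - b*c; M (and hence M^T) is invertible iff this is nonzero

    # Specializing to the original problem, c -> a and d -> -b:
    print(M.det().subs({c: a, d: -b}))  # -2*a*b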
There is a generalization of this to n-tuples of vectors. E.g., when n = 3 it states that if {x, y, z} is linearly independent, then whenever the 3x3 matrix with first row a, b, c, second row d, e, f, and third row g, h, i is invertible, the set {ax + by + cz, dx + ey + fz, gx + hy + iz} will be linearly independent also. I hope this helped.
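As an optional concrete check of that n = 3 statement, here is a short sympy sketch. The matrix entries are example values I chose, and I take {x, y, z} to be the standard basis of R^3 (certainly independent), so the three combinations are just the rows of the matrix:

    from sympy import Matrix

    # A concrete invertible 3x3 coefficient matrix (example values).
    A = Matrix([[2, 1, 0],
                [0, 1, 3],
                [1, 0, 1]])

    # With {x, y, z} the standard basis of R^3, the combinations
    # a*x + b*y + c*z, d*x + e*y + f*z, g*x + h*y + i*z are the rows of A,
    # so their independence is equivalent to A having rank 3.
    print(A.det())   # 5, nonzero, so A is invertible
    print(A.rank())  # 3, so the three combinations are linearly independent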