Hi there,
I'd like to know if anyone can "prove" (or explain) why the following assertion is true:
det(matrix) = 0 <==> rows or columns are linearly independent
Cheers

Actually that assertion is false: the determinant of a matrix is zero if, and only if, the columns are linearly DEPENDENT. You should try proving it for yourself first, or show us what you've done if you've already tried. But I'll give you a hint: it has to do with the reduced row echelon form of the matrix.
Yes of course, my mistake, I forgot the slash after the "equals" sign to mean "not equal to".
Anyway, I've tried doing it with a 3 by 3 matrix as follows:
a b c
d e f
g h i
But I can't transform it to its row echelon form, because I would have to multiply a row by one of the entries.
For example, I would multiply row 1 by d so the matrix would become:
ad bd cd
d e f
g h i
and then I would multiply row 2 by a to get:
ad bd cd
ad ae af
g h i
After this, I would subtract row 1 from row 2:
ad bd cd
0 ae-bd af-cd
g h i
The problem is that all this is only allowed if I'm sure that a and d aren't equal to 0; otherwise the "new" matrices have different determinants.
I've also tried using simultaneous equations to "find out" what conditions are required so that none of the rows or columns is a linear combination of the others,
and I had exactly the same problem.
So I can't think of anything else that would generalise the determinant test to any square matrix.
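For what it's worth, the scaling worry in the attempt above can be checked numerically. Here is a pure-Python sketch (the matrix entries are my own arbitrary choice, not from the thread): multiplying row 1 by d and row 2 by a multiplies the determinant by a*d, and subtracting one row from another leaves it unchanged, so the zero/nonzero status of the determinant is preserved exactly when a and d are nonzero.

```python
# Pure-Python check of the row operations described above.
# det3 computes a 3x3 determinant by cofactor expansion along the first row.
def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# An arbitrary example matrix with a != 0 and d != 0.
M = [[2, 3, 5],
     [7, 11, 13],
     [17, 19, 23]]
a, d = M[0][0], M[1][0]

# Multiply row 1 by d and row 2 by a (each scales the determinant).
M1 = [[x * d for x in M[0]],
      [x * a for x in M[1]],
      M[2][:]]
# Subtract row 1 from row 2 (leaves the determinant unchanged).
M2 = [M1[0][:],
      [M1[1][j] - M1[0][j] for j in range(3)],
      M1[2][:]]

# det(M2) = a * d * det(M), so det(M2) = 0 iff det(M) = 0 when a, d != 0.
print(det3(M), det3(M2))
```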
I agree with wallaby in letting you work out the formal proof yourself.
As for an explanation why the determinant is zero if the columns are dependent, remember the geometrical visualization of what a determinant is: its absolute value is simply the area/volume spanned by the column vectors. If those vectors are linearly dependent, then the area/volume is obviously zero, because the vectors collapse onto a common line or plane.
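In two dimensions that picture is easy to check by hand. A minimal pure-Python sketch (the vectors are my own illustrative choice): |ad - bc| is the area of the parallelogram spanned by the columns, and it collapses to zero as soon as one column is a multiple of the other.

```python
# The 2x2 determinant as a signed area (pure Python).
def det2(v, w):
    # v and w are the column vectors (v0, v1) and (w0, w1)
    return v[0] * w[1] - v[1] * w[0]

area_independent = abs(det2((2, 1), (1, 3)))  # a genuine parallelogram
area_dependent = abs(det2((2, 1), (4, 2)))    # (4, 2) = 2 * (2, 1)
print(area_independent, area_dependent)
```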
Believe me, I've tried everything I can think of to work it out for myself; that's how I would have wanted to find it out.
The geometrical visualization is a good point, and I had realized that, except that it can only be visualized in two or three dimensions.
And I'm sure there must be a more algebraic way.
Do you actually know it? Or are you just guessing it can be worked out? Maybe what I did in my previous post is the only way, and the problem therefore needs to be dealt with case by case: "a = 0" and "a ≠ 0", etc.
In theory the method you presented in your post above, which it seems was to actually perform the row reduction of a general 3x3 matrix, could work (though there would be some complicated algebra involved). An easier method would be to recall, if you've learnt this before, that the row-reduced matrix U can be expressed in terms of some matrix A by a product of elementary matrices. These elementary matrices are defined so that multiplying them by the matrix A is the equivalent of carrying out a row operation (swapping, scaling, subtraction). So the kth row operation performed on the matrix A to yield the row reduced form will be denoted by E_k, and thus we may represent the matrix U by
U = E_k ... E_2 E_1 A.
You can take my word for it or find more information here, but the determinants of these elementary matrices are nonzero (proving this would probably be a good exercise to run through). So now, using the properties of determinants and the equation I provided above, you should be able to fill in the blanks; just remember what the matrix U will look like if the columns are linearly independent and what this means for the determinant.
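As a concrete illustration of that hint, here is a pure-Python sketch (the particular matrices are my own examples) of the three kinds of elementary matrix: each one has a nonzero determinant, and left-multiplying by one performs the corresponding row operation.

```python
# Elementary 3x3 matrices and their determinants (pure Python).
def det3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

E_swap = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # swap rows 1 and 2: det = -1
E_scale = [[5, 0, 0], [0, 1, 0], [0, 0, 1]]  # scale row 1 by 5:   det = 5
E_add = [[1, 0, 0], [-2, 1, 0], [0, 0, 1]]   # row 2 -= 2*row 1:   det = 1

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]

# Every elementary matrix has a nonzero determinant ...
print(det3(E_swap), det3(E_scale), det3(E_add))
# ... so det(E A) = det(E) det(A) is zero exactly when det(A) is.
print(det3(matmul(E_scale, A)), det3(E_scale) * det3(A))
```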
Well, I would simply do it this way:
1. Proposition: the determinant is zero if the column vectors are linearly dependent.
2. If two column vectors A and B are linearly dependent, then their cross product A x B is zero.
3. If the cross product A x B is zero, then so is each 2x2 determinant formed by taking A and B as column vectors of a submatrix (those subdeterminants are exactly the components of A x B).
4. Repeat (2) and (3) for each combination of column vectors in the matrix
5. If all subdeterminants above are zero, then so is the total determinant of the original matrix.
Not sure if this is mathematically rigorous ( I am not a mathematician ), but that would be my approach to proving this.
Hey, that's a cool way of seeing it for someone who isn't a mathematician. There is one small problem, though:
As you said, if the three subdeterminants are zero, then the determinant of the matrix is obviously zero.
But the determinant of the matrix could very well be zero with two or even three of the subdeterminants not being zero.
So your proof is good for showing that in some cases linear dependence gives a zero determinant, but it doesn't show anything for the cases where the determinant is zero without the three subdeterminants being zero.
And it doesn't go the other way, showing that if the determinant is zero then there is linear dependence.
But nice thinking
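To make that gap concrete, here is a pure-Python example (my own choice of columns): the determinant vanishes even though every pairwise cross product is nonzero, because the dependence only shows up among all three columns at once.

```python
# A 3x3 matrix with zero determinant but no pairwise-dependent columns.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def det3_cols(c1, c2, c3):
    # scalar triple product c1 . (c2 x c3) equals the determinant
    x = cross(c2, c3)
    return c1[0]*x[0] + c1[1]*x[1] + c1[2]*x[2]

# The third column is the sum of the first two: the three columns are
# dependent as a set, but no column is a multiple of another.
c1, c2, c3 = (1, 0, 0), (0, 1, 0), (1, 1, 0)
print(det3_cols(c1, c2, c3))                        # zero determinant
print(cross(c1, c2), cross(c1, c3), cross(c2, c3))  # all nonzero
```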
You don't need to know much of anything about basis changes, although I realise that I have been assuming a few things about what you know. A set of vectors v_1, ..., v_n will be linearly independent iff the equation
c_1 v_1 + c_2 v_2 + ... + c_n v_n = 0
implies that all of the coefficients c_i are equal to zero.
We can express that equation above in matrix form, A c = 0, where the columns of A are the vectors v_i and c is the column of coefficients. Now if the determinant of A is not equal to zero then the matrix will have a multiplicative inverse A^(-1). Thus the solution to the matrix equation, or the linear system, will be c = A^(-1) 0 = 0. What this means for us is that if det(A) ≠ 0 then no vector in the set can be written as a linear combination of the others.
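The other direction can also be seen concretely: if the columns are dependent, the coefficients of the dependence give a nonzero solution of A c = 0, and the determinant vanishes. A pure-Python sketch, with a matrix of my own choosing whose third column is the sum of the first two:

```python
# Dependent columns give a nonzero solution of A c = 0 and det(A) = 0.
def det3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Columns: v1 = (1, 2, 0), v2 = (0, 1, 4), v3 = v1 + v2 = (1, 3, 4).
A = [[1, 0, 1],
     [2, 1, 3],
     [0, 4, 4]]
c = [1, 1, -1]          # encodes the dependence v1 + v2 - v3 = 0

print(matvec(A, c), det3(A))   # both the product and the determinant are zero
```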
So what was all of that nonsense about reduced row echelon form? Well, if you're like me and can never remember why det(A) = 0 implies that A has no multiplicative inverse, then the above won't seem very satisfying, not that I'm saying it will anyway. (Basically I changed my mind about what was the easiest way to prove the initial assertion and am trying to cover my ass.)
Well, if you start with det(A) = 0 = 1 + 1 - 2 = 1*[0*1 - (-1)*1] - 1*[1*1 - (-1)*(-2)] + (-2)*[1*1 - 0*(-2)], you could say that it's the determinant of this matrix:
1 1 -2
1 0 -1
-2 1 1
And then you can multiply that matrix by whatever you want, or you can even multiply rows or columns of it by whatever you want, and the determinant will still be zero, so there are loads of matrices of this kind.
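That claim is easy to verify numerically. A pure-Python sketch with an illustrative singular matrix of my own (row 2 is twice row 1): scaling any rows by any factors multiplies the determinant by those factors, so zero stays zero.

```python
# Row scaling cannot rescue a zero determinant (pure Python).
def det3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[1, 2, 3],
     [2, 4, 6],    # 2 * row 1, so det(A) = 0
     [0, 1, 1]]
scaled = [[7 * x for x in A[0]],
          A[1][:],
          [-3 * x for x in A[2]]]

print(det3(A), det3(scaled))   # both zero
```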
This is one of those things that you'll see a proof of once and then just take as fact thereafter; as a result the details of a rigorous proof are a bit fuzzy, but I think the following can provide a reasonably compelling argument for why it's true.
Given an n x n matrix A, some matrix A^(-1) will be a multiplicative inverse to A if the identity A A^(-1) = I holds. By the properties of determinants it follows that
det(A) det(A^(-1)) = det(I) = 1, i.e. det(A^(-1)) = 1/det(A).
In the event that det(A) = 0, we will not be able to define a determinant for A^(-1). If A^(-1) exists then its elements should be finite in value and we should be able to calculate a finite determinant; thus an indeterminate determinant (I hate that I just used those two words together) would be an indication that the inverse does not exist.
Like I said, not rigorous, but it's for this very reason that the methods of finding an inverse break down. (They seem to contain a 1/det(A) factor.)
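In the 2x2 case that 1/det(A) factor is explicit. A minimal pure-Python sketch (exact arithmetic via fractions; the example matrix is my own):

```python
# The 2x2 inverse has the closed form (1/det) * [[d, -b], [-c, a]],
# so constructing it divides by det and fails exactly when det == 0.
from fractions import Fraction

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    a, b = m[0]
    c, d = m[1]
    det = a * d - b * c            # ZeroDivisionError below if det == 0
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

A = [[Fraction(2), Fraction(1)],
     [Fraction(4), Fraction(3)]]
Ainv = inv2(A)

# det(A) * det(A^-1) = det(I) = 1 whenever the inverse exists.
print(det2(A), det2(Ainv), det2(A) * det2(Ainv))
```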
I also like this explanation. I'm reading many different methods of proof in this discussion; if I organize all these ideas I may get to something.
The only problem I have here is that I'm pretty sure that determinants were "invented", if I may say so, before it was worked out that det(A*B) = det(A) det(B).
Maybe I can make myself a little clearer about where I'm trying to get to:
With a 2x2 matrix, it's really easy to see whether the two columns/rows are independent or not: you just look at proportionality. And so, checking whether three rows or columns are independent must be some kind of "extension" of the idea of proportionality to three objects, and I'm amazed that I can't work it out.
Wouldn't this be better off in the Math forum?
If Wikipedia is to be believed, then the notion of a determinant has been around since ancient times, but it was studied more seriously from the end of the 16th century. Apparently the equality between the determinant of a product and the product of determinants was not formally proven until 1812, courtesy of Cauchy and Binet. So how would we have known that a nonzero determinant implies the existence of a solution, to a system of linear equations, before 1812? Well, I think that comes down to noticing that a system with a solution can be reduced to an upper triangular form with nonzero diagonal, the determinant of which is evidently nonzero, while systems that cannot be brought to such a form after sufficient row operations will have a determinant of zero. (As will be the case with the original system.) This much would have been within the grasp of early mathematicians, but personally I like to reap the benefits of their labour and go with the easier explanation. (Then again, applied maths is more my thing.)
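That observation can be sketched in a few lines of pure Python (exact fractions; the example matrices are my own): reduce to upper triangular form and read the determinant off the diagonal, up to sign flips from row swaps. A singular matrix ends up with a zero on the diagonal.

```python
# Gaussian elimination to upper triangular form with exact fractions.
from fractions import Fraction

def upper_triangular(m):
    m = [[Fraction(x) for x in row] for row in m]
    n = len(m)
    for col in range(n):
        # find a pivot at or below the diagonal; no pivot means a zero column
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[col], m[pivot] = m[pivot], m[col]   # row swap (only flips det's sign)
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [m[r][j] - factor * m[col][j] for j in range(n)]
    return m

def diag_product(m):
    p = Fraction(1)
    for i in range(len(m)):
        p *= m[i][i]
    return p

singular = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]     # row 2 = 2 * row 1
regular = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(diag_product(upper_triangular(singular)))  # zero diagonal entry
print(diag_product(upper_triangular(regular)))   # nonzero: det up to sign
```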
Of course! And I'm not trying to redo all their work, because progress moves forward. I'm just very curious about it, and it disturbs me somehow not to be able to understand it. So, you can't tell whether a system of equations can be reduced unless you know what the coefficients are, because to reduce it you have to multiply rows by those coefficients, but if some of them are equal to zero you can't any more.
So I'm wondering how they worked out "a calculation that tells whether systems can or can't be reduced, without knowing what their coefficients are", one that would consequently work for any square system of equations.