
Thread: Linear Algebra

  1. #1 Linear Algebra 
    AlexP (Moderator)
    I'm also working on linear algebra, and I'm not totally confident on this one and was hoping someone could verify it (or not) for me.

    Let W1 and W2 be two subspaces of a vector space V. Suppose W1 ∪ W2 is a subspace of V. Then either W1 ⊆ W2 or W2 ⊆ W1. (It is actually an if and only if, but I have the other direction.)

    To show the contrapositive, suppose that W1 ⊄ W2 and W2 ⊄ W1. Then there exists an x ∈ W1 that is not in W2 and a y ∈ W2 that is not in W1. So then we have x, y ∈ W1 ∪ W2, but since x and y are not in the same subspace we do not (necessarily?) have x + y ∈ W1 ∪ W2, so it is not a subspace of V.

    It's the 'necessarily' part that's bothering me. I mean, I think it works even if 'necessarily' belongs in there, but I'd like to know whether it does or not.


    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges


  2. #2
    Forum Professor
    x+y is not in W1 and it is not in W2, so it is not in the union.
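    To spell that step out (using only the subspace axioms): if x + y were in W1, then y = (x + y) - x would be in W1, since W1 is closed under addition and scalar multiplication, contradicting the choice of y; symmetrically, x + y ∈ W2 would force x ∈ W2. So x + y ∉ W1 ∪ W2 and the union is not closed under addition. A concrete picture in R^2: take W1 to be the x-axis and W2 the y-axis; then (1, 0) + (0, 1) = (1, 1) lies in neither axis, so the union of the two axes is not a subspace.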



  3. #3
    AlexP (Moderator)
    yeah, thanks.

    I've encountered two problems that seem overly simple and I wonder if I'm missing something that needs to be a part of the proof.

    In F^n, let e_j denote the vector whose j-th coordinate is 1 and whose other coordinates are 0. Prove that {e_1, e_2, ..., e_n} generates F^n.

    Is there any more to this than just saying that if we let x = (a_1, a_2, ..., a_n) be an arbitrary element of F^n, then we can form the linear combination x = a_1 e_1 + a_2 e_2 + ... + a_n e_n?

    And then... Prove that span({x}) = {ax : a ∈ F} for any element x in a vector space.

    If our vector space is over F, then isn't that set {ax : a ∈ F} really just the definition of the span of {x}? Do we have to explicitly show that the span contains no sums with other vectors or something? Or is it just that simple?
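    In symbols, the first one really is the single line x = (a_1, a_2, ..., a_n) = a_1 e_1 + a_2 e_2 + ... + a_n e_n, which exhibits an arbitrary x ∈ F^n as a linear combination of the e_j. For the second, a linear combination of vectors drawn from the one-element set {x} can only have the form ax with a ∈ F (repeated terms collapse, e.g. a_1 x + a_2 x = (a_1 + a_2)x), and that is exactly the set in question.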
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  4. #4
    DrRocket
    Quote Originally Posted by AlexP (post #3 above)
    I think you have it.

    Unless your text has some oddball definition of "span" there is nothing to prove. The only "linear combinations" of a single vector are scalar multiples.

    What text are you using ?

  5. #5
    AlexP (Moderator)
    I'm using Linear Algebra 3rd ed. by Friedberg, Insel, and Spence. I wasn't sure what to use so I asked the professor that I was doing the reading course with this past semester. It's the book they use for the second level linear course at Syracuse University.

    Since buying this, I remembered the linear algebra book by Lang. Is that a good one to start with? I am finding this one to be OK so far though, and it seemed to get good reviews on Amazon.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  6. #6
    DrRocket
    Quote Originally Posted by AlexP (post #5 above)
    I don't know the Friedberg et al. book, and I haven't read Lang's linear algebra book either. However, I am familiar with several other books by Lang, and they are all good.

  7. #7
    AlexP (Moderator)
    I'm really stuck on the proof of this theorem...

    Replacement Theorem: Let V be a vector space that is generated by a set G containing exactly n elements, and let L be a linearly independent subset of V containing exactly m elements. Then m ≤ n and there exists a subset H of G containing exactly n - m elements such that L ∪ H generates V.

    Proof. The proof is by induction on m. The induction begins with m = 0; for in this case L = ∅, and so taking H = G gives the desired result.
    Now suppose the theorem is true for some integer m ≥ 0. We prove that the theorem is true for m + 1. Let L = {v_1, v_2, ..., v_{m+1}} be a linearly independent subset of V consisting of m + 1 elements. By the corollary to Theorem 1.6 (if S_1 ⊆ S_2 and S_2 is linearly independent, then S_1 is linearly independent), {v_1, v_2, ..., v_m} is linearly independent, and so we may apply the induction hypothesis to conclude that m ≤ n and that there is a subset {u_1, u_2, ..., u_{n-m}} of G such that {v_1, ..., v_m} ∪ {u_1, ..., u_{n-m}} generates V. ... (from Linear Algebra, 3rd ed., Friedberg et al.)

    How exactly is the induction hypothesis being used? I'm really missing something.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  8. #8
    DrRocket
    Quote Originally Posted by AlexP (post #7 above)
    Did you transcribe the proof correctly, or is there more to it?

    It does not prove the theorem. The last sentence is an application of the inductive hypothesis, but it does not prove the theorem. To prove it you need to show that one of the u_i can be replaced with v_{m+1}.

    The key is to show that any linearly independent set that does not span the space can be extended to a larger linearly independent set by selecting an element from a given spanning set.
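    Roughly, the rest of the argument (paraphrased, not the book's exact wording) goes like this: since {v_1, ..., v_m} ∪ {u_1, ..., u_{n-m}} generates V, write

        v_{m+1} = a_1 v_1 + ... + a_m v_m + b_1 u_1 + ... + b_{n-m} u_{n-m}.

    Not all the b_i can be 0, since otherwise v_{m+1} would be a linear combination of v_1, ..., v_m, contradicting the linear independence of L; in particular n - m > 0, i.e. m + 1 ≤ n. Solving the equation for a u_i with b_i ≠ 0 shows that u_i lies in the span of {v_1, ..., v_{m+1}} ∪ {u_j : j ≠ i}, so this set of (m + 1) + (n - m - 1) = n vectors still generates V. Taking H = {u_j : j ≠ i}, which has n - (m + 1) elements, gives the conclusion for m + 1.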

  9. #9
    AlexP (Moderator)
    That is not the whole proof, hence the "..." at the end. I should have made that clearer though. That's just up to the part I was stuck on. Maybe, however, I need to take the whole thing in at once. I'll take another look at it.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  10. #10
    AlexP (Moderator)
    Can someone confirm for me that this is correct? Thanks.

    For a fixed a ∈ R, determine the dimension of the subspace of P_n(R) (the vector space of all polynomials of degree n or less with real coefficients) defined by {f ∈ P_n(R) : f(a) = 0}.

    Is {(x - a), x(x - a), x^2(x - a), ..., x^(n-1)(x - a)} a basis for this subspace? If so, its dimension is then n.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  11. #11
    Dani (Forum Freshman)
    About the algebra books: I have used An Introduction to Linear Algebra by Kenneth Kuttler, and in my opinion it is a great book.

  12. #12
    DrRocket
    Quote Originally Posted by AlexP (post #10 above)
    See if you can come up with a proof of your statement. Start with the general form for a polynomial of degree at most n having "a" as a root.

  13. #13
    AlexP (Moderator)
    I should have included my reasoning.

    I believe it (let's call it β) is a basis because in general a polynomial in P_n(R) with a root at a will have the form (x - a)(c_0 + c_1 x + ... + c_{n-1} x^(n-1)). So then we have the linear combinations

    c_0 (x - a) + c_1 x(x - a) + ... + c_{n-1} x^(n-1)(x - a),

    which yield

    (x - a)(c_0 + c_1 x + ... + c_{n-1} x^(n-1)), so we choose the c_i's to be the coefficients in the polynomial obtained by dividing some given polynomial with a as a root by (x - a), so β spans the subspace.

    If c_0 (x - a) + c_1 x(x - a) + ... + c_{n-1} x^(n-1)(x - a) = 0, then we obtain c_{n-1} = 0 (from the x^n coefficient), then c_{n-2} = 0, and so on, so that β is linearly independent.

    If a = 0 then obviously {x, x^2, ..., x^n} forms a basis, and this also has n elements.

    So for every a we have a basis with n elements for the subspace, so it is of dimension n.

    Note: I wanted to post this tonight, but it's late and I'm tired and that's my excuse for anything I screwed up.
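    A quick numerical sanity check of this basis is possible (not a proof, just reassurance); the sketch below uses numpy with arbitrary test values n = 4 and a = 2:

        import numpy as np

        n, a = 4, 2.0

        # Coefficient vector (length n+1, constant term first) of x^k * (x - a) = x^(k+1) - a*x^k.
        def basis_coeffs(k):
            c = np.zeros(n + 1)
            c[k + 1] = 1.0
            c[k] = -a
            return c

        # Columns are the proposed basis polynomials (x - a), x(x - a), ..., x^(n-1)(x - a).
        B = np.column_stack([basis_coeffs(k) for k in range(n)])

        # Linear independence: the n columns have rank n.
        assert np.linalg.matrix_rank(B) == n

        # Spanning: a random polynomial of degree <= n, adjusted so that it vanishes at a,
        # should be (numerically) a linear combination of the columns.
        p = np.random.randn(n + 1)
        p[0] -= sum(p[k] * a**k for k in range(n + 1))   # now p(a) = 0
        coeffs, *_ = np.linalg.lstsq(B, p, rcond=None)
        assert np.allclose(B @ coeffs, p)

        print("dimension check passed: basis has", n, "elements")

    The rank check is the linear-independence half and the least-squares fit is the spanning half of the argument.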
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  14. #14
    DrRocket
    Quote Originally Posted by AlexP (post #13 above)
    Right.

  15. #15
    AlexP (Moderator)
    As a hint in a problem in my linear algebra book it says "Regard R as a vector space over the field of rational numbers Q."

    How is this possible? The only way that I can see R as a vector space is over itself. Even if it had more than one basis vector, we can't obtain real numbers by taking linear combinations of rationals. I'm missing something.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  16. #16
    DrRocket
    Quote Originally Posted by AlexP (post #15 above)
    R is a vector space over Q. You can add real numbers, multiply any real number by a rational number, and multiplication by rationals distributes over addition of reals.


    R is not a finite-dimensional Q-vector space. The existence of an algebraic basis (a Hamel basis) requires the axiom of choice.
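    A small example of why one basis vector is not enough: 1 and √2 are linearly independent over Q, since p·1 + q·√2 = 0 with p, q ∈ Q and q ≠ 0 would give √2 = -p/q, a rational number. So dim_Q(R) ≥ 2, and in fact no countable subset of R can span R over Q: the set of finite rational linear combinations of countably many reals is countable, while R is not.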

  17. #17
    AlexP (Moderator)
    I understand now. I failed to realize early on that a vector space need not have entries in the field it is over because that does seem to be the case most of the time.

    What, then, is a Q-vector space? A quick search didn't get me a straight definition.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  18. #18
    DrRocket
    Quote Originally Posted by AlexP (post #17 above)
    A Q-vector space is a vector space over the field of rational numbers, Q.

  19. #19
    AlexP (Moderator)
    Oh, right. R is not a finite-dimensional Q-vector space.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  20. #20
    AlexP (Moderator)
    I have run into a wall while trying to compute the matrix representations of the linear transformations T(A) = tr(A), the trace of A, and U(A) = A^t, the transpose of A. For the former we have T: M_{2x2}(F) → F and for the second we have U: M_{2x2}(F) → M_{2x2}(F).

    I tried using the fact that the j-th column will be [T(β_j)]_γ, where in general [T]_β^γ is the matrix representation of T relative to the bases β and γ, with γ the basis of the codomain, but this leads me to answers that are wrong for dimensional reasons.

    Also, I'm not 'supposed to know' matrix multiplication yet, but given the number of rows/columns needed, I'm really stumped (does [T]_β^γ act on coordinate vectors by left-multiplication?).
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  21. #21
    DrRocket
    Quote Originally Posted by AlexP (post #20 above)
    You are working on a 4-dimensional vector space. The matrices in question will be 1x4 for the trace and 4x4 for the transpose, and they will depend on the choice of an ordered basis for the space of all 2x2 matrices.

    I have no idea how you are supposed to find this representation without knowing about matrix multiplication and the correspondence between matrices and linear transformations given a choice of an ordered basis. In fact without that knowledge I don't see the point of the exercise.

    This is actually a rather strange problem.
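    To make that concrete, here is a sketch in Python/numpy of how the two representations can be computed, assuming the standard ordered basis E11, E12, E21, E22 for the 2x2 matrices (the basis choice and the helper names are assumptions for this example); the j-th column of a representation is the coordinate vector of the image of the j-th basis matrix:

        import numpy as np

        # Standard ordered basis E11, E12, E21, E22 of the 4-dimensional space of 2x2 matrices.
        basis = [np.array([[1, 0], [0, 0]]),
                 np.array([[0, 1], [0, 0]]),
                 np.array([[0, 0], [1, 0]]),
                 np.array([[0, 0], [0, 1]])]

        def coords(A):
            # Coordinate vector of a 2x2 matrix relative to the basis above.
            return A.reshape(4)

        # Trace maps 2x2 matrices to scalars, so its representation is 1x4.
        trace_rep = np.array([[np.trace(B) for B in basis]])
        print(trace_rep)          # [[1 0 0 1]]

        # Transpose maps 2x2 matrices to 2x2 matrices, so its representation is 4x4.
        transpose_rep = np.column_stack([coords(B.T) for B in basis])
        print(transpose_rep)      # permutation matrix swapping the E12 and E21 coordinates

        # Both act on coordinate vectors by left multiplication.
        A = np.array([[1.0, 2.0], [3.0, 4.0]])
        assert np.isclose((trace_rep @ coords(A))[0], np.trace(A))
        assert np.allclose(transpose_rep @ coords(A), coords(A.T))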

  22. #22
    AlexP (Moderator)
    In 2.2 they establish the correspondence between linear transformations and matrices, and in 2.3 matrix multiplication arises from the discussion of matrix representations of compositions of linear transformations.

    Does one compute matrix representations much in practice anyway? Or is it most important just to understand the correspondence between linear transformations and matrices? If so, I won't worry about these problems as much.

    EDIT: I should say, these problems were from 2.2. The correspondence has been established, but not how a matrix/linear transformation acts on vectors... I had to assume it was by matrix multiplication, but that now appears not to be the case. That's what messed me up - I could easily come up with that 4x4 matrix, but it didn't make sense.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  23. #23
    AlexP (Moderator)
    If we have a linear map T: V → V, then the rank of T is at most dim(V), but then is the rank of T∘T at most the rank of T? So then as we iterate composition of T with itself it could eventually go to the 0-map.

    Edited for correctness.
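    The rank comparison follows from a containment of ranges: R(T∘T) = T(T(V)) ⊆ T(V) = R(T), so rank(T∘T) ≤ rank(T), and more generally rank(T^(k+1)) ≤ rank(T^k). The ranks therefore form a non-increasing sequence of non-negative integers and must stabilize; they reach 0 (the zero map) for some k exactly when T is nilpotent, so "could eventually" is the right word: it can happen, but it need not.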
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  24. #24
    DrRocket
    Quote Originally Posted by AlexP (post #22 above)
    It is quite common to have to figure out the matrix representation for a linear transformation.

    As an exercise you might find the 2x2 matrix for rotation through an angle θ in the plane.

    Matrices operate on column vectors by left multiplication.

    How can you possibly establish the correspondence between matrices and linear transformations without first establishing how matrices act on vectors ?

    The difficulty with your problem is that the 2x2 matrices are elements in a 4-dimensional vector space and the trace and transpose are linear functions on that vector space.
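    For reference, that rotation exercise comes out of the same recipe: the columns of the matrix are the images of the standard basis vectors. Rotation by θ sends (1, 0) to (cos θ, sin θ) and (0, 1) to (-sin θ, cos θ), so the 2x2 matrix is

        [ cos θ   -sin θ ]
        [ sin θ    cos θ ]

    acting on column vectors by left multiplication.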
