
Thread: Manifolds

  1. #1 Manifolds 
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Well, well, I have been remiss! I promised something on manifolds and failed to deliver.

    Let's have a kindergarten definition: a manifold is a set of points to which I can assign some notion of "shape". Examples: the "shape" of the manifold $\mathbb{R}$ is that of a line - it's called the "real line" for this reason. The "shape" of the manifold $\mathbb{R}^2$ is that of a plane.

    The "shape" of the manifold $S^1$ is that of the 1-sphere, aka the circle; the "shape" of the manifold $S^2$ is that of the 2-sphere, aka the sphere.

    We can allow ourselves to think of the "points" I referred to as numbers, but it is probably best if we don't - better if we just think of them as abstract little buggers with zero dimension.

    The essential point about manifolds is this: if you stand up real close (taking off your specs), they are indistinguishable from some Euclidean space $\mathbb{R}^n$. So, take the manifold $S^2$. Obviously Euclidean geometry doesn't apply globally (the sum of the internal angles of a triangle here, say, is not 180° - it's not flat in the Euclidean sense), but, by taking a sufficiently small "area" of the sphere, we may say that it is locally Euclidean.

    (This makes sense - it is, after all, why we thought that Earth was flat for so long).

    So, we are grown-ups, and we want a grown-up definition. Here goes (you may want to check our topology thread to get some of these terms):

    an $n$-manifold is a topological space $M$ such that, for any point $m \in M$, there is some neighbourhood $U \ni m$ and a homeomorphism $\varphi: U \to \mathbb{R}^n$.

    Recall that "homeomorphism" is just the topological version of isomorphism.

    Anyhoo - the nice thing, or rather things, about this is that, since $\mathbb{R}^n$ is the only space where we know how to do calculus, then we also know how to do local calculus on $M$, that is in $U \subseteq M$.

    We also know that any $\mathbb{R}^n$ admits of a set of Cartesian coordinates, say $(x^1, x^2, \ldots, x^n)$, which is "inherited", under the homeomorphism $\varphi$, by the neighbourhood $U$, that is $\varphi(x^1, x^2, \ldots, x^n) = u \in U$.

    This is already over-long.

    Let's accept (though it's easily shown by the axioms of our theory) that, for any $m \in M$ the neighbourhood $U$ is not unique.

    Thus we may have $m \in U \cap V$, say. But we have that, in general, the coordinates that $m$ inherits from $U$ are not the same as those it inherits from $V$.

    This is where the fun begins - stay tuned!
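    (A minimal Python sketch of the "locally Euclidean" idea, not from the post above; the chart functions and their names are illustrative assumptions, not anything defined in the thread.)

    import numpy as np

    # The 1-sphere S^1 = {(cos t, sin t)} is a 1-manifold. Cover it with two
    # overlapping "angle" charts, each a homeomorphism from an open arc of the
    # circle onto an open interval of the real line R.

    def chart_U(p):
        """Chart on U = S^1 minus the point (-1, 0): angle in (-pi, pi)."""
        x, y = p
        return np.arctan2(y, x)

    def chart_V(p):
        """Chart on V = S^1 minus the point (1, 0): angle in (0, 2*pi)."""
        x, y = p
        t = np.arctan2(y, x)
        return t if t > 0 else t + 2 * np.pi

    p = (np.cos(-2.0), np.sin(-2.0))      # a point lying in both chart domains
    print(chart_U(p), chart_V(p))         # -2.0 and 2*pi - 2.0: two different local
                                          # coordinates for the same point, related by
                                          # the transition function t -> t + 2*pi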


    Reply With Quote  
     


  3. #2  
    Forum Ph.D.
    Join Date
    Apr 2008
    Posts
    956
    Ooh, this should be exciting! :-D I'm afraid I got lost in the functor threads but topology is more of my area of exploration; I should be better equipped to follow this discussion.

    Lead on, maestro!


    Reply With Quote  
     

  4. #3 Re: Manifolds 
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Quote Originally Posted by Guitarist
    We also know that any $\mathbb{R}^n$ admits of a set of Cartesian coordinates, say $(x^1, x^2, \ldots, x^n)$, which is "inherited", under the homeomorphism $\varphi$, by the neighbourhood $U$, that is $\varphi(x^1, x^2, \ldots, x^n) = u \in U$.
    I have a question that I think comes down to my understanding or lack thereof of homeomorphisms... You say $\varphi(x^1, \ldots, x^n) = u$, but is it equally valid to say $\varphi(u) = (x^1, \ldots, x^n)$?

    EDIT:

    I believe this is where I misunderstood, correct me if I'm wrong... Is the $\varphi$ you referred to in the above quote not the same as the one in $\varphi: U \to \mathbb{R}^n$? Is it actually the inverse of that $\varphi$? In other words, I'm thinking it's not the map $U \to \mathbb{R}^n$, but instead the map $\mathbb{R}^n \to U$. Am I correct?
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  5. #4  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    OK, Chemboy, good catch! I mis-spoke myself. Before I correct myself, let me say a few words about function notation; we will be using it a lot, so it's as well to be clear.

    The standard notation is, not to put too fine a point on it, a bag of shite - but it is standard, so we have to live with it.

    Given some arbitrary function $f: X \to Y$, they say that $f(x) \in Y$ is the image of $x \in X$ under $f$. If this function is "well-behaved", the image is unique - nice functions may not spray themselves all over the codomain $Y$.

    The notation $f^{-1}(y)$ means the pre-image of $y \in Y$ - it does NOT in general mean it is an inverse.

    Example: suppose $f(x) = x^2$. Then $f(2) = 4$. But $f^{-1}(4) = \{2, -2\}$; in other words, we assume that, in general, the pre-image of a point in the codomain is a set in the domain.

    In the case that $f$ is an isomorphism (in our case a homeomorphism) the pre-image of a point is, by definition, unique, and we may think of $f^{-1}$ as its inverse.

    So: when I first introduced the homeomorphism $\varphi$, I did so as a mapping $\varphi: U \to \mathbb{R}^n$. But, next go around, I said that $\varphi(x^1, \ldots, x^n) = u$, which implies $\varphi: \mathbb{R}^n \to U$, which is, of course, terribly misleading. I want the first to be true, so, by the above, you should read this as $\varphi^{-1}(x^1, \ldots, x^n) = u$.

    Sorry for any confusion. Going to work now, maybe more later......
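    (A small Python sketch of my own, not from the post, of image vs. pre-image for $f(x) = x^2$ on a finite domain; it shows that a pre-image is a set, not an inverse.)

    domain = [-3, -2, -1, 0, 1, 2, 3]

    def f(x):
        return x * x

    def preimage(y):
        """All points of the domain that f sends to y."""
        return {x for x in domain if f(x) == y}

    print(f(2))          # 4 -- the image of 2 is unique
    print(preimage(4))   # {2, -2} -- the pre-image of 4 is a set with two elements

    # Only when f is a bijection does preimage(y) always contain exactly one point,
    # and only then can we sensibly speak of an inverse function.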
    Reply With Quote  
     

  6. #5  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Anyway, after that little hiccup, let's do this.

    Suppose that $U, V$ are neighbourhoods of $m \in M$, with $U \cap V \neq \emptyset$.

    Let $\varphi: U \to \mathbb{R}^n$ and $\psi: V \to \mathbb{R}^n$ be homeomorphisms.

    Then evidently $m \in U \cap V$ can be described by the coordinate functions on $U$ or those on $V$. For sanity to prevail, therefore, we must have a way of relating these coordinates to each other. The easiest way to see this is as follows;

    Note first that $U, V$ and $U \cap V$ are open sets by definition. Notice also that since a homeomorphism is a continuous (invertible) bijection, then the images $\varphi(U \cap V)$ and $\psi(U \cap V)$ will be open in $\mathbb{R}^n$. Now let $(x^1, \ldots, x^n) = \varphi(m)$ denote the point in $\varphi(U \cap V)$ such that $\varphi^{-1}(x^1, \ldots, x^n) = m$.

    Let $(y^1, \ldots, y^n) = \psi(m)$, and we will have the composite map (right-to-left, recall) $\psi \circ \varphi^{-1}$, specifically the homeomorphism $\psi \circ \varphi^{-1}: \varphi(U \cap V) \to \psi(U \cap V)$ (with inverse $\varphi \circ \psi^{-1}$), which defines the coordinate transformation $(x^1, \ldots, x^n) \mapsto (y^1, \ldots, y^n)$.

    Notice that each $x^i$ in the $n$-tuple $(x^1, \ldots, x^n)$ is just a real number, likewise for the tuple $(y^1, \ldots, y^n)$. This implies there are real numbers $y^1, \ldots, y^n$ such that $\psi(m) = (y^1, \ldots, y^n)$, and that each $y^i$ is completely determined by some fixed $(x^1, \ldots, x^n)$; conversely, each $x^j$ is completely determined by some fixed $(y^1, \ldots, y^n)$.

    It is therefore customary to exchange these numbers for functions and write the transformation law on $U \cap V$ as $y^i = y^i(x^1, \ldots, x^n)$, with the obvious inverse $x^j = x^j(y^1, \ldots, y^n)$.

    Umm.... I fear I may not have explained this very clearly (in fact I can feel river_rat's fingers closing round my throat). Shoot me down, anybody........
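    (A Python sketch of my own, not from the thread: two charts whose overlap is the plane minus the non-positive x-axis, and the transition map $\psi \circ \varphi^{-1}$ between their coordinates. The chart names and the polar example are illustrative assumptions.)

    import numpy as np

    def phi(p):
        """Chart 1: Cartesian coordinates (x1, x2)."""
        return np.array(p, dtype=float)

    def phi_inverse(x):
        return x                       # the Cartesian chart is the identity here

    def psi(p):
        """Chart 2: polar coordinates (r, theta), theta in (-pi, pi)."""
        x, y = p
        return np.array([np.hypot(x, y), np.arctan2(y, x)])

    def transition(x):
        """The coordinate transformation psi o phi^{-1}: (x1, x2) -> (y1, y2)."""
        return psi(phi_inverse(x))

    p = (1.0, 1.0)                     # a point in the overlap of the two charts
    print(phi(p))                      # [1. 1.]           coordinates in chart 1
    print(transition(phi(p)))          # [1.4142 0.7854]   coordinates in chart 2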
    Reply With Quote  
     

  7. #6  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Quote Originally Posted by Guitarist
    Thus , say. But we have that .
    It's true that , right?

    EDIT:

    After reading through the above post, which I had not done prior to posting this, I think it's pretty obvious that what I said above is true...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  8. #7  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Ok, now for the newest material...

    when you say $y^i(x^1, \ldots, x^n)$, are you just multiplying $y^i$ and $(x^1, \ldots, x^n)$ or is that function notation, like it seems to be down below in the inverse $x^j = x^j(y^1, \ldots, y^n)$? Is there one function for each $y^i$ or is there just one that's applied to the entire tuple? If I had to guess I'd go with the latter, but I want to make sure. I guess I need a little clarification here. The concept behind it is perfectly clear, I'm just having notational issues...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  9. #8  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by Chemboy
    when you say $y^i(x^1, \ldots, x^n)$, are you just multiplying $y^i$ and $(x^1, \ldots, x^n)$ or is that function notation,
    Yes, the latter, well spotted! I was going to point that out next
    Is there one function for each $y^i$ or is there just one that's applied to the entire tuple? If I had to guess I'd go with the latter.
    However, in this case you'd be wrong - in fact your first suggestion is correct. Let's write that out in full, remembering that we are substituting $(x^1, \ldots, x^n) = \varphi(m)$ and $(y^1, \ldots, y^n) = \psi(m)$:

    $y^1 = y^1(x^1, x^2, \ldots, x^n)$
    $y^2 = y^2(x^1, x^2, \ldots, x^n)$
    $\vdots$
    $y^n = y^n(x^1, x^2, \ldots, x^n)$

    And as you say, the $y^i$ here are to be seen as $n$ different functions in the $n$ variables $x^1, \ldots, x^n$, likewise for the inverse transformation.

    As a consequence, provided that our manifold admits of derivatives - let it! - we may cast this as $\dfrac{\partial y^i}{\partial x^j}$, and for the inverse $\dfrac{\partial x^j}{\partial y^i}$, just as we would for any function in $n$ variables.

    Keep this in mind, we will find it useful later.
    Reply With Quote  
     

  10. #9  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Let me now quickly show you a useful trick, and then Mrs. G. insists I do something "useful" - the heart sinks.....

    Recall our coordinate transformation laws:

    $y^i = y^i(x^1, \ldots, x^n)$, with the inverse
    $x^j = x^j(y^1, \ldots, y^n)$.

    We can make the obvious substitutions of one into the other such that

    $x^j = x^j(y^1(x^1, \ldots, x^n), \ldots, y^n(x^1, \ldots, x^n))$, or if you prefer it in full,

    $x^1 = x^1(y^1(x^1, \ldots, x^n), \ldots, y^n(x^1, \ldots, x^n))$
    $\vdots$
    $x^n = x^n(y^1(x^1, \ldots, x^n), \ldots, y^n(x^1, \ldots, x^n))$

    and likewise for the inverse.

    Then, again assuming our manifold is differentiable, we may differentiate with respect to $x^k$, say, apply the chain rule of calculus and find that, for, say, the $j$-th such identity on $x^j$ in the above list, that

    $\dfrac{\partial x^j}{\partial x^k} = \sum_i \dfrac{\partial x^j}{\partial y^i}\,\dfrac{\partial y^i}{\partial x^k}$

    But now notice that the variables (coordinate functions, mind) $x^1, \ldots, x^n$ are independent, by definition; that is, for $j \neq k$, no $x^j$ can be expressed in terms of $x^k$, so we may say that $\dfrac{\partial x^j}{\partial x^k} = \delta^j_k$, where $\delta^j_k = 1$ when $j = k$ and $0$ otherwise.

    Then $\sum_i \dfrac{\partial x^j}{\partial y^i}\,\dfrac{\partial y^i}{\partial x^k} = \delta^j_k$.

    Be very sure you understand this construction, as it is the key to what follows. In particular, make sure you understand how we are peppering indices all over the place.

    Now it seems I have some shelves to put up - O joy......
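    (A numerical Python sketch of my own of the identity just derived: the Jacobian of the transformation and the Jacobian of its inverse multiply to the Kronecker delta, i.e. the identity matrix. The polar-coordinate example is an illustrative assumption, not from the thread.)

    import numpy as np

    def y_of_x(x):
        """Forward transformation: Cartesian (x1, x2) -> polar (r, theta)."""
        return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

    def x_of_y(y):
        """Inverse transformation: polar (r, theta) -> Cartesian (x1, x2)."""
        return np.array([y[0] * np.cos(y[1]), y[0] * np.sin(y[1])])

    def jacobian(f, p, h=1e-6):
        """J[i, j] = d f^i / d p^j, by central differences."""
        n = len(p)
        J = np.zeros((n, n))
        for j in range(n):
            e = np.zeros(n); e[j] = h
            J[:, j] = (f(p + e) - f(p - e)) / (2 * h)
        return J

    x0 = np.array([1.0, 2.0])
    dy_dx = jacobian(y_of_x, x0)              # dy^i/dx^k at x0
    dx_dy = jacobian(x_of_y, y_of_x(x0))      # dx^j/dy^i at the corresponding point

    print(np.round(dx_dy @ dy_dx, 6))         # the identity matrix: delta^j_k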
    Reply With Quote  
     

  11. #10  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Quote Originally Posted by Guitarist
    Notice that each in the -tuple is just a real number, likewise for the tuple . This implies there are real numbers such that , and that each is completely determined by some fixed , otherwise is completely determined by some fixed .

    It is therefore customary to exchange these numbers and write the transformation law on as , with the obvious inverse .
    When you say , is a or a ? Beyond that I don't really have any specific questions, but my understanding is definitely hurting in this (quoted) area. I get the basic idea, but not the specifics, when it comes to notation and what exactly and are, etc.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  12. #11 Re: Manifolds 
    . DrRocket's Avatar
    Join Date
    Aug 2008
    Posts
    5,486
    Quote Originally Posted by Guitarist
    Well, well, I have been remiss! I promised something on manifolds and failed to deliver. [...]
    Question: Is it your intent to talk about topological manifolds or differentiable manifolds ? In my normal way of thinking you need to have the machinery of geometry (differential structure) as well as of topology in order to talk about shape.

    Question: What is a topologist ?
    Answer: Someone who can't tell the difference between a donut and a coffee cup.
    Reply With Quote  
     

  13. #12  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Yeah, we will be talking differentiable manifolds; as your joke suggests, although we may think about a topological manifold as having something like a "shape", it is not fixed and not easy to define - I imagine the best we could do is ask about connectedness and compactness, so you're right, shape was a loose word to use.

    As it happens, I was about to launch into that. But let me first answer Chemboy's question.

    The superscripts I am using are any and all of the natural numbers 1, 2, ...., n.

    Note in the following
    Quote Originally Posted by Guitarist
    $y^1 = y^1(x^1, x^2, \ldots, x^n)$
    $y^2 = y^2(x^1, x^2, \ldots, x^n)$
    $\vdots$
    $y^n = y^n(x^1, x^2, \ldots, x^n)$
    that the $(x^1, x^2, \ldots, x^n)$ is called an n-tuple, for which I use the notation $\{x^i\}$.

    Note also that $y^k$ is a single element in an n-tuple, since reading down this list on the LHS we see that $(y^1, y^2, \ldots, y^n)$ is an n-tuple. So that reading down the list on the RHS we have an n-tuple given by the equalities shown, which I wrote as $y^k = y^k(x^1, \ldots, x^n)$.

    So writing this all out in full, I will have $(y^1, \ldots, y^n) = (y^1(x^1, \ldots, x^n), \ldots, y^n(x^1, \ldots, x^n))$.

    Which do you prefer?

    Recall, as Chemboy spotted early on, the $y^k$ are acting as $n$ distinct functions in $n$ variables: $y^k = y^k(x^1, \ldots, x^n)$. Such a function that is continuous and differentiable is said to be of class $C^1$. A function that has derivatives up to and including $k$-th order is said to be of class $C^k$. Note this implies that a $C^{k+1}$ function is of necessity a $C^k$ function. Note also that the class of $C^{k+1}$ functions is, in general, smaller than that of $C^k$ functions.

    A function that is differentiable as many times as we please is called a $C^\infty$ or smooth function. We are going to insist that each $y^k = y^k(x^1, \ldots, x^n)$ is smooth in this sense.

    Remembering this defines a transition function on $U \cap V$, one says that the charts $(U, \varphi)$ and $(V, \psi)$ are compatible.

    Finally, if our manifold can be completely covered by a collection (an "atlas") of compatible charts, it is said to be a smooth, or $C^\infty$, manifold.

    These are the sorts of manifolds we shall be most interested in, as they have a considerably richer structure than those that are merely topological manifolds (although a smooth manifold is of necessity also a topological manifold - not vice versa).

    With this under our belt, we can now define vector spaces on our manifold; this will allow us to define a certain sort of metric, and thus length and angle, and thus do some geometry. Yay!

    But you'll have had enough of new definitions for now.....
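    (A small Python sketch of my own on the differentiability classes just mentioned: $f(x) = x\lvert x\rvert$ is $C^1$ but not $C^2$, one concrete way to see that the class of $C^{k+1}$ functions is strictly smaller than the class of $C^k$ functions.)

    import numpy as np

    def f(x):
        return x * np.abs(x)            # f is C^1: f'(x) = 2|x| exists and is continuous

    def derivative(g, x, h=1e-6):
        return (g(x + h) - g(x - h)) / (2 * h)

    fp = lambda x: derivative(f, x)     # first derivative, approximately 2|x|
    fpp = lambda x: derivative(fp, x)   # second derivative, jumps at x = 0

    print(fp(-0.001), fp(0.001))        # both about 0.002: f' is continuous through 0
    print(fpp(-0.01), fpp(0.01))        # about -2 and +2: f'' is discontinuous at 0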
    Reply With Quote  
     

  14. #13 Re: Manifolds 
    . DrRocket's Avatar
    Join Date
    Aug 2008
    Posts
    5,486
    Quote Originally Posted by Guitarist
    Well, well, I have been remiss! I promised something on manifolds and failed to deliver. [...]

    Generally somewhere in here there is the assumption in the definition that the manifold is paracompact and Hausdorff. Without those assumptions you can get some pathological examples with "branching" behavior that is a problem.
    Reply With Quote  
     

  15. #14  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Ok, I'm good with it now except for one thing... It appears as if each separate element of $(y^1, \ldots, y^n)$ is a function of every element of $(x^1, \ldots, x^n)$. I'm finding myself unable to wrap my head around why exactly this is.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  16. #15  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Chemboy: I don't really see what the difficulty is here. OK, let's try this, with all superfluity stripped away.

    Let our manifold be and let Since a point in takes the form , let .

    Define such that, for we have that .

    Now we know that so we may assume that . Call this subset .

    Then .

    But since uniquely determine each other, likewise we may set and arrive at the following.

    and therefore

    Making the obvious substitutions, do you find this any better?

    DrRocket: It's a matter of taste I guess, but, while I agree completely that failure to insist on the Hausdorff property and paracompactness may result in some pretty beastly manifolds, I would prefer not to include these in a general definition, rather to include them in a list of desiderata.

    Like I say, I believe it's a matter of taste.
    Reply With Quote  
     

  17. #16  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    OK, in spite of an apparent waning of interest in this subject, I will forge ahead (for now, at least), as it starts to get interesting, to put it mildly.

    We have a manifold $M$ and we now want to define some vectors on it. This raises a couple of problems; if we allow ourselves naively to think of a vector as being a mathematical object that has both direction and magnitude, what do we mean by "magnitude" here? Or direction, for that matter? Our manifold has so far offered us no way of specifying these! One says there is no metric (roughly speaking, a metric gives us a way of defining length and angle).

    So, unless our enterprise is doomed from the outset, we will need a rather special kind of vector, the search for which I rather loosely motivate as follows:

    Recall our manifold has a differentiable structure. Recall also from Lesson 1 in school calculus that differentiation measures the instantaneous rate of change of one quantity with respect to another - we were taught to write, say, $\dfrac{dy}{dx}$ for this.

    We were also taught that this strongly implies that $y = f(x)$, so we could equally write $\dfrac{df}{dx} = \dfrac{d}{dx}f$, and that $\dfrac{d}{dx}$ was to be called a "differential operator".

    We were further told that $\dfrac{df}{dx}\Big|_{x=a}$ "measures" the slope of the tangent to the graph of $f$ at the point $a$. This then is going to be our "magnitude", and thus the vectors we seek are actually the slopes of tangents to some curve that is the graph of some function.

    So, let's get a little more formal.

    Let $U$ be a neighbourhood of the point $m \in M$. Denote by $C^\infty(U)$ the set of all smooth functions $f: U \to \mathbb{R}$, the real numbers.

    Now define the operator $\dfrac{\partial}{\partial x^i}\Big|_m$ to be the tangent vector at the point $m$ such that $\dfrac{\partial}{\partial x^i}\Big|_m f = \dfrac{\partial f}{\partial x^i}(m)$ for each $f \in C^\infty(U)$.

    But, since the point $m$ has coordinates $(x^1, \ldots, x^n)$, then for each i = 1, 2,....,n we will arrive at $\dfrac{\partial}{\partial x^i}\Big|_m$ as a tangent vector on $U$, evaluated at the point $m$. Since each $x^i$ is a coordinate function on $U$, i.e. the $x^i$ are independent, I may call each of these a basis vector in a vector space provided only the vector space axioms are satisfied.

    At this stage, in order to simplify my life, I am obliged to introduce an ugly but quite standard notational convention: for any fixed coordinates $x^i$ I will write $\partial_i \equiv \dfrac{\partial}{\partial x^i}$.

    Our vector space at the point $m$ will be defined when, for all $f, g \in C^\infty(U)$ and all scalars $a, b$, the following identities hold;

    a) $\partial_i(af + bg) = a\,\partial_i f + b\,\partial_i g$ - this is linearity;

    b) $\partial_i(fg) = (\partial_i f)g + f(\partial_i g)$. This is the Law of Leibniz, or a derivation (note I have edited out the centre dots, as they were misleading)

    From this we see that any vector in the vector space, call it $T_mM$, of tangent vectors at $m$ can be expressed in the form $v = \alpha^i\partial_i$ (sum over $i$ implied), and that the $\partial_i$ can indeed be thought of as a basis for $T_mM$.
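    (A Python sketch of my own, not from the post: a tangent vector at a point of $\mathbb{R}^2$ realised as a differential operator, with a numerical check of the two defining properties, linearity and the Leibniz rule. The point, components and test functions are arbitrary assumptions.)

    import numpy as np

    p = np.array([1.0, 2.0])          # the point m
    alpha = np.array([3.0, -1.0])     # the components of v in the coordinate basis
    h = 1e-6

    def partial(f, i, x):
        """Numerical partial derivative d f / d x^i at the point x."""
        e = np.zeros_like(x); e[i] = h
        return (f(x + e) - f(x - e)) / (2 * h)

    def v(f):
        """The tangent vector v = alpha^i d/dx^i applied to f, evaluated at p."""
        return sum(alpha[i] * partial(f, i, p) for i in range(len(p)))

    f = lambda x: x[0] ** 2 * x[1]
    g = lambda x: np.sin(x[0]) + x[1]

    # Linearity: v(2f + 5g) = 2 v(f) + 5 v(g)
    print(v(lambda x: 2 * f(x) + 5 * g(x)), 2 * v(f) + 5 * v(g))

    # Leibniz: v(fg) = v(f) g(p) + f(p) v(g)
    print(v(lambda x: f(x) * g(x)), v(f) * g(p) + f(p) * v(g))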
    Reply With Quote  
     

  18. #17  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    There's been no waning interest on my part, I've just been trying to get some things down, which I think I have now. I'm going to do a thorough read-through next chance I get and see if I have questions, then I'll be good to go.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  19. #18  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    OK, Chemboy, just let us know when you're ready. Just now, I am going out for a few drinks (traditional in the UK on Friday night)
    Reply With Quote  
     

  20. #19  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    ok, I'm truly sorry that I'm holding things up and am still this far back, but I'm going to do the right thing and not hope that I understand, but make sure I actually do understand. I don't doubt that this is simple, it's just not clicking with me for some reason. When it does I'm sure it'll be one of those "oh, duh" kinds of things, but til then I need help. I swear I'm really not that unintelligent, and I'm sure I'm better equipped than some to understand this, haha. This is just one bad spot. We all have them sometimes (I think).

    So is an n-tuple, . So if we're in , could we have a ? My other interpretation is that since we have an index () on , that when we say , we're not saying that the entire n-tuple, parentheses and all, is equivalent to , but that each of its elements, , are equivalent to , depending on the value of . In my mind, there's a mixing of these two ideas throughout the work. So I need to get it straight. I know that all probably sounds stupid to someone for whom it's elementary, but that's the way I need to do it.

    Can we look at as each being a function of because of this: since takes in a set of Cartesian coordinates, turns it into the corresponding , then turns into the set of Cartesian coordinates inherited under the other homeomorphism, we have a unique set of Cartesian coordinates under each homeomorphism, and each of its coordinates, or are dependent directly on . SO, since there's a unique , and there's a unique , they're functions of each other...they're just "going through" to get there? I hope that was at least somewhat coherent. It was to me.

    Now, please be patient with me, and take out your red pen and make giant Xs and "no"s and beat it over my head, because I want to understand this and I know I can.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  21. #20  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by Chemboy
    ok, I'm truly sorry that I'm holding things up and am still this far back, but I'm going to do the right thing and not hope that I understand, but make sure I actually do understand
    Don't be sorry, you absolutely are doing the right thing. I am only sorry I have perhaps not explained it very well. It's just that this index notation is so familiar to me, I may have been a bit sloppy in its use; I don't think so, but it's possible.

    So is an n-tuple, . So if we're in , could we have a ?
    Aaah. Hold it right there, this may be part of your problem.

    First, an n-tuple is ordered. So in $\mathbb{R}^2$ we know that the points $(2, 3)$ and $(3, 2)$, say, are clearly different points. So, if we let $k$ take on any and all of the values 1, 2, ...,n, these values are to be thought of as ordinals. That is, when $k = 1$ we mean the first element, when $k = n$ we mean the last, and we may specify the $k$-th as being an arbitrarily chosen one (when this choice is truly arbitrary).

    Second, it follows from this that your suggestion that $x^k$ is a point in $\mathbb{R}^n$ is mistaken - it is a single element: when $k = 1$ it is $x^1$, when $k = 2$ it is $x^2$, and so on

    My other interpretation is that since we have an index () on , that when we say , we're not saying that the entire n-tuple, parentheses and all, is equivalent to , but that EACH of its elements, , are equivalent to {SOME} , depending on the value of .
    Yes this is exactly right, but notice I have taken the liberty of editing your quote; that's how you should think of it. One says that "k runs over 1, 2, 3, ....,n"

    Can we look at as each being a function of because of this: since takes in a set of Cartesian coordinates, turns it into the corresponding , then turns into the set of Cartesian coordinates inherited under the other homeomorphism, we have a unique set of Cartesian coordinates under each homeomorphism, and each of its coordinates, or are dependent directly on . SO, since there's a unique , and there's a unique , they're functions of each other...they're just "going through" to get there? I hope that was at least somewhat coherent. It was to me. :)
    Yes it was coherent, and I applaud your attention to detail. I have to hold my hands up and admit (in this case) to being very sloppy - mitigation follows below.

    You are right. We have a homeomorphism from some open set in to some open (image) set in with Cartesian coordinates. Then, since is, by definition, invertible, we will have that, for the point, say, that there must be a point such that .

    Then, iff , the homeomorphism sends this point back to , there must be an image point, say, , so that the composite .

    So, I sort of skated over these details, and simply asserted that and referred to 2 "choices" of coordinate functions for this point (so long as ) which are related by the transition formulae I gave.

    I beg your pardon for this sloppiness, and crave leniency on the grounds that, by the definition of our manifold, is indistinguishable from likewise and
    Reply With Quote  
     

  22. #21  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    ok, I'm definitely better now. When we say , aren't we treating the as an tuple? And the same for the in ?

    EDIT: I've had a new thought... we're not treating the as an tuple, but since we're not specifying a value for to take it runs over the values through , and so becomes a function of .

    ___

    And from what you said, I guess there's really a sense of one-ness between our manifold and the it's homeomorphic to? So you said and ...so the sets of Cartesian coordinates are ...it's just that now that we've given coordinates we can do geometry and such on the manifold, or something like that?
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  23. #22  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by Chemboy
    EDIT: I've had a new thought... we're not treating the as an tuple, but since we're not specifying a value for to take it runs over the values through , and so becomes a function of .
    Well, I thought you had it, so I rather wish you hadn't had your new thought!

    First - we are treating the $(y^1, \ldots, y^n)$ as a tuple. That was our definition: $\psi(m) = (y^1, \ldots, y^n)$.

    Second, you have it exactly backwards - you seem to be implying that the tuple $(y^1, \ldots, y^n)$ is a function of the $x^i$ separately. This is not so.

    Let me remind you that, if I say "y is a function of x", I mean precisely that $y = f(x)$, right? So if I say that "a is a function of (x, y)" I mean precisely that $a = f(x, y)$. You appear to be implying that $a = f(x)$ and $a = f(y)$ separately, which is clearly nonsense.

    It's possible you mis-spoke, or that I misunderstood your meaning, but, since I have run out of simple examples, let me say again, as clearly as I can: given the tuple $(x^1, \ldots, x^n)$ and the tuple $(y^1, \ldots, y^n)$, then, if there is to be a transition function between them, we must have that each $y^k$ is individually a function of the entire tuple $(x^1, \ldots, x^n)$, etc..

    We could write $y^k = f^k(x^1, \ldots, x^n)$ for this condition, but for the reasons I gave, we may just as well use $y^k = y^k(x^1, \ldots, x^n)$
    ___

    I guess there's really a sense of one-ness between our manifold and the it's homeomorphic to? So you said and ...so the sets of Cartesian coordinates are ...it's just that now that we've given coordinates we can do geometry and such on the manifold, or something like that?
    See, I thought, on reading this in your pre-edited post, that you had it, barring a use of language which I forgave you due to inexperience in this area.

    Anyway, lemme know when you are ready to move on.....
    Reply With Quote  
     

  24. #23  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    ok, I'm good to go. And yes, I'm sure.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  25. #24  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Well OK, good. I am assuming you have read and understood my post of October 2nd

    Since you were having trouble with my index notation, I am going to explain this carefully to ensure I am as clear as possible

    In general terms, we will write a 2-vector as $v = a^1e_1 + a^2e_2$, where the "a" are (not necessarily distinct) scalars, and the "e" are called basis vectors. You may take this as a definition.

    The general form of this equality is accordingly written $v = \sum_i a^ie_i$, which means that, whenever the index on "a" matches the index on "e", we multiply them and then add all these products together.

    Alternatively, we can multiply all e's by all a's, but only include those with matching indices in the sum - this, strictly speaking is what the notation is saying.

    The vectors we are talking about in the present context are called tangent vectors, which I wrote, according to the above convention, as $v = \sum_i \alpha^i\dfrac{\partial}{\partial x^i}$, where again the alphas are scalar, and $x^i$ is the $i$-th element in the coordinate tuple defined on some neighbourhood $U$. Note the elementary fact that no basis vector can be written in this form; this is called "linear independence".

    OK so far? Notice that the basis vectors are dependent on the coordinates, and are more fully referred to as "coordinate basis vectors". We can think of these as being the tangents to the coordinate "lines" that pass through the point $m$.

    The ensemble of all vectors at $m$ is called a tangent vector space and is written $T_mM$.

    Now suppose, as before, that $m \in U \cap V$, and that $\{x^i\}$ are coordinates for $U$ and $\{y^j\}$ are coordinates for $V$. Then, as before, we can think of there being two different sets of coordinate lines passing through $m$.

    The following is crucial; $\dfrac{\partial}{\partial x^i} \neq \dfrac{\partial}{\partial y^j}$ and $\alpha^i \neq \beta^j$ in general (though equality may occur accidentally).

    But we will insist that it must always be the case that, for $v = \sum_i \alpha^i\dfrac{\partial}{\partial x^i}$ defined on $U$ there is some set of scalars $\beta^j$ such that $v = \sum_j \beta^j\dfrac{\partial}{\partial y^j}$, where the latter are defined on $V$.

    So, from this equality, we must assume there is a transformation taking the $\alpha^i$ and the $\dfrac{\partial}{\partial x^i}$ to the $\beta^j$ and the $\dfrac{\partial}{\partial y^j}$, where the latter are defined on $V$. This will be our next task, but first make sure you understand what I'm talking about (incidentally, I wouldn't be doing this if I were unwilling to help you do so)
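    (A Python sketch of my own, not from the post: the summation convention $a^ie_i$ as a numpy computation, and the same geometric vector expressed in two different bases. The particular bases chosen are arbitrary assumptions.)

    import numpy as np

    e = np.array([[1.0, 0.0], [0.0, 1.0]])        # basis vectors e_1, e_2 (rows)
    a = np.array([3.0, 4.0])                      # components a^1, a^2

    v = np.einsum('i,ij->j', a, e)                # v = a^i e_i, summed over i
    print(v)                                      # [3. 4.]

    # A second basis for the same space (here: the standard basis rotated by 90 degrees)
    f = np.array([[0.0, 1.0], [-1.0, 0.0]])       # basis vectors f_1, f_2
    b = np.linalg.solve(f.T, v)                   # components b^j with v = b^j f_j
    print(np.einsum('j,jk->k', b, f))             # [3. 4.] -- same vector, new components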
    Reply With Quote  
     

  26. #25  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Quote Originally Posted by Guitarist
    Let $U$ be a neighbourhood of the point $m \in M$. Denote by $C^\infty(U)$ the set of all smooth functions $f: U \to \mathbb{R}$, the real numbers.
    Where are these functions from? Are they our homeomorphisms?

    Other than that, I'm fine...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  27. #26  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by Chemboy
    Where are these functions {$f: U \to \mathbb{R}$} from? Are they our homeomorphisms?
    Usually not. Recall our homeomorphisms are mappings ("isomorphisms") $\varphi: U \to \mathbb{R}^n$. The dimension $n$ of the codomain determines the dimension of $M$

    The mappings in my quote will be homeomorphisms only when [insert answer here; try to figure what would be the consequence for our tangent vectors in this case].

    Anyway, let me proceed with the utmost caution, as it starts to get a little tricky, at least for me (I tell you frankly I always hated calculus, consequently I am rather poor at it. Holler if I make mistakes).

    I said this
    Quote Originally Posted by Guitarist
    $\dfrac{\partial}{\partial x^i} \neq \dfrac{\partial}{\partial y^j}$ and $\alpha^i \neq \beta^j$
    I will invite you to stare at these inequalities for as long as it takes for you to see they imply;

    a) for any fixed basis, a vector is uniquely determined by the set of multiplicative scalars that act upon it (these scalars are called the components of the vector, btw), and

    b) for any fixed set of multiplicative scalars, a vector is uniquely determined by the basis they act upon.

    Having done that (hopefully), stare at this
    Quote Originally Posted by Guitarist
    $\sum_i \alpha^i\dfrac{\partial}{\partial x^i} = \sum_j \beta^j\dfrac{\partial}{\partial y^j}$.
    This I do not expect you to see as obvious. So I'll tell you.

    This equality implies that the vectors, tangent to a manifold at a point, must be regarded as "real" geometric objects, and one can argue that, given some fixed $v \in T_mM$, this choice of vector uniquely determines the choice of components (scalars) relative to some choice of basis.

    These are, in some sense, opposing points of view, and I propose to adopt whichever suits my purpose at the moment. But I will pause to make sure you're still with me.

    Do ask, though, as I (for one) find this part tough going!
    Reply With Quote  
     

  28. #27  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Quote Originally Posted by Guitarist
    The mappings in my quote will be homeomorphisms only when [insert answer here; try to figure what would be the consequence for our tangent vectors in this case].
    The mappings will be homeomorphisms only when they're on a 1-manifold, since they're mapping to $\mathbb{R}$? And this would cause our tangent vectors to be scalars.

    I really think I did fine on that last part. I struggled so much with the simple stuff that maybe ironically I'll find the complicated things easy.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  29. #28  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by Chemboy
    The mappings will be homeomorphisms only when they're on a
    -manifold, since they're mapping to ?
    Yes, good.
    And this would cause our tangent vectors to be scalars.
    Aaah, yes and no. It isn't quite the answer I was expecting; the tangent vectors and garden-variety vectors coincide on $\mathbb{R}^n$.

    But it leads nicely to the next point I wanted to make. From the point of pure, some might say excessive, pedantry, it is a matter of mental hygiene to distinguish between the real numbers as a set, as a field, as a vector space, as a topological space, as a manifold,.....

    Let's use the following notation, and consider as a field element, and as a vector space element. An obvious choice of basis for the vector space is .

    The axioms on insist that which implies .

    WOW, lookame!! No hands!!

    In this context, given the coincidence between tangent spaces and vanilla spaces I referred to above, and taking the coordinates in the neighbourhood of (the obvious manifold) to be , it is easy to suppose that .

    The generalization, however, is startling. Recall we defined a tangent vector on a manifold as $v = \sum_i \alpha^i\dfrac{\partial}{\partial x^i}$, where I am using the same notation as before.

    From the above I will claim that the $\alpha^i$ and $\beta^j$ are uniquely determined by $\alpha^i = v(x^i)$ and $\beta^j = v(y^j)$, where $x^i, y^j$ are the $i$-th and $j$-th elements in their respective tuples, with the reasonable assumption that $\dfrac{\partial x^i}{\partial x^j} = \delta^i_j$.

    This did my head in when I first saw it - try plugging this equality into the one above it! You may well see it instantly, but it took me forever. Good luck!

    Gotta run now. Later.
    Reply With Quote  
     

  30. #29  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    ok, I get that the tangent vectors aren't scalars, should've realized that.

    Still working on the thing. Haven't put a ton of thought into it yet, so I'll keep working on it. Any chance you could give me whatever helped you get it?

    Also, I may be on again tonight, but after that I'll be away and won't have internet access until Monday night.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  31. #30  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Hi Chemboy, good weekend? Glorious weather here, less so now.

    As for hints; I used as an intuitive guide a result from elementary linear algebra that goes like this.

    Suppose $V$ is a vector space with an inner product. Let the set $\{e_i\}$ denote the basis vectors, and take them to be orthonormal. Then any $v \in V$ can be expressed as $v = \sum_i a^ie_i$, where the $a^i$ are scalar.

    Now the inner product of an arbitrary vector with any basis vector will be denoted by $\langle v, e_j\rangle$; since inner products are bilinear, $\langle v, e_j\rangle = \sum_i a^i\langle e_i, e_j\rangle = \sum_i a^i\delta_{ij} = a^j$, where the Kronecker delta $\delta_{ij}$ is 1 when i = j, and zero otherwise.

    Obviously, in the present case we don't have an inner product, but we can see how it might work.

    In particular note that, if $v = \sum_i a^ie_i$ then $a^j = \langle v, e_j\rangle$.

    This is NOT a proof, just a guide. The proper proof is tedious, in fact I freely confess I don't really understand it. Let's just do what most folk do - swallow it whole and move on. In due course we will see that it must be so.
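    (A tiny Python sketch of my own of the hint above: with an orthonormal basis, the inner product with $e_j$ picks out the $j$-th component, because $\langle e_i, e_j\rangle$ is the Kronecker delta. The particular basis and components are arbitrary assumptions.)

    import numpy as np

    e = np.eye(3)                       # e_1, e_2, e_3: an orthonormal basis of R^3
    a = np.array([2.0, -1.0, 5.0])      # components a^i
    v = a @ e                           # v = a^i e_i

    print([v @ e[j] for j in range(3)])          # [2.0, -1.0, 5.0] -- recovers the a^i
    print(np.allclose(e @ e.T, np.eye(3)))       # True: <e_i, e_j> = delta_ij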
    Reply With Quote  
     

  32. #31  
    . DrRocket's Avatar
    Join Date
    Aug 2008
    Posts
    5,486
    Quote Originally Posted by Chemboy
    ok, I get that the tangent vectors aren't scalars, should've realized that.

    Still working on the thing. Haven't put a ton of thought into it yet, so I'll keep working on it. Any chance you could give me whatever helped you get it?

    Also, I may be on again tonight, but after that I'll be away and won't have internet access until Monday night.
    While admittedly I haven't been following the notational intricacies and foibles of specific coordinate patch systems, you might think of it this way.

    Tangent vectors are differential operators, and correspond to the notion of directional derivatives in Euclidean space. The correspondence is roughly that tangent vectors provide the direction along which to take the directional derivative. A common approach in the case of infinitely differentiable structures is to define tangent vectors as derivations on the sheaf of germs of infinitely differentiable functions at a point. You then show that the dimension is n and that things proceed as you would intuitively expect (see Frank Warner's book Introduction to Differentiable Manifolds or Hu's book Differential Manifolds for instance).
    Reply With Quote  
     

  33. #32  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by DrRocket
    A common approach in the case of infinitely differentiable structures is to define tangent vectors as derivations on the sheaf of germs of infinitely differentiable functions at a point.
    Well yes, I had planned to say something along these lines at some point soon. Plus I have knowingly skated over some subtleties to which I had intended to return, once we had the necessary machinery in place.

    But it seems that Chemboy (who after all was the main target of my ramblings - no-one needs to know why) has gone to ground.

    In fact, not for the first time on this forum, I am losing heart.

    Correction - I have lost heart
    Reply With Quote  
     

  34. #33  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Quote Originally Posted by Guitarist
    Quote Originally Posted by DrRocket
    A common approach in the case of infinitely differentiable structures is to define tangent vectors as derivations on the sheaf of germs of infinitely differentiable functions at a point.
    Well yes, I had planned to say something along these lines at some point soon. Plus I have knowingly skated over some subtleties to which I had intended to return, once we had the necessary machinery in place.

    But it seems that Chemboy (who after all was the main target of my ramblings - no-one needs to know why) has gone to ground.

    In fact, not for the first time on this forum, I am losing heart.

    Correction - I have lost heart
    If only you waited about half an hour longer to check the thread! I'm still here, but I've been really really (emphasis on the really) busy with school work this week and haven't been able to devote time to this. I'm back though and I truly hope you'll continue.

    Quote Originally Posted by Guitarist
    and taking the coordinates in the neighbourhood of (the obvious manifold) to be , it is easy to suppose that
    I'm confused on this part... I don't understand the 'taking the coordinates to be ' part.

    Quote Originally Posted by Guitarist
    Let's just do what most folk do - swallow it whole and move on. In due course we will see that it must be so.
    I'm going to go with that option for now, but I'll keep working on it.

    Other than that one thing I'm good... And I apologize for not dropping a quick post to let you know I was busy...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  35. #34  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    OK, pal, I forgive you - just this once, mind!

    As to the bit that confused you - I no longer remember what I was trying to say. I think it might have been something like this:

    Fix a point $a \in \mathbb{R}$. Then relative to the zero of $\mathbb{R}$, $a$ is its own coordinate, obviously.

    But, since we are talking manifolds here, we are entitled - obliged, even - to think locally. So now choose a local coordinate system such that the point is the coordinate. Then we will have the vector space such that, for any choice of basis vector, say the vector will be defined by ; this is just (the centre dot denotes arithmetic multiplication, btw).

    Hence , that is . Noticing that this is also given by , where the last zero is the local coordinate for defined above, we arrive at

    Now choose another local coordinate system such that the point is the coordinate . By the above I will have that then where now the last 1 is the local coordinate representation of above, which again implies .

    I grant you, this is hard to follow, given my rather stupid set-up, but like I said, it is supposed to show that, for any set of basis vectors . where the local coordinates, then if , then might, just might, be true.

    I invite you to swallow it whole, and wait for the necessity.
    Reply With Quote  
     

  36. #35  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    I kind of get it, but yeah, I'll accept it for now so we can move on...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  37. #36  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Anyway, let me make a few general remarks before proceeding

    We supposed there are open sets $U, V \subseteq M$, to each of which we could assign a coordinate system, which we called $\{x^i\}, \{y^j\}$, respectively.

    We showed there are transition functions $y^j = y^j(x^1, \ldots, x^n)$ and $x^i = x^i(y^1, \ldots, y^n)$.

    I no longer remember if I mentioned it, but these functions are only valid on the intersection $U \cap V$. We then went on to define a space $T_mM$ of tangent vectors at the point $m \in U \cap V$.

    OK, now a bit of necessary rambling. A map between vector spaces is called a transformation, even when it is a map from a vector space to itself. It is easy to convince oneself of the following in $\mathbb{R}^2$.

    For any vector $v \in \mathbb{R}^2$, the consequence of rotating, say, the axes through the angle $\theta$ is exactly the same as rotating $v$ itself through the angle $-\theta$, and either rotation merely brings $v$ onto new components, i.e. transformations act on vector components: period. And if, like me, you are using pencil and paper to convince yourself of this, you can equally well convince yourself that, for any pair of vectors in $\mathbb{R}^2$, the angle between them is unchanged by either the basis transformation (one way) or the (non-basis) vector transformation the other way.

    Transformations that leave length and angle unaltered are called isometries. This was my reason for saying (I did, didn't I?) that our tangent vector is a "real" geometric object. And, specifically, where the origin is preserved, they are called orthogonal transformations, and rotations of the sort I just described are orthogonal transformations in this sense.

    So. We are going to discuss vector transformations in the context of the theory we have been developing, that is, we want to know how some arbitrary vector $v \in T_mM$ transforms as we pass from the coordinates $\{x^i\}$ to the coordinates $\{y^j\}$.

    BUT. Before proceeding, I need to know how much linear algebra you have learned. Specifically, what, if anything, do you know of operator algebra? Don't be shy, I am willing to enter into a short digression on the subject if needed.
    Reply With Quote  
     

  38. #37  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    I'd say I'm pretty good with basic linear, but I know nothing of operator algebra. Certainly willing to learn though.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  39. #38  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Ya, OK. I mis-spoke myself. Operators do have an algebra, and a very interesting one at that, but I really meant operator theory. Ah well.....

    We will deal with vanilla vector spaces first.

    A linear transformation, or linear operator, is simply a vector valued mapping on some vector space that respects (vector) addition and (scalar) multiplication.

    Consider first the transformation $A: V \to V$, written $Av = w$. (Note: for some reason, it is conventional not to include the argument in parentheses).

    I stress again that, for fixed basis, say $\{e_1, e_2\}$, transformations act only on vector components. Let's see that in grisly detail.

    Suppose the simplest case: $\dim V = 2$. Then if, say, $w = Av$ with $v = v^1e_1 + v^2e_2$ and $w = w^1e_1 + w^2e_2$, the coefficients will be related as

    $w^1 = a\,v^1 + b\,v^2$
    $w^2 = c\,v^1 + d\,v^2$

    Which immediately leads to the conclusion that the transformation $A$ is simply the matrix of multiplicative scalars
    $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$. Which is no more than reiterating the classical result that matrices and transformations are in one-to-one correspondence when it comes to vector spaces

    I'm short of time just now - are you OK so far?
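    (A Python sketch of my own, not from the post: with the basis fixed, a linear transformation acts on the component column of a vector exactly as matrix multiplication. The particular matrix and vectors are arbitrary assumptions.)

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])          # the matrix of the transformation in this basis
    v = np.array([1.0, 4.0])            # components of v

    w = A @ v                           # components of the transformed vector
    print(w)                            # [6. 12.]

    # Linearity check: A(2u + 3v) = 2 Au + 3 Av
    u = np.array([5.0, -1.0])
    print(np.allclose(A @ (2*u + 3*v), 2*(A @ u) + 3*(A @ v)))   # True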
    Reply With Quote  
     

  40. #39  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    yep, I'm fine so far. I don't know anything of operator theory either, but again, certainly willing to learn.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  41. #40  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by Guitarist
    $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$.
    OK, we must now do a little bit of notational housework. Since any linear transformation on a vector space can be represented as a matrix of the quoted form, let's agree the following:

    By writing a transformation as $A^i_j$ we will agree that the upper index denotes the column number, and the lower index the row number, each index running over the dimension of our vector space. Thus in
    Quote Originally Posted by me
    $w^j = \sum_i A^i_j v^i$
    $A^i_j v^i$ is the $j$-th transformed component of the $i$-th original component in the summand. OK with you?

    Now recall we are, in this first instance only talking about orthogonal transformations. I gave a sort of kiddies definition of such transformations, and we can now be a little more precise.

    Since an orthogonal transformation can be represented by a matrix we will call such a matrix an orthogonal matrix, which is defined by $AA^t = A^tA = I$, where the superscripted "t" denotes transpose (interchange rows and columns) and $I$ is the identity matrix.

    I gave as a classic case of such a transformation a rotation of Cartesian coordinates about the origin. This is expressed as $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$.

    I will now invite you to satisfy yourself (and me!!) that this is indeed an orthogonal transformation/matrix as defined and has determinant $\pm 1$ (this last has 2 ways to approach it - computational and logical: try them both. BIG hint: det(AB) = (detA)(detB))
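    (A Python sketch of my own of the exercise just set: check numerically that a rotation matrix is orthogonal and find its determinant. The angle chosen is an arbitrary assumption.)

    import numpy as np

    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    print(np.allclose(R.T @ R, np.eye(2)))   # True: R^t R = I, so R is orthogonal
    print(np.linalg.det(R))                  # 1.0 (up to rounding)

    # The "logical" route: det(R^t R) = det(R)^2 = det(I) = 1, hence det(R) = +/- 1.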
    Reply With Quote  
     

  42. #41  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838






    Is that satisfactory...? I actually worked it all out on paper, this is the shortened version for the sake of not texing in a bunch of matrices and stuff...

    is the -th transformed component of the -th original component in the summand.
    Not quite getting this...I'm just having a hard time visualizing it.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  43. #42  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by Chemboy
    Oh naughty Chemboy, naughty boy. Your sign is wrong!
    This is good, though: always.

    Yes, good
    But you forgot the "TraLa" bit.

    Look, it's fun. As you say, But, as is an orthogonal matrix/transformation, we will have that




    Is that satisfactory...?
    Yes it is, in the sense you tried. Good effort, even though the detail was not quite as I would like it. But - we learn by doing.....
    Reply With Quote  
     

  44. #43  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Your sign is wrong!
    ah, yeah. Simple mistake, but a bad one, I know.

    I understand the part with upon seeing that.

    Yeah...I honestly haven't had much if any exposure to (as I think serpicojr once put it) "mathematical rigor," but it's probably something one develops as they delve deeper into mathematics, so hopefully I'll pick it up as we go along.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  45. #44  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    I might be online tomorrow afternoon, but I'll be internet-less for tomorrow night and Saturday night. I promise I'll be back right after that though.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  46. #45  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Then give me a beep when you're ready to proceed.

    We're almost done, btw (unless you want to swim into deeper waters!)

    Cheers -b-
    Reply With Quote  
     

  47. #46  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    I have returned and am ready to go. Thinking I might want to swim into deeper waters...but I'll see where you go with whatever you have left.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  48. #47  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    OK, good. Actually, all the hard work is now behind us (well for now....).

    Recall we are trying to extract a transformation rule for a single tangent vector $v$ when that vector is defined to be an element in the tangent space $T_mM$ at the point $m \in U \cap V$, and where the local coordinates $\{x^i\}$ apply to $U$ and the $\{y^j\}$ apply to $V$.

    Recall also that the transition function $y^j = y^j(x^1, \ldots, x^n)$ is (a) a continuous function on a topological manifold, (b) is defined only on the intersection $U \cap V$, and (c) we might reasonably expect this topological function to induce a vector transformation when the coordinate basis vectors are $\dfrac{\partial}{\partial x^i}$ and $\dfrac{\partial}{\partial y^j}$.

    Let's start with the simplest imaginable case, a strictly Euclidean space with Cartesian coordinates. Suppose that $\mathbf{r}$, with components $x^i$, is a standard position vector. Then the orthogonal transformation will simply be a mapping $x^i \mapsto y^i$, let's say $y^i = \sum_j a_{ij}x^j$, or $y = Ax$ for a constant orthogonal matrix $A$ with entries $a_{ij}$.

    From this we deduce that each $y^i$ is a function of the $x^j$. Using this to inform us that the $x^j$ may be considered as the independent variables for the transition function, we may differentiate the above w.r.t. the $x^j$ and find that

    $\dfrac{\partial y^i}{\partial x^j} = a_{ij}$   *

    We also said that, for fixed basis, vector transformations act only on vector components. Without further ado, we may simply deduce that, where $v$ has components $v^i$ in the $x$-coordinates and components $v'^i$ in the $y$-coordinates, this implies

    $v'^i = \sum_j a_{ij}v^j$   **

    Subbing * into ** gives

    $v'^i = \sum_j \dfrac{\partial y^i}{\partial x^j}\,v^j$.

    This is the transformation law for a "Cartesian" or "affine" vector. This is important, so ask questions if you need to, but remember, this simple(ish) extraction was only possible in the case of orthogonal transformations on a Euclidean space with Cartesian coordinates.

    Astonishingly, we will see that it holds generally, but that's a tale for another time
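    (A Python sketch of my own of the law just derived: for a linear change of Cartesian coordinates $y = Ax$, the Jacobian $\partial y^i/\partial x^j$ is just $A$, and vector components transform as $v'^i = (\partial y^i/\partial x^j)\,v^j$. The rotation, point and vector are arbitrary assumptions.)

    import numpy as np

    theta = 0.3
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])      # orthogonal coordinate change

    def y(x):                     # the transition function y = y(x)
        return A @ x

    def jacobian(f, x, h=1e-6):
        """Numerical Jacobian J[i, j] = d f^i / d x^j."""
        n = len(x)
        J = np.zeros((n, n))
        for j in range(n):
            e = np.zeros(n); e[j] = h
            J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
        return J

    x0 = np.array([1.0, 2.0])
    v = np.array([0.5, -1.5])                 # components of a vector in x-coordinates

    J = jacobian(y, x0)                       # numerically recovers A
    v_prime = J @ v                           # v'^i = (dy^i/dx^j) v^j
    print(np.allclose(J, A), v_prime)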
    Reply With Quote  
     

  49. #48  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    I think I'm good...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  50. #49  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Umm.... I wonder. Let's hope so, as I tend to get grumpy if I feel I'm being strung along. Hey! Let's say you are OK with all the preceding.

    We now go to the general case (i.e. transformations that are not necessarily orthogonal on Cartesian coordinates).

    Recall we defined a tangent vector space $T_mM$ at $m \in M$, and insisted that, for $m \in U \cap V$ and $v \in T_mM$ we will have that

    $v = \sum_i \alpha^i\dfrac{\partial}{\partial x^i} = \sum_j \beta^j\dfrac{\partial}{\partial y^j}$, where the $\{x^i\}, \{y^j\}$ are the local coordinate systems on $U, V$, respectively.

    I also asserted that $\alpha^i = v(x^i)$ (btw, I asked about this on a math forum, and was told the "proof" I offered earlier is OK.).

    By the assertion that the $\alpha^i$ and $\beta^j$ are uniquely determined by this relation, I may express the above as $v = \sum_i v(x^i)\dfrac{\partial}{\partial x^i} = \sum_j v(y^j)\dfrac{\partial}{\partial y^j}$, which is exactly as we saw for the trivial case.

    So that

    $\beta^j = v(y^j) = \sum_i \alpha^i\dfrac{\partial y^j}{\partial x^i}$ (chain rule!)

    And since we want this to be true for all $v \in T_mM$ we extract the laws

    $\beta^j = \sum_i \dfrac{\partial y^j}{\partial x^i}\,\alpha^i$ and $\dfrac{\partial}{\partial y^j} = \sum_i \dfrac{\partial x^i}{\partial y^j}\,\dfrac{\partial}{\partial x^i}$

    for the invertible basis vector transformations on $T_mM$
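    (A Python sketch of my own of the two laws just extracted: components transform with the Jacobian $\partial y/\partial x$ while basis vectors transform with the inverse Jacobian $\partial x/\partial y$, so the vector itself is unchanged. The specific Jacobian matrix and components are arbitrary assumptions.)

    import numpy as np

    J = np.array([[2.0, 1.0],
                  [1.0, 3.0]])             # dy^j/dx^i at the point (any invertible matrix)
    J_inv = np.linalg.inv(J)               # dx^i/dy^j

    alpha = np.array([1.0, 4.0])           # components in the x-coordinates
    E_x = np.eye(2)                        # stand-in arrays (columns) for the basis d/dx^i

    beta = J @ alpha                       # beta^j = (dy^j/dx^i) alpha^i
    E_y = E_x @ J_inv                      # d/dy^j = (dx^i/dy^j) d/dx^i (columns transform)

    # alpha^i d/dx^i  ==  beta^j d/dy^j : the same geometric vector
    print(np.allclose(E_x @ alpha, E_y @ beta))    # True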
    Reply With Quote  
     

  51. #50  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Quote Originally Posted by Guitarist
    I would have that the second term of the sum would be ...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  52. #51  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by Chemboy
    Quote Originally Posted by Guitarist
    I would have that the second term of the sum would be ...
    Whaaat? The second term on the RHS is the basis vector being transformed by the first term to give the basis vector on the LHS. Check your understanding of the chain rule!!

    OK, let me say a few words about something that should be very familiar (if you have done linear algebra), and then introduce a concept you likely won't have encountered.

    To any vector space I can associate a dual space which is the space of all linear maps from to its underlying field: , say. These maps are called linear "functionals". Let's have a look at one:

    for any and some , I will always have that . In the case that is an inner product space, I may also have that, if , then there is some element such that

    Actually we are not going to insist our tangent spaces are inner product spaces, but we will insist that the basis of the vector space is given by where the set is a basis for the vector space .

    It should be obvious that this describes an isomorphism between these 2 vector spaces (recall that any 2 finite-dimensional vector spaces of equal dimension over the same field are isomorphic), but that, since the choice of basis for is entirely arbitrary, then this isomorphism is not "natural", in the sense that a different choice of basis will induce a different isomorphism.

    Elements in are called "dual vectors" or "linear functionals".

    So, we have a tangent vector space , and we can now define its dual space . Note this is defined at the same point, which is crucial.

    Then, for each I may have that , as before. Note this crucial point: if , the expression has no meaning.

    OK. In order to find a coordinate basis set for as above, I will have to introduce you to something you probably haven't encountered before. In its full generality it is a deep and interesting subject, but we will scratch the surface as follows.

    Given a vector space of dimension , a -form is a choice of (number) vectors which, taken together may be considered as a single "vector". That is not well expressed, but you merely need to note this:

    a 1-form, by this definition, is simply a vector as we know it, and the axioms of differential forms, as they are called, indicate that they are elements in . I don't see the need to explain this either, but this is what is important:

    a zero-form is, by definition, a field-valued function. There is an operation on p-forms called the "exterior derivative" that sends a p-form to a (p+1)-form. It follows that, if is a zero-form, then is a 1-form.

    Digest this if you can!
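
    If it makes the exterior derivative of a 0-form less mysterious, here is all it amounts to computationally (a minimal sympy sketch; the function f and the coordinates are just ones I made up):

        import sympy as sp

        x, y = sp.symbols('x y')
        f = x**2 * sp.sin(y)                   # a 0-form: just a smooth scalar-valued function
        df = [sp.diff(f, v) for v in (x, y)]   # the components of the 1-form df on the coordinate basis
        print(df)                              # [2*x*sin(y), x**2*cos(y)]

    So "d of a 0-form" just packages up all the first partial derivatives of f into a single covector at each point; nothing scarier than that is intended here.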
    Reply With Quote  
     

  53. #52  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    What exactly does represent? Want to make sure I'm clear on that...

    So these -forms... Is there a correlation between the dimension of the vector space and ?

    is a space of linear functionals. These linear functionals map from our vector space to its underlying field. So...they take a vector and...give you what exactly? A scalar, but...where's it come from?

    So our -forms are vectors that are linear functionals since they're elements of , correct?

    If a -form is a field-valued function, is a -form then a vector-valued function...? Maybe I've gone wrong there...since these -forms are linear functionals and thus mapping to ...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  54. #53  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    First, I strongly recommend you don't try to think about p-forms in general - we can talk about them elsewhere, if you want. All you need to know is there is a 0-form and a 1-form . These are just fancy names for functions and linear functionals, respectively.
    Quote Originally Posted by Chemboy
    What exactly does represent?
    I'm afraid I don't understand your question. We saw on Oct 27 that these are the components of an affine (orthogonal) transformation , which by my post of Oct 28 seems to apply generally.

    is a space of linear functionals. These linear functionals map from our vector space to its underlying field. So...they take a vector and...give you what exactly? A scalar, but...where's it come from?
    I will resist the temptation to be cute and say they come from the scalar field, since I was going to go on and explain anyway..

    So, we let be an arbitrary 0-form, and define the 1-form by requiring that - a scalar - for all

    But since each is a coordinate function - a 0-form - I may rewrite the above as as before.

    So let's say that, for the coordinate basis vectors the coefficients . Then I will have that when i = j, and zero otherwise.

    Or, , which defines the to be a basis for .

    So, here's a challenge: using the obvious fact that , try to explain to me why the above is a necessary and sufficient condition for this to be a basis for . For extra credit, argue the following: the condition that the scalar induced by the action of a single basis vector in on an arbitrary vector in will be a component of that vector is a unique property of the basis vectors (I do not expect you to get that last bit, btw, but you might at least give it some thought......)
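
    To make the challenge a little less abstract, here is a numerical picture of what the dual basis does (a toy numpy sketch of my own; the basis is deliberately not orthonormal):

        import numpy as np

        B = np.array([[1.0, 1.0],
                      [0.0, 2.0]])         # columns are the basis vectors e_1, e_2
        E = np.linalg.inv(B)               # rows are the dual basis covectors
        print(np.round(E @ B, 10))         # pairing of dual basis with basis: the identity, i.e. the Kronecker delta
        v = 3.0 * B[:, 0] - 2.0 * B[:, 1]  # a vector whose components on this basis are (3, -2)
        print(E @ v)                       # acting with the dual basis returns exactly those components: [ 3. -2.]

    Notice that no inner product is used anywhere: the dual basis is defined against the chosen basis, not against lengths or angles.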
    Reply With Quote  
     

  55. #54  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Let me quickly apologize for my "challenge". I think it extremely unlikely that I could rise to it, given the information provided. I see now I explained the whole thing rather poorly - lemme see if I can dream up another way.......
    Reply With Quote  
     

  56. #55  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    ok... I was going to give it a good try at least. Mainly just wanted to let you know that I'm still with you but I've been busy. Though now that I look at it...that was only 2 days ago. Anyways, I'll get back to this tomorrow night if you have anything new up...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  57. #56  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Quote Originally Posted by Guitarist
    So, here's a challenge: using the obvious fact that , try to explain to me why the above is a necessary and sufficient condition for this to be a basis for . For extra credit, argue the following: the condition that the scalar induced by the action of a single basis vector in on an arbitrary vector in will be a component of that vector is a unique property of the basis vectors (I do not expect you to get that last bit, btw, but you might at least give it some thought......)
    It makes sense...I'm just not sure of how to explain it any further...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  58. #57  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    I just thought of another way to do this, I think.

    Forget all about "forms" and exterior derivatives - this is good advice, since some authors use the word 1-form to refer to a field rather than a vector, which is confusing.

    Right. This will be long. There are lots of equations, none of which is particularly hard, but you need to keep your wits about you, so take it slow.

    Suppose that are smooth manifolds, and let be a smooth map such that, for some point . I do not require this to be a homeomorphism.

    Now, if is a tangent space at , what can we say about the tangent space ? Since only understands points, and has never heard of vectors, we may not have that ; but this mapping, by virtue of must have something to do with .

    So I introduce the related map , where I attach no particular meaning to the - it's just a distinguishing label at this time.

    Obviously, if is a basis for , then I may have as a basis for .

    Now consider the real function . Recall that any is trivially an n-dimensional manifold (by our original definition of the local homeomorphisms). So, to the point there corresponds a point , and a 1-dimensional "tangent space" .

    As before, define the map . BUT.... is also a 1-dimensional vector space, so and , that is . You can extract the basis exactly as before.

    This can only mean that, if then by the definition of a basis. (*)

    We can now discover what is. Recall we had that, by definition, . (**)

    Replacing the arbitrary vector by the basis vectors I find from (*) and (**) that

    . That is, (***)

    Putting (*), (**) and (***) together I conclude that


    . Oops! This is edited a to f

    Which is just the standard way of writing the total derivative of a function in several variables, so that is the meaning of .
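
    A quick symbolic sanity check of the total-derivative formula, for anyone who wants one (sympy, with a function and a curve invented purely for illustration):

        import sympy as sp

        t, x, y = sp.symbols('t x y')
        f = x * y**2                                   # a 0-form on the plane
        xt, yt = sp.cos(t), sp.sin(t)                  # a curve along which we differentiate
        lhs = sp.diff(f.subs({x: xt, y: yt}), t)       # d/dt of f along the curve
        rhs = (sp.diff(f, x).subs({x: xt, y: yt}) * sp.diff(xt, t)
               + sp.diff(f, y).subs({x: xt, y: yt}) * sp.diff(yt, t))   # the total-derivative expression
        print(sp.simplify(lhs - rhs))                  # 0: the two agree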

    Any better?
    Reply With Quote  
     

  59. #58  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Crystal clear now. Hopefully.

    So if we're working with a 2-manifold will give us 2 scalars, since the tangent space will contain 2-dimensional vectors, and thus have two standard basis vectors, and thus a scalar associated with each of those standard basis vectors?

    EDIT: Or do we sum these scalars? If we do, I would liken it to taking the trace of a square matrix...which I find interesting and wonder if there's some connection...

    What does a 2-form map from and to? Even if you weren't planning on going there, I'm curious. I've been thinking maybe it has something to do with the tangent and cotangent bundles on the manifold, but of course I could be wrong...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  60. #59  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by Chemboy
    So if we're working with a 2-manifold will give us 2 scalars, since the tangent space will contain 2-dimensional vectors, and thus have two standard basis vectors, and thus a scalar associated with each of those standard basis vectors?

    EDIT: Or do we sum these scalars?
    Yes, we sum.

    Look, let's get our terminology straight. I introduced the element and its definition for "teaching purposes" only. In general, I will call an element of the cotangent space a "covector", if that's OK with you, and write, say, for the action of a covector on a vector. For each there is a (possibly not unique) scalar

    Now. A basis vector is just that - a vector. So a 2-space has 2 basis vectors. So we expect the action of a covector on each of these to give us a scalar for each, namely 2 scalars.

    But - an arbitrary vector in is defined as the arithmetic sum of scalar multiples of the basis vectors. Thus the action of some is but ONE scalar, which is the arithmetic sum of the action of the components of our covector on the components of our vector.

    If we do, I would liken it to taking the trace of a square matrix...which I find interesting and wonder if there's some connection...
    Yes, there is, but not quite in the way you might think. I advise you to leave that for now.

    What does a 2-form map from and to? Even if you weren't planning on going there, I'm curious. I've been thinking maybe it has something to do with the tangent and cotangent bundles on the manifold, but of course I could be wrong...
    Please forget about 2-forms, 3-forms,.... in this thread. If you want to talk about them elsewhere, I would be happy to pitch in, but for now they are something of a distraction.

    Anyway, as I am not feeling especially intelligent today, allow me a little ramble. If you are reading around this subject (it appears you are - this is GOOD), you will no doubt have come across the terms "covariant" and "contravariant". I will explain what this means, and then promise to come to your house and eat your goldfish if you ever use them again in this thread. Deal?

    In the theory we are trying to develop, it is taken as axiomatic that scalars are invariant under linear transformations. In fact, we may take this as a definition of a scalar (that's what the physics jocks do, bless them).

    Suppose there is some weird transformation on coordinates only that somehow "expands" them by a factor of 2, so that on original coordinates becomes on these new coordinates. This is, as we saw before, the same as the vector transformation that simply "shrinks" our vector by one half on its original coordinates.

    In other words, our vector transforms "oppositely" to the way the coordinates transform. Old guys with beards and sandals call the vector a contravariant vector for this reason.

    Suppose now we have some scalar given by . As scalars are invariant under transformations, we must have that , that is the covector transforms with the coordinates.

    Geriatrics call this a covariant vector. As we are not quite that old, we will eschew this terminology
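
    Here, if it helps, is the whole "opposite transformation" story in half a dozen lines of numpy (a toy example of my own - I simply rescale a basis by 2 and watch what has to happen to components for everything geometric to stay put):

        import numpy as np

        e = np.eye(2)                  # original basis vectors, as columns
        v_comp = np.array([4.0, 6.0])  # components of a fixed vector on that basis
        w_comp = np.array([1.0, 3.0])  # components of a fixed covector on the dual basis

        e_new = 2.0 * e                # "expand" the basis by a factor of 2
        v_new = v_comp / 2.0           # vector components must go the opposite way...
        w_new = 2.0 * w_comp           # ...while covector components go with the basis

        assert np.allclose(e @ v_comp, e_new @ v_new)      # same geometric vector either way
        assert np.isclose(w_comp @ v_comp, w_new @ v_new)  # and the scalar w(v) is unchanged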
    Reply With Quote  
     

  61. #60  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by Chemboy
    ..... tangent and cotangent bundles on the manifold,
    Hold tight - we are going to talk about bundles very soon. I warn you, though, they will do your head in - totally!!
    Reply With Quote  
     

  62. #61  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    ok, I have all of that. Where I've heard those 'c' words that I'm not to mention for the life of my goldfish mainly is in the context of tensors (something I'd love to get to at some point...).
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  63. #62  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    We can get to tensors directly, in a trivial sort of way: in the present context, an element in is called a type (1, 0) tensor, elements in are type (0,1) tensors. We can come back to that if you want, but now I want to start building a precarious house of cards.

    We have that, at every point a tangent space whose elements are vectors, and a cotangent space whose elements are covectors. We may say that a vector field on is a rule that selects exactly one vector from each and assigns it to its "host" point p.

    This has a nice intuitive feel, but it suffers from this defect: I cannot tell you what that "rule" is - there isn't one! So we need to be a bit more sophisticated, though we will see this will result in a definition that is only fractionally less arbitrary.

    We will denote by the set-theoretic union of all the tangent spaces on and call it the tangent bundle. Then any "section" of will again be a selection of just one tangent vector from each tangent space, and we will call this a vector field.

    Clearly, I can choose as many different sections as I want, each of which will give me a different field. So here's our first jaw dropper -

    The collection of all possible vector fields on is itself a vector space!!

    Don't worry - it gets worse.

    The definition of the vector bundle I gave is deficient in a number of ways, which we will come to, but for now, it is but one example of what's called a fibre bundle. Fibre bundles are nice things to talk about, I think, but first let's do this:

    Suppose I have a vector field. Starting at the point and following the "direction pointed to" by the vector there, I will arrive at the point , where again I follow the instructions of the vector there, and so on. I will have traced out a curve called an integral curve.

    Notice I will not have included all points, so I will start again, and again, and again,.... until all points are in some integral curve. The collection of these integral curves is called a congruence, about which I only want to make the following points:

    First, since all points are covered by the congruence, we will say it "fills" . Second, there is a theorem that tells us that no 2 curves in a congruence may intersect. Third, any congruence on is itself (usually) a manifold!!
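
    A crude way to "see" a congruence on the plane (a sketch of my own, using nothing fancier than Euler steps; the field V is invented for the purpose):

        import numpy as np

        def V(p):
            # a smooth vector field on the plane: V(x, y) = (-y, x)
            return np.array([-p[1], p[0]])

        def integral_curve(p0, steps=2000, dt=0.005):
            # repeatedly "follow the direction pointed to" by the vector at the current point
            pts = [np.array(p0, dtype=float)]
            for _ in range(steps):
                pts.append(pts[-1] + dt * V(pts[-1]))
            return np.array(pts)

        # different starting points give different curves of the congruence;
        # for this particular field they are (approximately) concentric circles, which never cross
        curve_a = integral_curve([1.0, 0.0])
        curve_b = integral_curve([2.0, 0.0])
        print(curve_a[-1], curve_b[-1])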

    Oh no! Whatever next? Wait till we do fibre bundles in detail......
    Reply With Quote  
     

  64. #63  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    A vector field is generated by a vector-valued function, so if is the vector space of all possible vector fields on , can it be considered the vector space of all the vector-valued functions forming these vector fields? I've taken that view so that is a space of functions and not vector fields...since a space of vector fields seems a little out there...I don't see how vector fields could satisfy the vector space axioms and such.

    EDIT: I'll be gone for the next 2 days but promptly back to the computer after that.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  65. #64  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by Chemboy
    A vector field is generated by a vector-valued function,
    It is? So what is/are this/these function(s)? Specifically what are the domains, codomains and ranges?
    so if is the vector space of all possible vector fields on , can it be considered the vector space of all the vector-valued functions forming these vector fields?
    Since I don't know what these functions are, I cannot say.

    Well OK, there is a way to do it like that, but not quite in the way you are suggesting. Specifically it leads to the conclusion that the space is a real Lie algebra. We can talk about that if you want
    I don't see how vector fields could satisfy the vector space axioms and such.,
    Ya, well OK, I was a bit skimpy over that - mainly because fields don't quite float my goat, dunno why....EDIT: Well part of the reason is because I have never understood the connection between this way of defining a field and the abstract mathematical definition of a field as an integral domain with multiplicative inverse - if only JaneBennett were here to help us out with this!

    It's easy really. Suppose are vector fields. Then, obviously, at the single point I will have a single vector as an element in and likewise a as an element in at the same point.

    So the vector space axioms apply to our fields at each point in , so we might write, say etc. to indicate that our fields are pointwise scalar multiplicative and vector additive, where the notation indicates evaluation of the field at the point p

    In other words, as ranges over (an open subset of) , the class of all vector fields do indeed form a vector space under the usual axioms.
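
    In code, the pointwise recipe is almost embarrassingly short (a sketch under my own naming; a vector field is modelled as nothing more than a map from points to vectors):

        import numpy as np

        X = lambda p: np.array([p[1], -p[0]])       # one vector field on the plane
        Y = lambda p: np.array([1.0, p[0] * p[1]])  # another

        def add(X, Y):
            return lambda p: X(p) + Y(p)            # (X + Y) at p is X at p plus Y at p

        def scale(a, X):
            return lambda p: a * X(p)               # (aX) at p is a times X at p

        p = np.array([2.0, 3.0])
        print(add(X, scale(2.0, Y))(p))             # the combined field, evaluated at the point p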

    Umm. Where do we go from here? I sort of lost my way for a while there, lemme think on it, or BETTER ask questions!!!
    Reply With Quote  
     

  66. #65  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    I found a computer with internet!

    I see how our vector fields form a vector space now, so I'd like to just withdraw my comments about the vector-valued functions and all that, if that's ok. I was just trying to make sense of things but now they make sense without looking at it in that way.

    I believe you were headed toward fiber bundles...(us Americans spell it 'fiber') which I'd love to see. I must say that right at that point where you introduced the tangent bundle and vector fields and such on our manifold this all just took a big leap in how interesting it is. Not that it wasn't before, but I must say I'm thoroughly enjoying this.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  67. #66  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Hi Chemboy: Glad you're finding it interesting. So fibre bundles it is. But first some foreplay, which is all I will have time for just now:

    Recall that, if be sets, we can always form the Cartesian product which is a single new set and where a typical single point is described as - an ordered pair. Recall also we may have that .

    A similar construction is allowed on manifolds: suppose is a manifold (uhh? suppose??? it is!). Then we may form the manifold (actually I missed out a coupla steps, but you will get the general idea).

    Now suppose is a set. We will call this a group iff the following are true:

    There is a binary operation on such that . This implies our operation is "closed". We also require this operation to be associative: (a * b) * c = a * (b * c) for any three elements.

    There is a unique element such that called the "identity" for the group.

    For each there is a unique element such that . This is called the inverse and is written .

    It is customary to omit all reference to the * operation, and simply write , it being clearly understood that this does not always mean arithmetic multiplication - it may be addition, matrix multiplication or what all have you.

    So, I will try to dream up a worked example of a simple tangent bundle in such a way that the generalizations are fairly obvious. But, just for now, I gotta run
    Reply With Quote  
     

  68. #67  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Group theory...another thing I'd like to learn sometime. I've been thinking about what the...if I can say it this way...nature of the identity element is like when our binary operation is the Cartesian product. It seems like it would have to be..."nothing," so that when it's combined with in a tuple all you obtain is , which means the identity can't even be because then you'd have or . I'm also wondering what the inverse is like. I definitely understand, I'm just not sure what the identity actually is. But maybe this is one of those cases where I should just accept it and not read into it...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  69. #68  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by Chemboy
    .nature of the identity element is like when our binary operation is the Cartesian product.
    Ah, now, wait. The binary operation is an operation ON the Cartesian product. Here's a silly example. Suppose denotes the integers. Then I insist that . And if is to be a group, I will have that for all and which is hardly rocket science. Then and under addition

    And when the group operation is arithmetic multiplication, say we will have that , so since .

    Obviously, it is not always quite so simple , but that is the general idea.
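
    For the unconvinced, a quick spot-check of the axioms on a handful of elements (plain Python, nothing up my sleeve; a few samples prove nothing, of course, they just illustrate):

        # the integers under addition
        for a, b, c in [(2, -5, 7), (0, 3, -3)]:
            assert (a + b) + c == a + (b + c)   # associativity
            assert a + 0 == a and 0 + a == a    # 0 is the identity
            assert a + (-a) == 0                # -a is the inverse of a

        # the non-zero reals under multiplication
        for a, b, c in [(2.0, -0.5, 4.0), (3.0, 1.0 / 3.0, -1.0)]:
            assert abs((a * b) * c - a * (b * c)) < 1e-12   # associativity (up to rounding)
            assert a * 1.0 == a                             # 1 is the identity
            assert abs(a * (1.0 / a) - 1.0) < 1e-12         # 1/a is the inverse of a
        print("all samples pass")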
    Reply With Quote  
     

  70. #69  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Oh, duh. haha. I had a bit of a dull moment there. ok, I'm fine now.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  71. #70  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Glad to see I am not the only one who has these moments! But, lookee - I realize there is something about fibre bundles that I don't quite know how to explain in simple(ish) terms, so there will be a short break in transmission.

    Stay tuned.....
    Reply With Quote  
     

  72. #71  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    ok. I checked out fibers and fiber bundles on Wiki and Mathworld and I get the idea, but I'm not really getting how it'll tie into a manifold setting, which is why your personal explanation will be valuable, since we're working in that setting.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  73. #72  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    So, we'll start with the real 1-sphere , known to its friends as the circle. We'll call this our "base manifold" . As we know each point lies in at least one open neighbourhood with a coordinate "system" such that ( means "homeomorphic", btw)
    .
    At the point we will have a tangent space for which a natural (but not unique) choice of basis vector is - no partials! - so that any vector in may be expressed as .

    We see straight away that each tangent space over is in fact the 1-dimensional vector space . We will call this a "fibre" at the point and write, as a generalization, ; try to imagine
    "attached" to at the point p for all points. We will tentatively call this our fibre bundle, which in the present case is the tangent bundle

    Try to convince yourself, first, that the product space of a line with a circle is a cylinder i.e. (generalizing ) is a 2-dimensional space whose elements are of the form , since all fibres are identical, so vectors in need 2 coordinates to specify them - which fibre they belong to, and where on this fibre they live.
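
    If a picture in coordinates helps with the cylinder claim, here is one (a throwaway numpy sketch; the embedding into 3-space is only there so you can "see" the circle direction and the fibre direction separately):

        import numpy as np

        def to_cylinder(theta, v):
            # the circle direction carries theta, the fibre over that point carries v
            return np.array([np.cos(theta), np.sin(theta), v])

        def projection(theta, v):
            # the bundle projection simply forgets which point of the fibre we chose
            return theta

        print(to_cylinder(0.1, -0.5))    # a tangent vector of size -0.5 sitting over the point theta = 0.1
        print(projection(0.1, -0.5))     # and the point of the circle it lives over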

    Now, homeomorphism is transitive - that is, . So (Note to purists: I am aware this is a wild waving of hands, since it doesn't follow that homeomorphism is necessarily preserved by the product, but I think it will do for now)

    Now try to convince yourself that, locally, . If you can succeed in that you will have found that is a 2-manifold. The condition that locally means of course for any . This is wrong - see Nov 15.

    This is called "local trivialization", and is part of the required definition of any fibre bundle.

    In fact, in our present example, there is a global homeomorphism , so is globally trivial, which is unusual, undesirable in general and therefore not part of the definition.

    (Grinding of gears....) The Cartesian product of any 2 sets comes equipped with natural maps called "projections".

    In the present case we have locally, effectively, and the projection as part of the definition of our bundle. What about the other projection, say ? Well it makes no sense, since all our fibres, our "copies" of are identical, so we need a different sort of construction that will tell us exactly how our bundle is formed.

    But I've out-stayed my welcome for now and, besides, I am supposed to be working
    Reply With Quote  
     

  74. #73  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Do we have two basis vectors for because of the fact that our fiber bundle is 2-dimensional? Since is at a point , won't the s be the same since they're at the same point and of the same variable since we're just on a 1-manifold? I just don't see why we have two basis vectors for a 1-dimensional tangent space... I'm just a little stuck on this...hope it's not one of those really obvious things I'm missing... Other than that I'm doing ok...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  75. #74  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Quote Originally Posted by Chemboy
    Do we have two basis vectors for because of the fact that our fiber bundle is 2-dimensional? Since is at a point , won't the s be the same since they're at the same point and of the same variable since we're just on a 1-manifold? .
    Sorry, that was a stupid way for me to put it.

    You are right, there is only a single basis vector for the tangent space at . I think (if I recall correctly) I was simply trying to point out that any vector in this space (other than the basis vector) can be decomposed as the linear sum of any other two non-basis vectors. Apologies for that.

    Recall we had the projection . This map is "highly surjective", in the sense that the preimage where this equivalence is my attempt to generalize, and where is a neighbourhood of . In words, the preimage of the projection map is the fibre at . This is, in fact, part of the definition of a projection (obvious, right?)

    Let us now define a group of homeomorphisms for the present case. We see this is a group by:

    Inverse,
    and therefore there exists
    Identity
    Closure.

    This group is called the "structure group" of our fibre bundle, which, in this case, is the tangent bundle . I will define it for the present case soon, but let me quickly say that it arises because there may be more than one way to attach our fibres to the base manifold, or, what amounts to the same thing, given a base manifold and a set of fibres, we need to distinguish between the different bundles that can be formed from these ingredients, and this is what our structure group does.

    This is the point at which I start to have trouble explaining.

    Leave it with me a while
    Reply With Quote  
     

  76. #75  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    I just spent a bunch of time on this and I feel like I'm good with it. One little thing though (but this doesn't hinder my learning the rest of the stuff)...

    In fact, in our present example, there is a global homeomorphism , so is globally trivial, which is unusual, undesirable in general and therefore not part of the definition.
    Why exactly does this make trivial and what's unusual and undesirable about it? I think from my lack of higher-level mathematics like this I'm just kind of unfamiliar with what 'trivial' means from a mathematical point of view. I have a good idea of course, but an explanation in this area wouldn't hurt.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  77. #76  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    A "bunch" of time?? Really, you Americans crack me up sometimes!

    Anyway, I'm not sure I know the full answer to your question. Let's see.....

    By our definition of a manifold (locally homeomorphic to some ), we see that is trivially a manifold. You may take this to mean that "locally" is redundant, and the definition is "trivially" satisfied.

    Now is a manifold, locally homeomorphic to the 2-cylinder, which is just a "rolled up" 2-plane (homeomorphic to it, iow). But it is also globally homeomorphic to it, so, by the above, the definition is trivially satisfied, albeit at one remove, as it were.

    As to why the local homeomorphism is called "local triviality" I really cannot say - I just think of it as being a definition. Maybe that's wrong of me, dunno. Looking back, I see I didn't explain it very well - in fact I erred, which I will fix. Let's try again.

    Recall I said there is a projection and that the preimage , the fibre over . Suppose is a neighbourhood in . Then I should find that is the bundle over ie a local bundle. By the assertion that locally , I intended , and not what I first said.

    *blush*

    Anyway let's look at our recipe for a general fibre bundle . This is a manifold of dimension , and will be called a fibre bundle if the following hold:

    there is a base manifold of dimension

    a typical fibre of dimension (all fibres being identical)

    a projection such that for , the fibre over .

    a structure group characteristic to

    local triviality given by .

    I need to say something more about our structure group, but first we will need to think about orientation. Now you will find several definitions out there, but we are just going to go with intuition. Let's stick with our 1-sphere.

    Let's assume we know what is meant by (anti-)clockwise for a vanilla circle, and suppose that for any arc-segment of the circle, this still makes some sort of sense. Then, when we think of the 1-sphere as a manifold, these arc-segments are coordinate neighbourhoods. We will say that two such neighbourhoods are consistently oriented if they are both oriented (anti-)clockwise. If all neighbourhoods are consistently oriented, we will say our manifold is orientable.

    Now since our tangent vectors at, say are simply directional derivatives of the coordinate there, it is easy to see the tangent space "inherits" its orientation from the neighbourhood.

    Armed with that we are ready to proceed, but I need to pause for breath a while
    Reply With Quote  
     

  78. #77  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    I'll be sure to comment the next time you say something that sounds funny to me. :wink:

    I understand your correction and the ingredients for a general fiber bundle make sense. I'm ready to proceed.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  79. #78  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    So. The structure group for our tangent bundle with typical fibre the vector space . Now working with the reals is a bit of a pain, in that we have to remember when we are treating them as a set, a field, a vector space etc. But ultimately we will find it rewarding.

    We will suppose there is a neighbourhood where the coordinate "system" is . Let's say and write to denote the fibre at . We will also say that a typical vector in is .

    I define the homeomorphism by where I am here treating as the real field. We will now suppose that there is a neighbourhood with coordinate "system" and that . Then clearly there will be a fibre where the same vector and another homeomorphism .

    By definition, homeomorphisms are invertible, so I may have . Now are any real numbers, so this map is just multiplication by any non-zero real number (non-zero, since if a vector is the zero vector in one coordinate system, it is the zero vector in all coordinate systems). So the set of all non-zero reals.

    This set is a group under multiplication, as you may easily show, and this is a possible structure group for . But we can do better.

    It is easy to see that if are consistently oriented, then so are all fibres in the local bundles . So these fibres lie "head to head" and "tail to tail". In this circumstance the structure group maps positive vector components to positive components, negative to negative, so the structure group is the multiplicative group of positive reals.

    In other words, if my structure group is I am allowing for not to be orientable - you can think of this inducing a bundle that is homeomorphic to the Moebius band, if the group is it is homeomorphic to the cylinder.
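
    Just to pin down what an element of this structure group "is" in the 1-dimensional case, here is a throwaway sketch (my own numbers; each chart's fibre homeomorphism is just multiplication by a non-zero real):

        h1 = lambda v: 3.0 * v          # the fibre homeomorphism coming from one chart
        h2 = lambda v: -0.5 * v         # the one coming from an overlapping chart
        h1_inv = lambda v: v / 3.0      # homeomorphisms are invertible

        g = lambda v: h2(h1_inv(v))     # the composite: again multiplication by a non-zero real
        print(g(1.0))                   # -0.1666..., an element of the multiplicative group of non-zero reals

    If we insist on consistent orientation, numbers like that are forced to be positive and we land in the positive reals; allowing the sign to flip somewhere around the circle is exactly the freedom that builds the Moebius band instead of the cylinder.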
    Reply With Quote  
     

  80. #79  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    A couple things...

    We have a homeomorphism . So this is a homeomorphism between a vector space and a field...? I was thinking homeomorphisms are between topological spaces, so I'm kind of confused. Maybe we can do this because and can be considered as topological spaces...?

    Same thing with ... The homeomorphism's between vector spaces? And given I'd think it would be more like ...though it is going "through" the vector spaces...

    I have a little more but I have to go for now. I shall return.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  81. #80  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    ok... the other thing is, if and are consistently oriented and as a result so are all the fibers in their local bundles, it really seems to me that they would be "head-to-tail"... Because when the structure group is , can we say "the orientation of the fiber is preserved?" And so I'd think if and are consistently oriented their fibers would be oriented the same and so a vector from one fiber would point to the tail of the vector of another fiber...
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  82. #81  
    . DrRocket's Avatar
    Join Date
    Aug 2008
    Posts
    5,486
    Quote Originally Posted by Guitarist
    Hi Chemboy, good weekend? Glorious weather here, less so now.

    As for hints: I used as an intuitive guide a result from elementary linear algebra that goes like this.

    Suppose is a vector space with inner products. Let the set denote the basis vectors. Then any can be expressed as , where the are scalar.

    Now the inner product of an arbitrary vector with any basis vector will be denoted by (since inner products are bilinear) hence where the Kronecker delta is 1 when i = j, and zero otherwise.

    Obviously, in the present case we don't have an inner product, but we can see how it might work.

    In particular note that, if then .

    This is NOT a proof, just a guide. The proper proof is tedious, in fact I freely confess I don't really understand it. Let's just do what most folk do - swallow it whole and move on. In due course we will see that it must be so.
    Your proof has a problem. It works if and only if the basis vectors are orthonormal. This happens if the inner product is a dot product that is determined by the basis with which you started. It is also possible to start with an arbitrary basis and construct an orthonormal basis using the Gram-Schmidt process.
    Reply With Quote  
     

  83. #82  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    DrRocket: Since I said that the following
    Quote Originally Posted by I
    then .
    is not a proof, I do not fully understand your point.

    However, irrelevant as it may be, let me say I don't think I agree with you. My "reasoning" (if that's what it was) was as follows. One may assume that . And if are independent, then . That is, I believe,

    My coordinates are independent by construction.

    You may disagree with my use of Kronecker, but, in the above, I was using him multiplicatively. That is, when one has that , and when one has that . Hence my last identity above.

    I do not see how this requires an IP or orthonormality.

    If I am wrong, I implore you, for the benefit of others, to say so.
    Reply With Quote  
     

  84. #83  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Chemboy: This will be a post of groveling apologies. First, I only just spotted your last two. Sorry.

    Second, I believe you may be being misled by a quite conventional abuse of notation, which was compounded by my rather injudicious use of language. Sorry again.

    Third, my expression "head to head" was just stupid; your "orientation-preserving" is much better. Sorry again

    I hope you don't need this, but let's do it anyway. A vector space comprises an additive abelian group together with a scalar field (plus a few axioms). Abelian means elements commute under the operation, btw. Although the world, his wife and his dog write for a vector space, they really mean (or woof! as the case may be). is for "Körper", the German for field

    The point being that without the field, is in reality just an abelian group, and not a very exciting one at that.

    The converse is not true, however; the field axioms mandate that is additively commutative, so that can be regarded as a vector space of dimension one over itself. In fact, there is a theorem from linear algebra that states that any vector space of dimension is isomorphic to the n-th "Cartesian power" of .

    This is why one writes - can't be bothered with all this "blackboarding" - it is to emphasize that this is the first Cartesian power of this field, which is a vector space. But it is still a field. (Simply recall the law of exponents!)

    So when I said "I am treating as a field" this is roughly what I meant. I dare say could have put it better perhaps. Sorry.

    In like fashion, when I say that we have a map I, like everyone else, am being slack. The fibre is an integral part of the bundle manifold , so my map is in reality a map on the manifold , and therefore qualifies as a homeomorphism (subject to continuity and invertibility). Remind me to expand on this, as I haven't said enough to make it completely transparent. Sorry again.

    Now recall that for a garden variety vector space, the transformation is just a mapping from the components of to the components of . We do not think of this as a map on scalar fields, though, for the reasons I gave above.

    In general our mappings , or to be sloppy, will be of this form. (The further study of these will get us into Lie groups which are nice things to talk about, but quite hard, I think). In my last post to you, I went through the coordinate transitions in order to relate orientation to the structure group.
    Reply With Quote  
     

  85. #84  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    ok, I understand that is a homeomorphism now. So a homeomorphism can map from a manifold to itself?

    So is a vector space, and a field, and part of a manifold. Can we say it's a set that simply meets the requirements for being a vector space and meets the requirements for being a field?

    This homeomorphism ... I'm just confused... It seems to be mapping a vector to its underlying scalar which in this case kind of happen to be the same thing...so is it kind of acting as a covector...? I know I'm screwed up here somewhere... :?
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  86. #85  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    So, if I have a map that sends a vector to its components, tell me what is the image of this map for some, say, 2-vector.

    Now tell me what is the result of applying a 2-covector to our 2-vector. Or even a 1-covector to a 2-vector. You do know, just figure
    Reply With Quote  
     

  87. #86  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    We'd have something like , which generally for a covector would be where is the vector space... hopefully. So I'm saying the image is always a scalar ...

    EDIT: Something I just thought of. Not really on topic but at least it's related.

    Is true?
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  88. #87  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Hmm. It looks like my attempts to simplify have back-fired rather badly. So let's try this, talking generally.

    Recall we have a tangent bundle manifold that is locally where with our base manifold, and a projection .

    Recall also we had a local bundle over given by .

    If I define the homeomorphism I will find that, for I will have the image points .

    And by then applying the inverse I will find that mostly my original fibres are all jumbled up. I can deal with this to a degree by restricting the domain of my homeomorphism: , but I will still have all points of the form , so the composite is effectively a map .

    Obviously I discover whether by "comparing" their components on the same basis; so , say. Now when , where the subscripts denote the different coordinate systems, I will then have the exact same problem, since we know that the same vector on different bases will in general have different components, and I can easily devise a rule that tells me whether I am talking about the same vector on different coordinate bases, or different vectors on different coordinate bases. This rule will simply involve some arithmetic operation on the real n-tuples that are vector components

    Is that any clearer?

    PS - my EDIT:
    Quote Originally Posted by Chemboy
    EDIT: Something I just thought of. Not really on topic but at least it's related.

    Is true?
    Yes. By symmetry, by transitivity
    Reply With Quote  
     

  89. #88  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    I am beginning to fear I have scared people off by making this all look more difficult than it really is. So, in order to calm a few nerves, let me show how my two descriptions of the homeomorphisms (which are the elements in my fibre bundle's structure group) can be brought into register.

    We'll return to the tangent bundle over the 1-sphere . We know that is a manifold, since the arc-segment . We also know that a typical fibre . So that the local bundle , which is, of course, the 2-plane.

    This notation merely encapsulates the fact that, for , I may treat the two copies of as standard Cartesian coordinates. By restricting the domain of the homeomorphism to the point , I can picture this as fixing the point on one axis, so that simply describes a point on the line that runs parallel to the other axis, and the relationship between any pair of such points Note edits

    This is, of course, a real number line, so this relationship is found by arithmetic multiplication. This is the content of both my descriptions, and again shows that the structure group is the multiplicative group or .

    Notice these guys are very nearly identical to a typical fibre in . So we will define the frame bundle as being the fibre bundle whose typical fibre is the set of all possible bases for the vector spaces tangent to a manifold . In the present case, this will be any non-zero element in , that is .

    This allows the further definition: a fibre bundle whose typical fibre is its own structure group is called a principle bundle. In the present case, the frame bundle and principle bundle coincide, but they need not.

    Principle bundles are very important creatures, especially in a branch of physics called "Quantum Field Theory". I could say more on them if anyone wants, but I shan't burden you otherwise
    Reply With Quote  
     

  90. #89  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    I'd like a little bit of time ("little bit" being about a day) to get caught up. But I wanted to let you know I'm still very much in it. My heart leapt at the mention of QFT, so I'm particularly inspired now.

    EDIT: I'll be back with you tomorrow, but I've been busy today and it's time for bed. Promise I'll be back tomorrow.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  91. #90  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Ack! That should be principal bundle. Doh...
    Reply With Quote  
     

  92. #91  
    . DrRocket's Avatar
    Join Date
    Aug 2008
    Posts
    5,486
    Quote Originally Posted by Guitarist
    DrRocket: Since I said that the following
    Quote Originally Posted by I
    then .
    is not a proof, I do not fully understand your point.

    However, irrelevant as it may be, let me say I don't think I agree with you. My "reasoning" (if that's what it was) was as follows. One may assume that . And if are independent, then . That is, I believe,

    My coordinates are independent by construction.

    You may disagree with my use of Kronecker, but, in the above, I was using him multiplicatively. That is, when one has that , and when one has that . Hence my last identity above.

    I do not see how this requires an IP or orthonormality.

    If I am wrong, I implore you, for the benefit of others, to say so.
    For starters, look at the result as you quoted it for an arbitrary vector space. In that setting it is pretty easy to see that you need the basis vectors to be orthonormal. Agreed?

    In the setting that you are using in the last part of the discussion, you are actually mixing covariant and contravariant vectors, so you get an effect there that is analogous to orthonormality. It is not really an inner product that you are working with, but rather the application of a dual vector to a vector.
    Reply With Quote  
     

  93. #92  
    . DrRocket's Avatar
    Join Date
    Aug 2008
    Posts
    5,486
    Quote Originally Posted by Guitarist
    A "bunch" of time?? Really, you Americans crack me up sometimes!

    Anyway, I'm not sure I know the full answer to your question. Let's see.....

    By our definition of a manifold (locally homeomorphic to some ), we see that is trivially a manifold. You may take this to mean that "locally" is redundant, and the definition is "trivially" satisfied.

    Now is a manifold, locally homeomorphic to the 2-cylinder, which is just a "rolled up" 2-plane (homeomorphic to it, iow). But it is also globally homeomorphic to it, so, by the above, the definition is trivially satisfied, albeit at one remove, as it were.

    As to why the local homeomorphism is called "local triviality" I really cannot say - I just think of it as being a definition. Maybe that's wrong of me, dunno. Looking back, I see I didn't explain it very well - in fact I erred, which I will fix. Let's try again.

    Recall I said there is a projection and that the preimage , the fibre over . Suppose is a neighbourhood in . Then I should find that is the bundle over ie a local bundle. By the assertion that locally , I intended , and not what I first said.

    *blush*

    Anyway let's look at our recipe for a general fibre bundle . This is a manifold of dimension , and will be called a fibre bundle if the following hold:

    there is a base manifold of dimension

    a typical fibre of dimension (all fibres being identical)

    a projection such that for , the fibre over .

    a structure group characteristic to

    local triviality given by .

    I need to say something more about our structure group, but first we will need to think about orientation. Now you will find several definitions out there, but we are just going to go with intuition. Let's stick with our 1-sphere.

    Let's assume we know what is meant by (anti-)clockwise for a vanilla circle, and suppose that for any arc-segment of the circle, this still makes some sort of sense. Then, when we think of the 1-sphere as a manifold, these arc-segments are coordinate neighbourhoods. We will say that two such neighbourhoods are consistently oriented if they are both oriented (anti-)clockwise. If all neighbourhoods are consistently oriented, we will say our manifold is orientable.

    Now since our tangent vectors at, say are simply directional derivatives of the coordinate there, it is easy to see the tangent space "inherits" its orientation from the neighbourhood.

    Armed with that we are ready to proceed, but I need to pause for breath a while
    Out of curiosity, are you following some text in this development of the theory of differential manifolds? If so, which text? It is somewhat unusual to see principal fiber bundles discussed in an introductory treatment -- not wrong, just unusual. Usually the discussion is limited to the tangent bundle only.

    I did note the allusion to quantum field theory. For those interested, principal fiber bundles in physics are sometimes discussed under the heading of "gauge transformations". Basically the physicists, a while back, started working with their idea of gauge transformations and then discovered that mathematicians had been studying such things for years and called the objects of study principal fiber bundles.
    Reply With Quote  
     

  94. #93  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    After a bit of beard tugging I think I see where you're coming from. So I concede your points. Thanks for the steer.

    And, as to texts, I have several here (plus my tutorial notes) that I refer to when I need to remind myself of certain things, but mostly I am working from memory - that's why I make so many mistakes!!
    Reply With Quote  
     

  95. #94  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    I really need some elucidation on what exactly and are. That's where I'm screwed up.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  96. #95  
    . DrRocket's Avatar
    Join Date
    Aug 2008
    Posts
    5,486
    Quote Originally Posted by Chemboy
    I really need some elucidation on what exactly and are. That's where I'm screwed up.
    You are talking here about fiber bundles. A fiber bundle is rather like a cartesian product, but maybe with a twist.

    Locally it really is a Cartesian product, and that is what is meant by local triviality. But globally it can really be twisted (more in a moment). So over a small enough open set looks like a Cartesian product, and over a point p is just the fiber.

    Let's take an example. Look at line bundles over the circle. So the base space is the circle and the fiber is a line. The trivial bundle, the Cartesian product of a circle with a line, is a cylinder. Over a small neighborhood U in the base space, an open line segment, is a Cartesian plane and over a point p is a line, the fiber. Now consider the Moebius strip, which is also a line bundle over a circle (the circle is the center line, and think of the strip as being open - remember that an open interval is homeomorphic to the whole line). and look the same as for the trivial bundle (a reflection of local triviality), but the bundle itself has a twist and is in fact a non-orientable 2-manifold.
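
    For anyone who likes explicit formulas, here are the two bundles side by side (a small numpy sketch using one standard parametrisation; v should be thought of as the fibre coordinate, kept small):

        import numpy as np

        def cylinder(theta, v):
            # the trivial line bundle over the circle, sitting in 3-space
            return np.array([np.cos(theta), np.sin(theta), v])

        def moebius(theta, v):
            # the twisted line bundle: the fibre direction turns through half a revolution per trip around the base
            return np.array([(1 + v * np.cos(theta / 2)) * np.cos(theta),
                             (1 + v * np.cos(theta / 2)) * np.sin(theta),
                             v * np.sin(theta / 2)])

        # after one full loop around the base circle the cylinder fibre returns as it was,
        # while the Moebius fibre returns flipped - the "twist" just described
        print(cylinder(0.0, 0.3), cylinder(2 * np.pi, 0.3))
        print(moebius(0.0, 0.3), moebius(2 * np.pi, 0.3))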
    Reply With Quote  
     

  97. #96  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,612
    Yes well, I already covered precisely that in some detail, or so I thought. Maybe Chemboy is confused about something else?

    So. Chemboy, please be more specific with your question. Where in DrRocket's post and in my earlier ones, can you not find the answer you seek?

    Umm, thinking here... Surely you know what a surjection is, and what a preimage is in this context - I truly hope this is not your problem, but if it is, I guess we could whizz you through it.
    Reply With Quote  
     

  98. #97  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    Quote Originally Posted by Guitarist
    This notation merely encapsulates the fact that, for , I may treat the two copies of as standard Cartesian coordinates. By restricting the domain of the homeomorphism to the point , I can picture this as fixing the point on one axis, so that simply describes a point on the line that runs parallel to the other axis, and the relationship between any pair of such points Note edits
    The is getting me... I've been seeing as a bunch of s, in which case I don't see it as being but rather . Is it really just a collection of all the s at ? I guess I'll just say this at the risk of sounding stupid... I took the preimage to be the "pre-projection." The projection was . So I took to consist of s, which I take to be like and not . I feel like apologizing though really I guess it's not my fault that I'm not getting this part of it, I really am trying... sorry anyway though.

    Happy Thanksgiving to anyone in the US who happens to read this.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  99. #98  
    . DrRocket's Avatar
    Join Date
    Aug 2008
    Posts
    5,486
    Quote Originally Posted by Chemboy
    Quote Originally Posted by Guitarist
    This notation merely encapsulates the fact that, for , I may treat the two copies of as standard Cartesian coordinates. By restricting the domain of the homeomorphism to the point , I can picture this as fixing the point on one axis, so that simply describes a point on the line that runs parallel to the other axis, and the relationship between any pair of such points Note edits
    The is getting me... I've been seeing as a bunch of s, in which case I don't see it as being but rather . Is it really just a collection of all the s at ? I guess I'll just say this at the risk of sounding stupid... I took the preimage to be the "pre-projection." The projection was . So I took to consist of s, which I take to be like and not . I feel like apologizing though really I guess it's not my fault that I'm not getting this part of it, I really am trying... sorry anyway though.

    Happy Thanksgiving to anyone in the US who happens to read this.
    It is a collection of (p,v)s but here there is only one p and the v's vary over all of the real numbers. So the collection is really just a copy of the real numbers, with a "p" subscript appended to each one if you like.
    Reply With Quote  
     

  100. #99  
    Moderator Moderator AlexP's Avatar
    Join Date
    Jul 2006
    Location
    NY
    Posts
    1,838
    You beat me to it. That's the conclusion I just came to myself. I'm going to give the stuff another try now that I've got that down.
    "There is a kind of lazy pleasure in useless and out-of-the-way erudition." -Jorge Luis Borges
    Reply With Quote  
     

  101. #100  
    . DrRocket's Avatar
    Join Date
    Aug 2008
    Posts
    5,486
    Quote Originally Posted by Chemboy
    You beat me to it. That's the conclusion I just came to myself. I'm going to give the stuff another try now that I've got that down.
    I think you have it. This sort of thing is useful to keep in mind. Mathematicians often take little shortcuts in their exposition that are simply understood. So when someone says the fiber is the real numbers, he means it is a homeomorphic (or diffeomorphic or whatever) copy of the reals. There might be another copy next door (as in the fiber over another point) that is also just called the reals unless it is necessary to very clearly make a distinction between the two fibers.

    Sometimes "is" means "is" and sometimes is just means "is isomorphic to".

    Differential geometry and manifold theory are shot through with this sort of thing. Sometimes great care is taken with the notation, and one gets lost in it and loses the intuitive notion of what is going on. And sometimes the notation gets a little sloppy and it is hard to figure out precisely what is going on, particularly on your first exposure. It only really starts to make sense when you know at an internal level what is supposed to be happening and can express it rigorously yourself from that internal knowledge -- i.e. you can move back and forth seamlessly from an intuitive description to a fully rigorous description. That doesn't happen quickly for anyone. I'm not there yet myself.
    Reply With Quote  
     
