
Thread: Metric tensor

  #1 Metric tensor · Guitarist (Moderator)
    Join Date Jun 2005 · Posts 1,620
    Isn't the human mind a magical thing? I was walking the dogs, thinking about something else entirely (what to cook for dinner, if you must know) when the following just popped into my head; I hadn't consciously thought about it for ages.

    First some background. You'll find me here agonizing over notation.

    The inner product on a vector space V over the field K is given by v⋅w, or (v,w), for some v, w in V, depending on whatever floats your boat. But in a vector space with no inner product defined, (x,y) is nonetheless a perfectly respectable element in the space X x Y. That was the nub of my "agony", how to distinguish them notationally.

    The solution is simple and sweet. Look:

    The ordered pair (v,w) is an element of the space V x V, and if V is an inner product space, the inner product is then a map V x V → K (since it must return a scalar). For such a map I must find a gizmo, say g, s.t. g: V x V → K. I can, if I choose, write g(v,w) ∈ K, but I prefer to work in components:

    g: V x V → K, g(v,w) = g<sub>ij</sub>v<sup>i</sup>w<sup>j</sup> = k, for some k in K, where g<sub>ij</sub> = g(e<sub>i</sub>, e<sub>j</sub>) are the components of g with respect to a basis {e<sub>i</sub>} of V.

    But we also have a bilinear map V* ⊗ V*: V x V → K (V* being the vector space dual to V). Could it be that g is an element of the space of bilinear forms V* ⊗ V*? Why yes! It must be, and since elements of V* ⊗ V* are type (0,2) tensors, g is a rank-2 tensor of type (0,2). (Whether to call that "covariant" or "contravariant" is a convention; more on that further down the thread.)

    Now recall that g<sub>ij</sub> = g(e<sub>i</sub>, e<sub>j</sub>) encodes the "angle" between the basis vectors e<sub>i</sub> and e<sub>j</sub> when i ≠ j, while g(v,v) = ||v||<sup>2</sup> is the squared "length" of v, the square of its norm.

    So I call g<sub>ij</sub> the metric tensor.

    Yay!
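    To make the components g<sub>ij</sub> concrete, here is a quick Python sketch. The matrix of components below is a made-up symmetric, positive-definite example, chosen purely for illustration:

```python
import math

# A made-up symmetric, positive-definite matrix of components
# g[i][j] = g(e_i, e_j) for a hypothetical inner product on R^2.
g = [[2.0, 1.0],
     [1.0, 3.0]]

def inner(v, w):
    """g(v, w) = g_ij v^i w^j, summing over both indices."""
    return sum(g[i][j] * v[i] * w[j] for i in range(2) for j in range(2))

v = [1.0, 2.0]
w = [3.0, -1.0]

# g is symmetric, so the order of arguments doesn't matter
assert math.isclose(inner(v, w), inner(w, v))

# The squared "length" of v is g(v, v)
norm_v = math.sqrt(inner(v, v))
```

    With the Euclidean metric (the identity matrix) this reduces to the ordinary dot product.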




  #2 · river_rat (Forum Professor)
    Join Date Jun 2006 · Location South Africa · Posts 1,517
    Has the tensor product ever been defined on this board, Guitarist?


    As is often the case with technical subjects we are presented with an unfortunate choice: an explanation that is accurate but incomprehensible, or comprehensible but wrong.

  #3 · Guitarist (Moderator)
    Can't remember; lemme look back. I suspect I am the guilty party for introducing it, though, and I should have defined it (recall I earlier used @ rather than ⊗ for this guy).

    Good on your guys in the rugby, by the way; not a good game on the whole, I think.

    Later

  #4 · Guitarist (Moderator)
    Well, I gave a sort of operational definition; let me try to be more precise. First this:

    A map f: V → W is said to be linear if, for all v<sub>1</sub> and v<sub>2</sub> in V, f(v<sub>1</sub> + v<sub>2</sub>) = f(v<sub>1</sub>) + f(v<sub>2</sub>)

    and f(av) = af(v) for every scalar a.

    The map g: X×Y → Z is said to be bilinear if the following is true:

    for each x<sub>1</sub>, x<sub>2</sub> in X and y<sub>1</sub>, y<sub>2</sub> in Y, g(ax<sub>1</sub> + bx<sub>2</sub>, y) = ag(x<sub>1</sub>, y) + bg(x<sub>2</sub>, y) and g(x, ay<sub>1</sub> + by<sub>2</sub>) = ag(x, y<sub>1</sub>) + bg(x, y<sub>2</sub>), i.e. each argument is linear when the other is held constant.
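    These two conditions can be spot-checked numerically on sample points; here is a sketch with a made-up helper (the names are mine, nothing standard):

```python
import math

def check_bilinear(g, xs, ys, a=2.0, b=-3.0):
    """Spot-check that g is linear in each slot while the other is held
    fixed, over the sample points given. A numerical sanity check, not a proof."""
    for x1, x2 in zip(xs, xs[1:]):
        for y in ys:
            combo = [a * p + b * q for p, q in zip(x1, x2)]
            if not math.isclose(g(combo, y), a * g(x1, y) + b * g(x2, y)):
                return False
    for x in xs:
        for y1, y2 in zip(ys, ys[1:]):
            combo = [a * p + b * q for p, q in zip(y1, y2)]
            if not math.isclose(g(x, combo), a * g(x, y1) + b * g(x, y2)):
                return False
    return True

# The ordinary dot product is bilinear; shifting it by a constant is not.
dot = lambda x, y: sum(p * q for p, q in zip(x, y))
shifted = lambda x, y: dot(x, y) + 1.0

xs = [[1.0, 0.0], [0.0, 1.0], [2.0, 3.0]]
ys = [[1.0, 2.0], [-1.0, 4.0]]
```

    The shifted map fails because the constant term survives scaling: it is affine, not bilinear.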

    Definition: a bilinear map φ: V x W → X is called a tensor product iff, for every bilinear map ψ: V x W → Y, there is a unique linear map τ: X → Y s.t. τ ⋅ φ = ψ.

    Now let L(V, X) denote the space of all linear maps V → X, and likewise L(W, X). Then, provided only that the construction L(V, X) ⊗ L(W, X): V x W → X satisfies the definition above, L(V, X) ⊗ L(W, X) is the tensor product of these vector spaces, with the property that (h⊗k)(v,w) = h(v)k(w) for all h in L(V, X), k in L(W, X) and (v,w) in V x W (here X should be the underlying field, so that the product h(v)k(w) makes sense).

    Actually we really should show bilinearity. It's not hard, but it's somewhat tedious (it involves working with basis vectors)

    Now suppose V and W are vector spaces over R, and let L(V, R) and L(W, R) denote the spaces of all linear maps V to R and W to R, respectively. These, of course, we recognize as the dual spaces V* and W*. Then by the above, V* ⊗ W*: V x W → R, as desired.
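    The rule (h⊗k)(v,w) = h(v)k(w) is easy to play with. In this sketch, linear functionals on R<sup>2</sup> are represented by coefficient lists (a made-up representation, for illustration only):

```python
def functional(coeffs):
    """The linear map R^n -> R with the given coefficients,
    i.e. an element of V* = L(V, R)."""
    return lambda v: sum(c * x for c, x in zip(coeffs, v))

def tensor(h, k):
    """The simple tensor h (x) k, acting on V x W by
    (h (x) k)(v, w) = h(v) * k(w)."""
    return lambda v, w: h(v) * k(w)

h = functional([1.0, 2.0])    # h(v) = v^1 + 2 v^2
k = functional([3.0, -1.0])   # k(w) = 3 w^1 - w^2

hk = tensor(h, k)             # a bilinear map R^2 x R^2 -> R
```

    Scaling either argument scales the result, so hk is bilinear, as the definition requires.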

  #5 · Quantime (The Doctor)
    Join Date Jun 2007 · Location United Kingdom · Posts 4,546
    What is the purpose of this metric tensor, and what can it be used for?
    "If you wish to make an apple pie from scratch, you must first invent the universe". - Carl Sagan

  #6 Re: Metric tensor · Guitarist (Moderator)
    You must have missed this bit:
    Quote Originally Posted by Guitarist
    Now recall that g<sub>ij</sub> = g(e<sub>i</sub>, e<sub>j</sub>) encodes the "angle" between the basis vectors e<sub>i</sub> and e<sub>j</sub> when i ≠ j, while g(v,v) = ||v||<sup>2</sup> is the squared "length" of v.

    So g<sub>ij</sub> is the metric tensor.

  #7 · Quantime (The Doctor)
    Wouldn't that be proportional?
    "If you wish to make an apple pie from scratch, you must first invent the universe". - Carl Sagan
    Reply With Quote  
     

  #8 · Guitarist (Moderator)
    Er...what's proportional to what, exactly?

  #9 · river_rat (Forum Professor)
    Hey Guitarist

    Did you learn the tensor product as a universal property first time round or as an explicit construction?

    The political vultures are already circling here regarding the rugby and Jake White etc. He might be coaching England soon!

  #10 · Guitarist (Moderator)
    Quote Originally Posted by river_rat
    Did you learn the tensor product as a universal property first time round or as an explicit construction?
    The latter. I only bumped into the universal property when a pal and I started talking about category theory. Now that is an interesting subject; I have to read more. Kinda hard to explain without commutative diagrams, though.

    The political vultures are already circling here regarding the rugby!
    But why? You won the world cup! What more do the establishment want?

  #11 · Guitarist (Moderator)
    I'm surprised nobody has challenged me to explain co- and contravariance. Having introduced you to the vector space of linear maps on a vector space, I think I can use these to explain this rather confusing concept (I may not get completely finished, but let's make a start).

    First we need to know we're all on the same page regarding notation for composite maps. You should know this, but anyway.....

    Let A, B and C be sets, and let f: A → B and g: B → C.

    Evaluating f at some a in A I will find f(a) in B. Evaluating g at f(a) I will find g(f(a)) in C. Then the composite function A → C is written g ⋅ f (the rule of parentheses: the map applied first is written rightmost). This is important in what follows.

    OK. So let A, B and C now be vector spaces (sorry about the switch). Let L(A, B) be the space of all linear maps A → B and let L(A, C) be the space of all linear maps A → C. In general let L(A, -) be the space of all linear maps from A.

    Now note this. We have vector spaces A, B etc. and maps (operators, transformations) A → B etc between them. We also have vector spaces L(A, B) etc, whose elements are simply maps, and we expect there to be linear maps between these as well. Cripes!! Maps of maps?? Yup.

    Suppose now that f: A → B and h: B → C. If we have a map B → C, we expect there to be some "h-related" map L(A, B) → L(A, C). I say "h-related" because h acts on B, not on L(A, B). Let's call this h<sub>∗</sub>: L(A, B) → L(A, C).

    We defined f as an element in L(A, B).

    I will then find that h<sub>∗</sub>(f) is an element in L(A, C). But (since f: A → B and h: B → C) this is just the same as h ⋅ f!

    Under these circumstances, we will say that vectors in the space L(A, -) are covariant. h<sub>∗</sub>(f) = h ⋅ f is called the "pushforward" of f along h.
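    Here is a tiny sketch of the pushforward, with ordinary Python functions standing in for linear maps (all the maps below are made up for illustration):

```python
def compose(g, f):
    """g . f : apply f first, then g (the rule of parentheses)."""
    return lambda x: g(f(x))

def pushforward(h):
    """h_* : L(A, B) -> L(A, C), defined by h_*(f) = h . f,
    i.e. h composes from the LEFT."""
    return lambda f: compose(h, f)

# Made-up linear maps, with A = B = C = R for simplicity
f = lambda a: 2.0 * a   # f: A -> B
h = lambda b: 3.0 * b   # h: B -> C

hf = pushforward(h)(f)  # h_*(f) = h . f, an element of L(A, C)
```

    Note that a map B → C induced a map L(A, B) → L(A, C) in the same direction, which is what "covariant" is getting at.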

    As I thought, got to run. Next up - contravariance.

  #12 · Guitarist (Moderator)
    OK, I had to go earn my keep, where woz I? Ah yes, contravariance.

    Suppose that L(_, A) denotes the space of all linear maps into A.

    Consider L(B, A) and L(C, A) where, as before (I think) h: B → C.

    Suppose that θ is a vector in L(B, A), i.e. θ: B → A. Can I use h and θ to find a map C → A? Um, no: θ has codomain A while h has domain B, and θ has domain B while h has codomain C, so neither h ⋅ θ nor θ ⋅ h makes sense.

    But... I have h: B → C and, say, k: C → A, so these do compose: k ⋅ h: B → A, a vector (map) in L(B, A). We'll call this the pull-back of k along h; formally, h*: L(C, A) → L(B, A) with h*(k) = k ⋅ h.

    Notice that in the covariant case I had h<sub>∗</sub>(f) = h ⋅ f; that is, whenever h<sub>∗</sub> takes a map as argument, h composes from the left, whereas here h*(k) = k ⋅ h; that is, whenever h* takes a map as argument, h composes from the right!

    So, in the present case, I'll say that vectors in the space L(-, A) are contravariant.

    Note that the vector space dual to V is V*, the space of all linear maps V → R, say, which is L(V, R); likewise the dual to W is W* = L(W, R), and so on. So, in this scheme, vectors are covariant and dual vectors (or covectors, if you prefer) are contravariant.
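    The pullback can be sketched the same way as the pushforward, with h now composing from the right (again, all the maps are made-up illustrations):

```python
def compose(g, f):
    """g . f : apply f first, then g."""
    return lambda x: g(f(x))

def pullback(h):
    """h* : L(C, A) -> L(B, A), defined by h*(k) = k . h,
    i.e. h composes from the RIGHT, the mark of contravariance."""
    return lambda k: compose(k, h)

# Made-up linear maps, with A = B = C = R for simplicity
h = lambda b: 4.0 * b    # h: B -> C
k = lambda c: 10.0 * c   # k: C -> A, a vector in L(C, A)

kh = pullback(h)(k)      # h*(k) = k . h, a vector in L(B, A)
```

    Here the induced map runs L(C, A) → L(B, A), against the direction of h: B → C, which is why the arrow is said to be reversed.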

    Now isn't that cute?

  #13 · river_rat (Forum Professor)
    Just check that, Guitarist. I thought tangent vectors were contravariant and one-forms were covariant, but I haven't done geometry in a while.

  #14 · Guitarist (Moderator)
    Yeah, this is an infamously vexed question, as this discussion reveals. Basically, it seems that mathematicians use my convention, whereas physicists use yours. My work-through showed, at the very least, that there is some merit in the former.

    But they are just that - conventions; nobody need get too heated either way.

    The main Wiki article is a mess, in my opinion, as the linked discussion illustrates.