1. Well, OK, maybe my notation doesn't bother you as much as it does me.

Here's the thing: elements of the Cartesian product of sets, say A×B, are the ordered pairs (a,b), with a from A and b from B. In an inner product space, the scalar (v,w) is written just like an ordered pair, an element of V×V. But not all spaces have an inner product defined, so if, for one such space, I write (x,y) as an element of W×W, we can take it that (x,y) is not an inner product.

I'm going to change notation for a bit, and write [v,w] when I want to specify that this is an inner product. Where it makes no difference, I will simply use (v,w), which may or may not be an inner product; it's irrelevant for the purpose at hand (which, I think, will usually be the case).

The other problem that may have escaped you is this. I'm writing fv for the element of V*, the vector dual to the vector v in V. Suppose, though, I now say that fv(v) = ||v||² for all v in V; then it isn't clear that fv1 is dual only to v1 unless I insist on it explicitly, which is a very cumbersome way to proceed. But here I don't see an easy way out.

We'll have to live with it, so let me emphasize: the "f" has no real meaning, and simply means "some functional" that takes V → R. As the spaces V and V* are isomorphic (provided only that V is finite-dimensional), this implies that for each v in V there is one and only one fv in V*. Maybe you already made that leap; if so, good on ya.

And finally: the form fv⊗fw: V×W → R I wrote earlier was for illustration only, and not a good illustration at that - I urge you to erase it from your mind, as it muddies the waters.

Tensors are formed exclusively from the multilinear elements of the space V⊗...⊗V⊗V*⊗...⊗V* acting on V*×...×V*×V×...×V, i.e. multiple copies of some space and its dual.

2. Originally Posted by Guitarist
We'll have to live with it, so let me emphasize: the "f" has no real meaning, and simply means "some functional" that takes V → R. As the spaces V and V* are isomorphic (provided only that V is finite-dimensional), this implies that for each v in V there is one and only one fv in V*. Maybe you already made that leap; if so, good on ya
thinking in terms of the inner product fv(v) meant that i assumed that from the start. since the inner product isn't the only bilinear function of that sort, i guess it can be taken that all the other bilinear functions satisfy that condition, or i got lucky.

Originally Posted by Guitarist
Tensors are formed exclusively from the multilinear elements of the space V⊗...⊗V⊗V*⊗...⊗V* acting on V*×...×V*×V×...×V, i.e. multiple copies of some space and its dual
now that the other 'illustration' has been put out of mind shall we be going into the above in more detail?

3. OK, but first let's be clear what's going on here. The issue of inner products is totally irrelevant here; if it helps us to interpret fv: V → R as an inner product, it will do no harm, as far as I can see. So let's stick to the simple case:

The tensor product of all elements of V and all elements of V*, of which, say, v⊗fv is one, acts on the Cartesian product of their target spaces V*×V. Say (fu, u) is an element in V*×V. It is in the nature of the tensor product v⊗fv: V*×V → R that this operation yields v⊗fv(fu, u) = v(fu)fv(u).

We see straight away that a multilinear function formed from n factors takes n arguments, one from each space in the Cartesian product. This means that, in the present case, we have defined a rank 2 tensor, v⊗fv.
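A quick numerical sketch may help; here I identify covectors with arrays via the standard dot product (an assumption purely for illustration, not part of the definition):

```python
import numpy as np

# Identify each covector f_x with the map w -> x . w (standard dot product).
v = np.array([1.0, 2.0, 3.0])
u = np.array([4.0, 0.0, -1.0])

def dual(x):
    """Return the covector f_x dual to x under the standard inner product."""
    return lambda w: np.dot(x, w)

f_v, f_u = dual(v), dual(u)

# The rank 2 tensor v (x) f_v acting on the pair (f_u, u):
#   (v (x) f_v)(f_u, u) = v(f_u) * f_v(u) = f_u(v) * f_v(u)
result = f_u(v) * f_v(u)
print(result)  # (u . v) * (v . u) = 1.0 * 1.0 = 1.0
```

Note the output is a single real number, as the map V*×V → R promises.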

We also see straight away that v⊗u: V*×V* → R is also a rank 2 tensor, so, as these guys are clearly different, we'll call them type (1,1) and type (2,0) tensors, respectively, it being understood that the first slot denotes the vector component, the second the dual vector component. Now, we could leave it here, but for two things: nobody else in the world seems willing to, and with a small adjustment to notation, tensor algebra becomes, superficially at least, rather easy.

So, let's at least make a start. We agreed to describe a vector as v = a_i e_i. Let's now follow the herd, and raise the index on a, the scalar component, thus: v = a^i e_i (the index on e, the basis vector, stays down). But, since, for a given basis, all we really care about are the components a^i, let's drop all reference to the basis: v = a^i. But we also agreed that the choice of basis is arbitrary. As we don't want our vector v to change merely because of our promiscuity with bases, the a^i don't mean very much; they could be anything, depending on the basis.

So, we may as well write v^i as our vector, it being understood that this means our vector is such that, for some choice of basis, there is an induced choice of scalar components such that v^i is fully determined by the ith component on the ith basis vector.
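As a sketch of the component bookkeeping (with a hypothetical, non-standard basis chosen just for illustration), the summation v = a^i e_i looks like:

```python
import numpy as np

# A deliberately non-standard basis for R^2, chosen only for illustration.
e = np.array([[1.0, 1.0],   # e_1
              [0.0, 1.0]])  # e_2
a = np.array([2.0, 3.0])    # the components a^1, a^2

# v = a^i e_i, with summation over the repeated index i:
v = np.einsum('i,ij->j', a, e)
print(v)  # [2. 5.]

# A different basis induces different components for the *same* vector:
e2 = np.eye(2)
a2 = np.linalg.solve(e2.T, v)  # components of v in the new basis
print(a2)
```

The point of the last two lines: the numbers a^i by themselves mean nothing until a basis is fixed.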

And finally: I said a rank 1, type (1,0) tensor is a vector. It is (probably, lemme check) not true that all vectors are type (1,0) tensors. So, in the case that our vector v^i is a type (1,0) tensor, let's emphasize that fact, and call it V^i. Yay! Our very first tensor, aren't we cool!

Out of puff, more later, maybe.

4. Originally Posted by Guitarist
The tensor product of all elements of V and all elements of V*, of which, say, v⊗fv is one, acts on the Cartesian product of their target spaces V*×V. Say (fu, u) is an element in V*×V. It is in the nature of the tensor product v⊗fv: V*×V → R that this operation yields v⊗fv(fu, u) = v(fu)fv(u).
hmmm. the right hand side of the equation v⊗fv(fu, u) = v(fu)fv(u) brings matrix multiplication to mind.

Originally Posted by Guitarist
And finally: I said a rank 1, type (1,0) tensor is a vector. It is (probably, lemme check) not true that all vectors are type (1,0) tensors. So, in the case that our vector v^i is a type (1,0) tensor, let's emphasize that fact, and call it V^i. Yay! Our very first tensor, aren't we cool!

Out of puff, more later, maybe.
so in the case of say v* → R, could we take our tensor v^i and use it to act on an element of V*, or must we insist on a cartesian product between v* and some zero vector space, and acquire v^i(fv) as a result? which looks oddly familiar.

5. Originally Posted by wallaby
hmmm. the right hand side of the equation v⊗fv(fu, u) = v(fu)fv(u) brings matrix multiplication to mind.
Why sure, good. But you rather quickly, say in the case of type (n,m) tensors, get into hypermatrices, things you can't picture, let alone draw.

so in the case of say v* → R, could we take our tensor v^i and use it to act on an element of V*, or must we insist on a cartesian product between v* and some zero vector space, and acquire v^i(fv) as a result? which looks oddly familiar.
I'll draw a discreet veil over "v* → R"; you know better than that. But you can think of a vector space as the Cartesian product of that space and a "zero space", whatever that might be, but why bother.

We have fv: V → R, u |→ fv(u) as a familiar result, as intended. What might be less familiar, because I rather skated over it, is the form v: V* → R, fu |→ v(fu). Can you see why, or should I go through it again?

I guess the point is, the above renditions are quite familiar, and the higher rank tensors are merely a generalization of this. I chose to argue from the higher rank case to the lower, as I thought it would be easier to follow. Others may disagree.

Anyway, good work.

EDIT: I think I may have given a slightly bum steer over the inner product. I don't think it's too serious, but we can talk about it if you want. The issue basically revolves around whether [ , ] in this simple form is bilinear or not. I now think this is only true over the real field. If we want full generality, I think we may have to insist that this is only true if [ , ] is an element of V*⊗V, that is to say, if an inner product exists, it is defined by some fu(v). As I say, I don't think it's too serious.

6. So, we now have some rather boring housework to do, so get your pinny on!

First, note that, for the multilinear tensor product of functions on the Cartesian product of spaces, there are two possible conventions (I confess I've been muddling them slightly). Let's stick with this: for each element of V acting as a function on V*, and each element of V* acting as a function on V, I will write, for example, that V⊗V⊗V* acts on V*×V*×V, it being understood that elements of V are vectors and elements of V* are now called covectors.

So, the above defines elements of V⊗V⊗V* as type (2,1) tensors, the first slot being reserved for vector components, the second for the covector components. Agreed?

We also agreed to write our type (1,0) tensor = vector = function: V* → R as V^i. By a similar (and not very interesting) argument, I hope nobody will dispute the reasonableness of writing a type (0,1) tensor = covector = function: V → R as V_j.

Let's get daring, and consider some arbitrary tensors A^p and B^q. We know what it means to add vectors - we get another vector, right? So A^p + B^q = C^r, another vector. Similarly for covectors.

This generalizes: A^p_q + B^r_s = C^t_u and so on. Note we are not changing the rank or type here.

What about multiplication? S'easy, just remember how we defined our tensors! A^i B^j = C^kl. So, multiplying two rank n tensors gives a rank 2n tensor. Hmm. What about adding a type (1,0) tensor and a type (0,1) tensor? No way; in general, tensors of different types cannot be added. (The model here is matrices.)

But we can multiply tensors of different types: A^ij B_k = C^pq_r (note I'm careful with my indices (= vector/covector components), as they are bound to change under these operations).

Now here's something cool, but familiar, I hope. The operation A^k B_k = C^h_h, by the above. It only remains for me to remind you that if, say, A^k = v and B_k = fu, you will have no difficulty seeing that this is ?? First one to tell me gets a sweetie (or a Toklas brownie, if they prefer!)

7. Originally Posted by Guitarist
So, we now have some rather boring housework to do, so get your pinny on!

First, note that, for the multilinear tensor product of functions on the Cartesian product of spaces, there are two possible conventions (I confess I've been muddling them slightly). Let's stick with this: for each element of V acting as a function on V*, and each element of V* acting as a function on V, I will write, for example, that V⊗V⊗V* acts on V*×V*×V, it being understood that elements of V are vectors and elements of V* are now called covectors.

So, the above defines elements of V⊗V⊗V* as type (2,1) tensors, the first slot being reserved for vector components, the second for the covector components. Agreed?
Agreed

Originally Posted by Guitarist
But we can multiply tensors of different types: A^ij B_k = C^pq_r (note I'm careful with my indices (= vector/covector components), as they are bound to change under these operations).

Now here's something cool, but familiar, I hope. The operation A^k B_k = C^h_h, by the above. It only remains for me to remind you that if, say, A^k = v and B_k = fu, you will have no difficulty seeing that this is ?? First one to tell me gets a sweetie (or a Toklas brownie, if they prefer!)
hmm well,
A^k B_k = v fu, however we want another tensor here and not a scalar, hence we have a tensor product.
v⊗fu = C^h_h. in this case C^h_h would be a rank 2 tensor, as defined earlier. that is, in the case that A^k = v, B_k = fu.

that's what i make of it.

8. Originally Posted by wallaby
hmm well,
AkBk=vFu, however we want another tensor here and not a scalar hence we have a tensor product.
Aha! But these are not incompatible statements - a scalar is a type (0,0) tensor! Remember that scalars, vectors (and tensors) not only result from actions of various sorts, they also act. Maybe the question was a little unfair without more info, sorry. You had the answer, then ran away from it. Observe:

A^k B_k = C^h_h = v⊗fu: V*×V → R, (fu, v) |→ v⊗fu(fu, v). By bilinearity, this is v(fu)fu(v) = fu(v)fu(v); we know each factor here is a scalar, and the product of two scalars is another scalar. As a type (0,0) tensor is defined to be a scalar, we conclude that C^h_h = C.
This "cancellation law" is quite general - the same index, raised and lowered, cancels exactly after summation (remember we're still assuming the summation convention). It's referred to as tensor contraction and is, as far as I'm aware, unique to tensors. There's a subtlety here we can, and maybe should, talk about another time.
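Contraction can be sketched numerically; again I treat A^k and B_k as plain arrays, which is an assumption for illustration only:

```python
import numpy as np

A = np.array([2.0, 1.0, 0.0])   # A^k, playing the role of v
B = np.array([1.0, 3.0, 5.0])   # B_k, playing the role of f_u

T = np.outer(A, B)                      # the type (1,1) tensor A^i B_j
contracted = np.einsum('k,k->', A, B)   # A^k B_k: the indices cancel

# The contraction is exactly the trace of the matrix of the (1,1) tensor:
assert np.isclose(contracted, np.trace(T))
print(contracted)  # 2*1 + 1*3 + 0*5 = 5.0
```

So contracting the rank 2 tensor v⊗fu collapses it to the scalar fu(v), as in the text.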

So, no sweetie for you, but as it was a brave attempt, suck on this stone instead.

9. Originally Posted by Guitarist
Aha! But these are not incompatible statements - a scalar is a type (0,0) tensor! Remember that scalars, vectors (and tensors) not only result from actions of various sorts, they also act. Maybe the question was a little unfair without more info, sorry.
so then could we define a vector space in terms of a field of rank (0,0) tensors? well actually, when i think about it, it would seem that if this were true it would be a special case of a tensor space. but that's if it's true and my mind is not running away.

Originally Posted by Guitarist

A^k B_k = C^h_h = v⊗fu: V*×V → R, (fu, v) |→ v⊗fu(fu, v). By bilinearity, this is v(fu)fu(v) = fu(v)fu(v); we know each factor here is a scalar, and the product of two scalars is another scalar. As a type (0,0) tensor is defined to be a scalar, we conclude that C^h_h = C.
This "cancellation law" is quite general - the same index, raised and lowered, cancels exactly after summation (remember we're still assuming the summation convention). It's referred to as tensor contraction and is, as far as I'm aware, unique to tensors. There's a subtlety here we can, and maybe should, talk about another time.

So, no sweetie for you, but as it was a brave attempt, suck on this stone instead.
having not known where that stone came from i shall save it for later.

10. Originally Posted by wallaby
so then could we define a vector space in terms of a field of rank (0,0) tensors?
Ah no. Don't confuse fields with spaces; they are not the same thing at all (though they can be related to each other via the notion of "bundles"). But you could have a special case of a scalar field that was a field of type 0 tensors, I suppose. But, just as I am reasonably confident that the definition of a rank 1 tensor as a vector is a one-way street (that is, not all vectors are type 1 tensors), so I think the same is true for rank 0 tensors and scalars. I haven't thought about it deeply very recently, but I suspect all will come good when (if) we get to tensor transformations.
having not known where that stone came from i shall save it for later.
Good, has the same effect on your teeth as sweets, but rather more rapid

11. Originally Posted by Guitarist
Ah no. Don't confuse fields with spaces; they are not the same thing at all (though they can be related to each other via the notion of "bundles"). But you could have a special case of a scalar field that was a field of type 0 tensors, I suppose. But, just as I am reasonably confident that the definition of a rank 1 tensor as a vector is a one-way street (that is, not all vectors are type 1 tensors), so I think the same is true for rank 0 tensors and scalars. I haven't thought about it deeply very recently, but I suspect all will come good when (if) we get to tensor transformations.
can't wait

12. Originally Posted by wallaby
can't wait
I'm afraid you're going to have to (wee, is that English?)

Anyway, I'll tell you why you have to wait. First, the transformation of tensors requires an understanding (at the very least) of partial derivatives, do you know what this means?

Second, tensor transformations are really only relevant when you think about manifolds. I suspect that river_rat is headed in that direction, and he is far more knowledgeable than I on this.

But finally, and please don't take offense (as you seem like a very nice and very smart guy). But... I cannot convince myself that you have fully understood the concept of a tensor space; don't just rush forward without that understanding, as you seem to have done a bit of that recently.

As an exercise for yourself (not me) try to put into words, no symbols, what your understanding is of a tensor space. I assure you, it ain't easy, but give it a try.

13. Originally Posted by Guitarist
I'm afraid you're going to have to (wee, is that English?)
it's about as english as woo

Originally Posted by Guitarist
Anyway, I'll tell you why you have to wait. First, the transformation of tensors requires an understanding (at the very least) of partial derivatives, do you know what this means?
i can't say i know it to the degree that would come from a formal education on the subject. however Gas Laws and the volumes of various shapes do seem to be common, and understandable, examples that i have seen here and there.

i guess if you asked me to define it in words i'd have to say:
"the derivative of a multivariate function with respect to one variable."
but i don't know how mixed partials work, never dealt with partial differential equations. i dare say i could flip through the calculus book beside me and find out.

Originally Posted by Guitarist
Second, tensor transformations are really only relevant when you think about manifolds. I suspect that river_rat is headed in that direction, and he is far more knowledgeable than I on this.
aren't manifolds more of a topology thing?
i'd have to say i know absolutely nothing of topology and couldn't follow the topology primer thread.

Originally Posted by Guitarist
But finally, and please don't take offense (as you seem like a very nice and very smart guy). But... I cannot convince myself that you have fully understood the concept of a tensor space; don't just rush forward without that understanding, as you seem to have done a bit of that recently.

As an exercise for yourself (not me) try to put into words, no symbols, what your understanding is of a tensor space. I assure you, it ain't easy, but give it a try.
upon reflection i'd have to agree with you.

i'll think about tensor spaces, or tensors in general, for a while.

14. Originally Posted by wallaby
aren't manifolds more of a topology thing?
i'd have to say i know absolutely nothing of topology and couldn't follow the topology primer thread.
They are more of a geometry thing actually Wallaby - and don't worry, i am planning to take a very pedestrian route into manifolds and will leave most of the hairy technical stuff out if you want

As soon as i've finished Urysohn's lemma with guitarist i will start a manifold thread

15. Originally Posted by river_rat
They are more of a geometry thing actually Wallaby - and don't worry, i am planning to take a very pedestrian route into manifolds and will leave most of the hairy technical stuff out if you want

As soon as i've finished Urysohn's lemma with guitarist i will start a manifold thread
geometry's good, if we can sacrifice most of the hairy technical stuff and still keep the moral of the story then i'm all for it.

16. Originally Posted by wallaby
i guess if you asked me to define it in words i'd have to say:
"the derivative of a multivariate function with respect to one variable."
Well, one variable at a time, but yes, that's it. I have no intention of giving a calculus lesson, but I just want to make you aware of a couple of things. Suppose you have a function dependent on two variables simultaneously, say f(x,y) = z. First hold y constant (and you know what that means for differentiation, right?), and differentiate with respect to x. One writes ∂z/∂x. Now hold x constant and differentiate with respect to y; you have ∂z/∂y.

First thing to say is that, in spite of their slightly alarming appearance, these guys are handled in exactly the same way as any other differentiation.

Second, note that, for each variable, you have a separate partial derivative; usually you know how to fit these guys together (add them, multiply, etc.), though occasionally you don't.

Third is that you often see ∂/∂x(f) and ∂/∂y(f) as notation; it means the same thing, of course.
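A symbolic sketch of the two partials of a sample f(x, y) (the function is made up for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')
z = x**2 * y + sp.sin(y)   # a sample f(x, y), chosen arbitrarily

dz_dx = sp.diff(z, x)   # hold y constant: 2*x*y
dz_dy = sp.diff(z, y)   # hold x constant: x**2 + cos(y)
print(dz_dx, dz_dy)
```

Each partial is an ordinary derivative taken with the other variable frozen, just as described above.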

Finally, observe this. Suppose we have an arbitrary vector transformation f on a Euclidean 3-space, with Cartesian coordinates x^i, i = 1, 2, 3. If we want to describe the increments of this transformation, derivatives would be nice, and it shouldn't be too hard to see that we want the derivative of f(x^i) = Σ ∂f/∂x^i.

So that's what we will want the partials for. Anyway, enough of that.

An obvious question is this: given a vector space of tensors, how do we find a basis for this space? On the face of it, the answer is simple. Consider the space of type (1,1) tensors, these being bilinear objects of the form v⊗fu. Then we know that for each v in V we have basis vectors e_i such that v = a^i e_i, and that for each fu in V* we have fu = b_j ε^j.

Then, by the construction of the space of type (1,1) tensors, we will have a basis given by the e_i⊗ε^j, with scalar coefficients a^i and b_j, respectively.
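A sketch in coordinates, taking the standard basis and its dual (both just the identity here, an assumption for simplicity):

```python
import numpy as np

n = 2
e   = np.eye(n)   # basis vectors e_i of V
eps = np.eye(n)   # dual basis covectors eps^j of V*

# The n*n basis tensors e_i (x) eps^j of the type (1,1) tensor space:
basis = [np.outer(e[i], eps[j]) for i in range(n) for j in range(n)]

# Any simple tensor v (x) f_u expands in this basis with coefficients a^i b_j:
a = np.array([1.0, 2.0])   # components of v
b = np.array([3.0, 4.0])   # components of f_u
T = sum(a[i] * b[j] * basis[n * i + j] for i in range(n) for j in range(n))
assert np.allclose(T, np.outer(a, b))
print(T)  # the matrix [[3, 4], [6, 8]]
```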

Annoyingly, perhaps, when dealing with basis transformations, it will turn out to be more profitable to return to coordinates. But later for that.

Hey! How about Ireland vs Pakistan in the World Cup? Much celebration chez nous, as my wife is Irish, with oceans of Guinness consumed!

Originally Posted by wallaby
i'll think about tensor spaces, or tensors in general, for a while.
How's that going? Any questions?

17. Originally Posted by Guitarist
How's that going? Any questions?
in a bit of a rush but i'll add this first and come back to the first bit.
well its given me something to do when i'm bored.
but as far as results go...well i'm not entirely sure.
if i've come up with anything so far this is most likely it:

Tensor Space: a collection of objects called tensors formed out of the tensor product of two vector spaces.
however i would like to have thought of a definition not requiring the use of tensor-related concepts, to whatever extent possible, and i would have liked to have thought of something more [can't pick the word] definitive or correct?

off to school to think some more.

EDIT: ok well now that i'm home i guess i should elaborate on my line of thinking so we can see where it may be wrong.

earlier we defined the tensor product of two vectors as being a second rank tensor. i'm wondering if you can apply the idea generally to the tensor product of two vector spaces in order to obtain a tensor space.

then there's the question of what about rank 0 tensors? might they be the tensor product of two fields, or can we do a little better than that?
i'm sure that by this definition, if it's right of course, we can say that the tensor space is constructed over the vector spaces of which it is a product.

anywho i think there's enough there to see what i think of tensor spaces.

the partials seem pretty simple so onto this lot.
Originally Posted by Guitarist
Finally, observe this. Suppose we have an arbitrary vector transformation f on a Euclidean 3-space, with Cartesian coordinates x^i, i = 1, 2, 3. If we want to describe the increments of this transformation, derivatives would be nice, and it shouldn't be too hard to see that we want the derivative of f(x^i) = Σ ∂f/∂x^i.

So that's what we will want the partials for. Anyway, enough of that.
the function f(x^i) i take it would describe one of the cartesian variables in terms of the others.

Originally Posted by Guitarist
An obvious question is this: given a vector space of tensors, how do we find a basis for this space? On the face of it, the answer is simple. Consider the space of type (1,1) tensors, these being bilinear objects of the form v⊗fu. Then we know that for each v in V we have basis vectors e_i such that v = a^i e_i, and that for each fu in V* we have fu = b_j ε^j.

Then, by the construction of the space of type (1,1) tensors, we will have a basis given by the e_i⊗ε^j, with scalar coefficients a^i and b_j, respectively.
that makes sense. although while on the topic of a basis i remember, back when we had just started and i looked into the kronecker delta a little more, i found the wikipedia article on it amongst other articles on tensors. i remember as soon as i saw the word tensor i stopped looking any further. so is there some connection between the Kronecker Delta and tensors?

Originally Posted by Guitarist
Annoyingly, perhaps, when dealing with basis transformations, it will turn out to be more profitable to return to coordinates. But later for that.

Hey! How about Ireland vs Pakistan in the World Cup? Much celebration chez nous, as my wife is Irish, with oceans of Guinness consumed!
missed that, trust assignments to isolate you from the news. enjoy the celebrations.

18. Originally Posted by wallaby
Tensor Space: a collection of objects called tensors formed out of the tensor product of two vector spaces.
however i would like to have thought of a definition not requiring the use of tensor-related concepts, to whatever extent possible, and i would have liked to have thought of something more [can't pick the word] definitive or correct?
Your definition of a tensor space is correct, good. As to your other comment, are you objecting to the use of the term "tensor product" in the definition? Like you think it's sorta self-fulfilling? Fine, call it the outer product if you want. Incidentally, when I said earlier that tensors can be multiplied, I should have said explicitly that this actually means using the tensor product thus: (v⊗fu)⊗(u⊗fv) = v⊗u⊗fu⊗fv: V*×V*×V×V → R

earlier we defined the tensor product of two vectors as being a second rank tensor. i'm wondering if you can apply the idea generally to the tensor product of two vector spaces in order to obtain a tensor space.
I don't understand your point; that's exactly what we did! Look: if I write f: X → Y, these being sets, I really mean take each x in X and find a y in Y s.t. f(x) = y. I work element-wise to find Y from X. Or: the Cartesian product of sets A×B takes each a in A and each b in B and pairs them off, one by one, through the whole set. Likewise, v⊗fu refers to each v in V and each fu in V*, which taken in all pairwise combinations make up the space V⊗V*, the tensor space.

then there's the question of what about rank 0 tensors?
Ok, I didn't introduce these very well. A vector space is a set |V| of abstract objects together with an associated field K. The spaces V and V*, being duals, must associate with the same field, just the one. If |V| and |V*| are empty sets, we are simply left with the field. Or you can just take as a definition that a type (0,0) tensor is a scalar.
i'm sure that by this definition, if its right of course, we can say that the tensor space is constructed over the vector spaces to which it is a product of.
Hmm, I think I'd be more inclined to say it is over V, rather than over V and V*. Not sure, though, lemme check.

so is there some connection between the Kronecker Delta and tensors?
Yes - it is a tensor! (of a rather special sort).
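For the curious, a minimal sketch of the Kronecker delta as a type (1,1) tensor (arrays again standing in for tensors, purely for illustration):

```python
import numpy as np

# delta^i_j = 1 if i == j else 0: as a matrix, the identity.
delta = np.eye(3)

v = np.array([1.0, 4.0, 9.0])
# Contracting delta^i_j with v^j returns v unchanged: delta acts as identity.
assert np.allclose(np.einsum('ij,j->i', delta, v), v)

# The full contraction delta^i_i gives the dimension of the space:
print(np.einsum('ii->', delta))  # 3.0
```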

19. Originally Posted by Guitarist
Your definition of a tensor space is correct, good. As to your other comment, are you objecting to the use of the term "tensor product" in the definition? Like you think it's sorta self-fulfilling? Fine, call it the outer product if you want. Incidentally, when I said earlier that tensors can be multiplied, I should have said explicitly that this actually means using the tensor product thus: (v⊗fu)⊗(u⊗fv) = v⊗u⊗fu⊗fv: V*×V*×V×V → R
yes it was a rather irrelevant and superficial statement to make, and yet for some reason i put it in anyway. tensor product or Boot, it really doesn't matter when we use V⊗V*.

Originally Posted by Guitarist
Ok, I didn't introduce these very well. A vector space is a set |V| of abstract objects together with an associated field K. The spaces V and V*, being duals, must associate with the same field, just the one. If |V| and |V*| are empty sets, we are simply left with the field. Or you can just take as a definition that a type (0,0) tensor is a scalar.
i see now

Originally Posted by Guitarist
Yes - it is a tensor! (of a rather special sort).
interesting
