Hi all. I'm intending to study GR and I need some good tips on tensors and Riemannian geometry. So, any tips? :D

Ya, well, I did attempt to explain tensors a year ago on this very site. I can't find the thread just now, but, as I recall, it came at the end of a long tutorial on vector spaces.
I would be willing to start up again in a new thread, but only on the assumption that you knew all there is to know about inner product spaces....
The Riemannian geometry would be less easy to explain (not that I know too much about it), but you would certainly need to know all about topological spaces and manifolds, as a minimum.
Are you cool on these "prerequisites"?
I warn you: at a mathematical level, GR is fiendishly hard. It is not for the ill-prepared!!
Would be really great if you could explain tensors. I have this book, see, and the very first chapter dives straight into tensors without properly saying what these things are supposed to be.
Yes, and I already said I would be willing to help. But you need to tell us
a) do you have a working knowledge of vector spaces;
b) do you know what an inner product space is;
c) are you familiar with dual vector spaces;
d) know what a Cartesian product is;
e) know what the tensor product is?
You may, if it is true, answer NO to all of the above; I would still be willing to help, but mainly by pointing to my earlier thread.
But I say again, the mathematics of GR is fiendish.
a) yes
b) no
c) no
d) yes
e) no
Thanks!
Faldo_Elrith,
I may be able to help a little, but more in the role of a fellow student.
Put quite simply, a tensor is a generalization of the idea of vectors and matrices, and if you are used to the index notation for vectors and matrices then tensors are a pretty natural extension of these.
It sounds like Guitarist is offering the more mathematically rigorous explanation.
By the way, what is the GR text you are talking about?
Guitarist,
I am pretty much a self-taught novice when it comes to GR, and so there may be much that I can learn from you. I worked through some of the basic calculations in Schutz's book, but not all that recently. I have a masters from the University of Utah, but they really didn't have much to offer in GR. When it comes to playing a teaching role in relativity I pretty much stop at SR.
> b) do you know what an inner product space is;
Surely, Faldo_Elrith, you know what an inner product is. Guitarist, isn't this pretty much just a vector space plus an inner product?
> c) are you familiar with dual vector spaces;
Off the top of my head, I would guess this is basically an extension of the vector space (or rather an inner product space) to include the distinction made between row and column vectors (and where the inner product is an operation only between the two different types of vectors).
> d) know what a Cartesian product is;
I believe this is just the space of ordered pairs of elements of the two spaces you are taking the Cartesian product of.
> e) know what the tensor product is?
This is just a tensor formed by multiplying the components of two tensors, where none of the indices of any of the tensors are the same. An outer product of a row vector with a column vector to produce a matrix is an example of this.
The usual multiplication of matrices and vectors, including the inner product of a row vector with a column vector, is a contraction of a tensor product.
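mitch's outer product and contraction can be checked numerically. A quick sketch (my own illustration with made-up vectors, assuming numpy is available):

```python
import numpy as np

# Illustration (not from the thread): the outer product of two vectors
# is a tensor product with no repeated indices; contracting it (summing
# over a repeated index) recovers the ordinary inner product.
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

outer = np.outer(u, v)     # a matrix: outer[i, j] = u[i] * v[j]
inner = np.trace(outer)    # contraction over both indices: sum_i u[i] * v[i]

print(outer.shape)   # (3, 3)
print(inner)         # 32.0, the same as u @ v
```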
OK, mitch & Faldo; So we all know what we mean by a vector space. Good.
Let's first review what we mean by an inner product space.
Let $V$ be a vector space, and let $u, v \in V$. Let $V \times V$ be the Cartesian product on this space (strictly, it's the Cartesian product on the underlying set. Don't worry.)
Then we have that $(u, v) \in V \times V$.
I define the inner product on $V$ as the map $V \times V \to \mathbb{R}$ given by $(u, v) \mapsto \langle u, v \rangle \in \mathbb{R}$. In other words, the inner product of 2 vectors is a number (it need not be real, by the way).
An inner product space (IPS) is just a vector space where this construction makes sense.
Let us now find some gadget that takes a vector as argument and returns a number, say $\varphi(v) = a \in \mathbb{R}$. This guy is called a "linear functional", and it (and its mates) have the property that
$\varphi(u + v) = \varphi(u) + \varphi(v)$, and $\varphi(\alpha v) = \alpha\,\varphi(v)$ for any scalar $\alpha$.
This (with a couple of trivial extras) is sufficient to define the linear functionals as elements in a vector space. It is called the dual space $V^*$ to $V$, and, where $V$ is an IPS, we have the following:
$\langle u, v \rangle = \varphi_u(v)$ for any appropriate choice of $\varphi_u \in V^*$.
One says that $\varphi_u$ is the dual vector to $u$. This is a one-to-one correspondence between $V^*$ and $V$: an isomorphism.
Lemme know if we're all cool with this so far, as it is absolutely fundamental
P.S Oh mitch I believe Schutz uses a slightly different notation/terminology for dual vectors. Doesn't he call them 1forms?  we can talk about that if you like
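The correspondence between a vector $u$ and its dual $\varphi_u$ is easy to play with in code. A minimal sketch (my own illustration; the particular vectors and the dot product on R^3 are assumptions, not anything from the thread):

```python
# Sketch: in R^3 with the usual dot product, every vector u yields a
# linear functional phi_u with phi_u(v) = <u, v>.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dual(u):
    """The dual vector phi_u corresponding to u."""
    return lambda v: dot(u, v)

u = [1.0, -2.0, 3.0]
phi_u = dual(u)

v = [4.0, 0.0, 1.0]
w = [2.0, 5.0, -1.0]
vw = [a + b for a, b in zip(v, w)]   # the vector v + w
v3 = [3.0 * a for a in v]            # the vector 3v

# Linearity of phi_u:
print(phi_u(vw) == phi_u(v) + phi_u(w))   # True
print(phi_u(v3) == 3.0 * phi_u(v))        # True
```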
> Originally Posted by Guitarist
I am confused by this last parenthetical. Is not your funny-looking R the set of real numbers? Are you restricting yourself to inner products that are real, but saying that you do not have to?
> Originally Posted by Guitarist
Ok, now you have lost me.
Do you mean that for any $\varphi \in V^*$ there exists a $u \in V$ such that $\varphi(v) = \langle u, v \rangle$ for every $v \in V$?
Are you then implying that this existence of a $u$ for every $\varphi$ represents a 1-1 correspondence between this vector space of linear functionals and the original vector space $V$? That is, that given any two linear functionals $\varphi_u$ and $\varphi_v$ and the corresponding $u$ and $v$ (such that for any $w \in V$, $\varphi_u(w) = \langle u, w \rangle$ and $\varphi_v(w) = \langle v, w \rangle$), you can prove that $u = v$ if and only if $\varphi_u = \varphi_v$?
So because of this 1-1 correspondence you are now calling this vector space of linear functionals the dual vector space of $V$ (denoted by $V^*$)?
At first this index of $u$ on $\varphi$ in $\varphi_u$ confused me, but I guess this is just a way of representing this 1-1 correspondence, and you would read $\varphi_u$ as the linear functional corresponding to $u$.
> Originally Posted by Guitarist
You are correct, but I am hardly glued to that one book. I have studied Wald and have struggled with Hawking & Ellis (The Large Scale Structure of Space-Time), so I am familiar with their terminology, though I do not claim anything approaching a mastery of these texts.
> Originally Posted by mitchellmckain
Are you restricting yourself to inner products that are real but saying that you do not have to?
Yes; for now it is $\mathbb{R}$, just for simplicity. If we were in the Math forum, I would simply write the ambiguous field $\mathbb{F}$. But, since we are in Physics, we require all "measurements" to be real numbers (yes, the inner product and its big brother the vector norm are measurements).
> Do you mean that for any $\varphi \in V^*$ there exists a $u \in V$ such that $\varphi(v) = \langle u, v \rangle$?
Um, not quite, but close. Turn it around: for any $\varphi \in V^*$ such that $\varphi(v) = a$, there is some $u \in V$ such that $\langle u, v \rangle = a$.
It was merely to emphasize this correspondence that I wrote $\varphi_u$; it is not standard.
> Are you then implying that this existence of a $u$ for every $\varphi$ represents a 1-1 correspondence between this vector space of linear functionals and the original vector space $V$?
I would prefer to think of it as the other way round: the existence of some $\varphi_u$ for each $u$. But hey, it's an isomorphism, so who cares? You are not wrong!
> that given any two linear functionals $\varphi_u$ and $\varphi_v$ and the corresponding $u$ and $v$, you can prove that $u = v$ if and only if $\varphi_u = \varphi_v$
Yep, spot on! Nice work.
> So because of this 1-1 correspondence you are now calling this vector space of linear functionals the dual vector space of $V$ (denoted by $V^*$)?
Correct.
Then we can either wait for Faldo_Elrith's reply or you can continue, because I am with you.
Wow this is great!
Guitarist, can you explain what you mean by "linear functionals"? I'm with you up to that bit there.
Cheers mate.
> Originally Posted by Faldo_Elrith
Guitarist, can you explain what you mean by "linear functionals"?
Well, since Guitarist has not answered right away, I will attempt to answer that. I think it basically means these properties, which Guitarist already gave:
> Originally Posted by Guitarist
$\varphi(u + v) = \varphi(u) + \varphi(v)$, and $\varphi(\alpha v) = \alpha\,\varphi(v)$
Notice that matrix multiplication has these same properties. That is, if $A$ and $B$ are matrices then $(A + B)v = Av + Bv$, and for a scalar $a$ we also have that $A(av) = a(Av)$.
> Originally Posted by Faldo_Elrith
Guitarist, can you explain what you mean by "linear functionals"?
Yeah, I skated over that a bit. mitch was as right as he could be, given the limited information I provided.
First this, though. A "functional" is really just a special sort of function, one that takes a vector (or a function, for that matter) as input and returns a scalar (a number) as output.
I tend to be sloppy, and use the word "map" for anything that can be thought of as an arrow.
Now, to show linearity, we really need this. If any mappings whatever, say $f: V \to W$, satisfy
$f(\alpha u + \beta v) = \alpha f(u) + \beta f(v)$ for any scalars $\alpha, \beta$, one says they are linear.
So. When we are handed, say, a vanilla real-valued function $f$, we would be inclined to write $f: \mathbb{R} \to \mathbb{R}$. We would usually take this to mean that this function acts on each and every real number and returns another real number.
In the present case, we have a whole vector space $V^*$ of linear functionals; in fact we have one such functional for each input vector. So we may write it like this:
There is a mapping $\varphi: V \to \mathbb{R}$ such that, for all $v \in V$, we will have $\varphi(v) \in \mathbb{R}$.
Now, if an inner product is defined on $V$, by the isomorphism $V \cong V^*$ I can always find some $\varphi_u$ such that $\varphi_u(v) = \langle u, v \rangle$.
So, this is all just a repeat of what I said in an earlier post. For novelty, let me add this:
$V^*$ is a perfectly respectable vector space. As such, it is entitled to its own dual space, which we may write as $V^{**}$. We will expect the action of $V^{**}$ on $V^*$ to be similar to the action of $V^*$ on $V$, that is, to return a scalar.
Now, it is a classic result in linear algebra that, whereas the isomorphism $V \cong V^*$ is not "natural" (in the sense that it depends on an arbitrary choice of basis for $V$), there is a natural isomorphism $V \cong V^{**}$.
This is going to allow us to do something useful
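Before moving on, the natural map itself can be written down concretely. A sketch (my own illustration): the map $V \to V^{**}$ sends a vector $v$ to the functional "evaluate at $v$", and writing it down needs no basis and no inner product, which is what makes it "natural".

```python
# Sketch: v** is the functional "evaluate your argument at v".
def to_double_dual(v):
    return lambda phi: phi(v)   # v**(phi) = phi(v)

v = [1.0, 2.0, 3.0]
phi = lambda x: x[0] - x[2]     # some covector on R^3 (made up)

v_dd = to_double_dual(v)
print(v_dd(phi) == phi(v))      # True: v**(phi) = phi(v)
```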
> Originally Posted by Guitarist
there is a natural isomorphism $V \cong V^{**}$
Doesn't this have to do with transformation properties, like the difference between covariant and contravariant?
So for a transformation $f$ on $V$, sending $v \mapsto f(v)$:
the linear functionals must transform like $\varphi \mapsto \varphi \circ f^{-1}$,
so that $(\varphi \circ f^{-1})(f(v)) = \varphi(v)$.
Doesn't this difference in transformation properties result in making the commutativity-diagram requirement for natural isomorphisms impossible if it is between vector spaces of objects that transform in these two different ways?
Guitarist, I still don't fully get you.
> Originally Posted by Guitarist
Well, that bit I understand. But elsewhere it seems that you are also trying to add functionals! How can you do that? You can add vectors, you can add scalars, but you CANNOT add functions.
> Originally Posted by Faldo_Elrith
You can add vectors, you can add scalars, but you CANNOT add functions.
You can, if you properly define your operations. That was what Guitarist was doing here:
> Originally Posted by Guitarist
$\varphi(u + v) = \varphi(u) + \varphi(v)$, and $\varphi(\alpha v) = \alpha\,\varphi(v)$
(The second equation is not very well written; it should be $\varphi(\alpha v) = \alpha\,\varphi(v)$ for all scalars $\alpha$ and all $v \in V$.) Now, you have these functions $\varphi: V \to \mathbb{R}$ (or replace $\mathbb{R}$ with another field if you're working with non-real vector spaces). Let $V^*$ be the set of all such functions. Now define addition and scalar multiplication on $V^*$ as follows: for any $\varphi, \psi \in V^*$ and $\alpha \in \mathbb{R}$, define $(\varphi + \psi)(v) = \varphi(v) + \psi(v)$ and $(\alpha\varphi)(v) = \alpha\,\varphi(v)$ for any $v \in V$. It then follows that $V^*$ is a vector space with respect to these operations. The members of this vector space are called linear functionals, and this vector space is called the dual space of V.
Well, that’s how I’d explain it – but I hope I’ve not made things more complicated than before.
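Jane's pointwise definitions can be sketched directly in code (my own illustration; the helper names `dual`, `add`, and `scale` are made up for this example):

```python
# Sketch of the pointwise operations on functionals.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dual(u):
    return lambda v: dot(u, v)

def add(phi, psi):
    return lambda v: phi(v) + psi(v)   # (phi + psi)(v) := phi(v) + psi(v)

def scale(a, phi):
    return lambda v: a * phi(v)        # (a.phi)(v)     := a * phi(v)

u, w = [1.0, 2.0], [0.0, -1.0]
v = [3.0, 4.0]
uw = [a + b for a, b in zip(u, w)]

phi, psi = dual(u), dual(w)

# The sum of the functionals of u and w is the functional of u + w:
print(add(phi, psi)(v) == dual(uw)(v))                      # True
print(scale(2.0, phi)(v) == dual([2.0 * a for a in u])(v))  # True
```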
Whoa there mitch, slow down.
We are not talking transformations here. We can come to them in due course, but we have a bit of work ahead of us yet. (Actually, in the present context, I recognize 2 different, but subtly related, notions of "transformation": linear transformations, aka linear operators, and coordinate transformations.)
While we're at it, you will never hear the words co- and contravariant in this context from me. So get used to it; I have my reasons!!
So we have $V \cong V^{**}$ as a natural isomorphism. Without too much bending of the rules, this allows me to use $V$ whenever I see $V^{**}$. Let's do that.
By the definition of the dual space, we expect that $V^{**} = (V^*)^*$, which I will exchange for $V$.
With a grinding of gears, allow me to form the Cartesian product $V \times V$, whose elements ("vectors") are the ordered pairs of the form $(u, v)$, and ask what sort of map can act on these guys.
Recall we had a definition of linearity of a map earlier. We are going to insist that the map that acts on $V \times V$ is linear in each argument when the other argument is held constant; this is called bilinearity. I shan't give a technical rundown, as the bilinearity we shall be dealing with will be very familiar to you.
So, in the present case, for all $u, v \in V$ I define the bilinear map $\varphi \otimes \psi: V \times V \to \mathbb{R}$ by $(\varphi \otimes \psi)(u, v) = \varphi(u)\,\psi(v)$.
Now this all looks terribly grown-up, doesn't it? I can assure you it is childishly simple. Look:
We started with a gadget, a dual vector, whose pleasure in life is to look at a single vector and offer up a number. I then jammed 2 such gadgets together in such a way that the resulting construction can look at a pair of vectors and offer up a single number.
And that single number turns out to be the simple arithmetic product of the numbers obtained from the vectors individually!!
Now we defined our dual vectors to be linear functionals, which simply means if I scale my input vector by 2, say, I will find my output number scaled by 2.
The number my functional offers for (u, v) is, say, xy = z. Then I will have that (2u, v) gives (2x)y = 2(xy) = 2z. Likewise, for (u, 3v) I will have x(3y) = 3(xy) = 3z
We learned this in school.
Anyway, I now want to give my bilinear map a name.
I know: I'll call it a TENSOR!! In fact, it is a type (0, 2) tensor.
P.S. Gotcha Jane.
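The "two gadgets jammed together" construction is only a few lines of code. A sketch (my own illustration; the particular covectors are made up):

```python
# Sketch: two dual vectors give a type (0, 2) tensor, i.e. a bilinear
# map on pairs of vectors.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dual(u):
    return lambda v: dot(u, v)

def tensor(phi, psi):
    """(phi (x) psi)(u, v) = phi(u) * psi(v)"""
    return lambda u, v: phi(u) * psi(v)

phi = dual([1.0, 0.0])
psi = dual([0.0, 1.0])
T = tensor(phi, psi)

u, v = [2.0, 3.0], [5.0, 7.0]
u2 = [2.0 * a for a in u]
v3 = [3.0 * a for a in v]

print(T(u, v))                   # 14.0 = phi(u) * psi(v) = 2.0 * 7.0
print(T(u2, v) == 2 * T(u, v))   # True: linear in the first slot
print(T(u, v3) == 3 * T(u, v))   # True: linear in the second slot
```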
> Originally Posted by Faldo_Elrith
You can add vectors, you can add scalars, but you CANNOT add functions.
I don't know where you got that from. PROVIDED ONLY that two functions have the same domain and codomain, and are defined over the same range, they can be added, subtracted, multiplied and divided.
> Originally Posted by Guitarist
PROVIDED ONLY that two functions have the same domain and codomain, and are defined over the same range, they can be added, subtracted, multiplied and divided.
No, you can't; not for functions in general, anyway.
When your codomain is equipped with addition, you can define the "sum" of two functions in the usual way (by setting the image of each point under the sum of two functions to be the sum of its images under the functions). This is the case with linear functionals, whose codomain is a field, so linear functionals can be added.
But if your codomain is just a set with no structure or operation, then adding functions wouldn't make any sense. Perhaps this was where Faldo_Elrith was confused. :?
Jane is right, of course. I was talking about functions into a vector space or field. Sorry for my inappropriate assertiveness.
Sorry, I am trying hard not to derail the discussion by going ahead of everyone, but when I have nothing else to do I find myself researching and mulling over your hints and suggestions, which are all too likely to provoke questions on my part.
> Originally Posted by Faldo_Elrith
You can add vectors, you can add scalars, but you CANNOT add functions.
Maybe what is bugging Faldo_Elrith is that there doesn't necessarily seem to be any inherent meaning to adding functions in general, such that this requirement we keep stating, $(\varphi + \psi)(v) = \varphi(v) + \psi(v)$, just seems to be a DEFINITION of what we mean by adding functions rather than a requirement we have imposed on something. I mean, like, what does (cos + sin) mean? We can say that it means adding the results of these functions, but then what is the requirement really imposing?
I think the answer is that we are imposing a requirement on the vector space that we are saying these linear functionals form. We are saying that there is a correspondence between the addition we have going on in the vector space of linear functionals and the usual sort of addition of real numbers that the linear functionals map to.
Does that help?
> Originally Posted by mitchellmckain
I think the answer is that we are imposing a requirement on the vector space that we are saying these linear functionals form.
Yeah, but Jane had it spot on. You, Faldo, and she are right: it makes no sense to add functions UNLESS the definition $(\varphi + \psi)(v) = \varphi(v) + \psi(v)$ makes sense, that is, unless the codomain of $\varphi$ is a space that admits addition (plus identity) of the range elements $\varphi(v)$ and $\psi(v)$.
These spaces will, in general, be vector spaces or fields.
So, I was wrong. Sorry about that (do I shoot myself now, or will later suit you?)
OK, I’m plodding slowly through all of this. So you can define addition and scalar multiplication of linear functionals and turn the set of linear functionals into a dual space. Yes? Right? So far so good.
But arhhh, ughhh ...
> Originally Posted by Guitarist
there is a natural isomorphism $V \cong V^{**}$
Can you please explain what you mean by that? So sorry for being such a thicko.
> Originally Posted by Faldo_Elrith
Can you please explain what you mean by that?
Well, I think this means that $V^{**}$ is the space of linear functionals that operate on the members of $V^*$, mapping them to the set of reals, and that this is a vector space of linear functionals satisfying the same linearity conditions. Furthermore, I suppose he is suggesting that the natural isomorphism between $V^{**}$ and $V$ allows him to use $V$ itself as a representation of $V^{**}$, so that the members of $V$ can be thought of as representing the functionals of $V^{**}$ that map the linear functionals in $V^*$ to the set of reals.
What I am not completely clear on is the reason why the naturality of the isomorphism is a requirement (or desirable) for this. I suppose it has something to do with the commutativity requirement for naturality. That is, if $z: V \to V^{**}$ is the natural isomorphism we are talking about, then for every morphism $f$ on $V^*$, $z$ commutes with $f$. But what doesn't make sense to me is that this seems to presume that the members of $V$ are already functors that operate on $V^*$.
Wait, UNLESS you think of this as a reverse of the fact that the members of $V^*$ operate on $V$. Hmmmm... What I mean is, if we define $z(v)(\varphi) = \varphi(v)$ for $v \in V$ and $\varphi \in V^*$,
which means we have $z(v) \in V^{**}$ for $v \in V$, where $z$ maps $V$ into $V^{**}$,
and that means that the commutativity condition can be written $z \circ f = f^{**} \circ z$.
Hmm...
so if $u = f(v)$
and $\xi = z(v)$,
we have $z(f(v)) = z(u)$
and $f^{**}(z(v)) = f^{**}(\xi)$, and these must be equal.
Does that make sense?
First let me say that the isomorphisms require that our spaces be finite-dimensional; a bad omission on my part.
> Originally Posted by mitchellmckain
Does that make sense?
I confess your parentheses had me scratching my head a bit, so I am not certain, but I don't think this can be quite right. There is a commutativity requirement, but I don't think this is it.
Like, I think we are getting muddled between elements of vector spaces and their properties as real-valued maps. For example, if $\varphi \in V^*$ and $v \in V$, then $\varphi(v) \in \mathbb{R}$.
How can $v$ act on a real number?
So, let's drop the variable $v$. You want $z \circ f$ to make sense? But $f(\varphi) \in V^*$, which is not in the domain of $z$.
Anyway, as far as I know, no such proof is possible.
So, we could go on and prove that $V \cong V^{**}$ is a natural isomorphism, but then you and Faldo would still have to take it on trust that, given such a natural isomorphism, for every occurrence of $V^{**}$ I may substitute $V$.
All I can say to that is this: it is common in mathematics to interchange mathematical objects that are naturally isomorphic. In fact, I would go so far as to say that, what we might think of as being strict identity between objects is often not attainable, and natural isomorphism is the best we can hope for.
Sorry this is such a crappy post, I feel like shit (chest infection). If I am still alive tomorrow, which seems unlikely, I will try to be more helpful
I'll give a shiny nickel to whoever can show that $V \cong V^{**}$ naturally.
And let's keep this basic; no category theory, Guitarist.
> Originally Posted by Guitarist
There is a commutativity requirement, but I don't think this is it.
I understand what you mean; there are ambiguities in the way I have expressed things. Perhaps it would help if I write the commutativity requirement like this:
$z(f(v))(\varphi) = z(v)(\varphi \circ f)$
And this shows clearly the trouble that I was having, which is that $z(v)$ is $v$, which we do not have defined as acting on $V^*$; which was why I made the suggestion that, since $\varphi$ maps $v$ to a real number, there is a sense in which $v$ maps $\varphi$ to that same real number.
Thus I could rewrite that last part of my post like this:
$v(\varphi) = \varphi(v)$,
and $f(v)(\varphi) = \varphi(f(v))$.
Now the reason this makes sense to me is that, in actual practice in GR, the things we are talking about in $V$ are vectors and the things in $V^*$ are covectors, and while the covectors operate on the vectors by multiplying (inner product) on the left, the vectors can operate on the covectors by multiplying on the right.
Perhaps the difficulty here is just that of finding a way to write these things in so generalized a manner that it is also unambiguous, for I notice that it is commutativity diagrams that are generally used to deal with this commutativity requirement in what I have been reading when looking up natural isomorphisms. But I have not been finding those very transparent, which is why I have been trying to write it in the manner that I am more used to.
> Originally Posted by Guitarist
All I can say to that is this: it is common in mathematics to interchange mathematical objects that are naturally isomorphic.
LOL. My undergraduate degree was math but my graduate degree is physics. I can choose to be the physicist rather than the mathematician, for whom the simple reason that it works is all the reason I really need. LOL
> Originally Posted by serpicojr
And let's keep this basic; no category theory, Guitarist.
Aw, shoot, then this is going to be hard.
So, notice first that if $V, W$ are vector spaces, and $f: V \to W$ is a linear transformation, then it is relatively easy to show, using only linear algebra, that we will have a corresponding transformation $f^*: W^* \to V^*$ on their dual spaces.
Now consider first the direct sum $\bigoplus_i V_i$, where $i \in I$. (Forgive me, I can't be arsed writing limits.)
Here we have 2 linear maps: the inclusion $\iota_j: V_j \to \bigoplus_i V_i$, and the projection $\pi_j: \bigoplus_i V_i \to V_j$.
Taking the dual of each, I have that $\iota_j^*: (\bigoplus_i V_i)^* \to V_j^*$, and $\pi_j^*: V_j^* \to (\bigoplus_i V_i)^*$.
Now consider the infinite direct product $\prod_i V_i^*$ with the usual projections $p_j: \prod_i V_i^* \to V_j^*$.
So by virtue of the above, I will have induced maps $\alpha: (\bigoplus_i V_i)^* \to \prod_i V_i^*$ such that
$p_j \circ \alpha = \iota_j^*$ for each $j$,
and $\beta: \prod_i V_i^* \to (\bigoplus_i V_i)^*$, such that
$\iota_j^* \circ \beta = p_j$ for each $j$.
It simply remains to note that, as projections and inclusions are unique, so are $\alpha$ and $\beta$, and so I claim this is the required isomorphism.
:Big blank stare: :?
Uh... does this have anything to do with the topic already being discussed?
Nothing, seeing as it is completely wrong.
> Originally Posted by mitchellmckain
Then I am a little confused. I thought that you and Faldo were unhappy with my assertion that, whereas the element $\varphi \in V^*$ acts on $v \in V$, such that $\varphi(v) \in \mathbb{R}$, we may equally have that $v$ acts on $\varphi$ such that $v(\varphi) = \varphi(v)$.
You phrased it slightly differently, but it amounts to the same thing. I like to argue from the natural isomorphism between the space of linear functionals V** acting on V* and the space of vectors V, but we can take it as axiomatic if you like.
If you are all cool with that, we can proceed and find the type (r, 0) tensors, and the mixed tensors.
Guitarist and Mitch... you have made an amazing effort.
I need some help. I am into quantum theory and non-Euclidean geometry. What I need is references:
Can you recommend for me the best references that introduce me to quantum theory and non-Euclidean geometry? What I most want is references on the essential mathematical skills for studying quantum theory.
So all I want is names...
For Faldo, I have this book, "General Relativity: A Geometric Approach" by Malcolm Ludvigsen; it bypasses to some extent the complications of tensors, manifolds, etc.
I have also tried to study tensors. Before studying tensors, I need to study topology, set theory, and the basics of non-Euclidean geometry; and as for Riemann surfaces... woooh, that lies at the end of the tunnel.
The only easy thing about Riemann, as far as I know, is the Riemann integral.
Thanks in advance.
> Originally Posted by Guitarist
Ah... gotcha. Duh! You are quite right. My whole attempt seems pretty circular to me now. (Hmmm... WAIT! But that suggests to me that the right approach might be one of consistency, i.e. that if we assume that we can do this, the result is only consistent or fully useful if we have this commutativity of a natural isomorphism.)
Anyway my question, about why naturality justifies this, can remain unanswered, if you like.
> Originally Posted by Guitarist
I am certainly OK to proceed, for it is not a matter of understanding at this point but only proof, and if we demand proof of everything in life then we will not get very far.
> Originally Posted by mitchellmckain
mitch: First this. I very much admire your reluctance to accept fiat assertions, and you are especially wise not to do so when they come from me.
But, in the present case, I believe you have to swallow one of the following two statements whole: the naturality of the isomorphism between $V$ and $V^{**}$ allows us to say that, since $V^{**}$ is real-valued on $V^*$, then we may as well say that $V$ is real-valued on $V^*$.
Or: elements in $V^*$ act from the left on elements in $V$, and elements in $V$ act from the right on elements in $V^*$.
Personally I regard this latter as something of a "shortcut", but what do I know?
Last night I actually scribbled down some sort of "proof" of the naturality of the $V^{**} \cong V$ isomorphism. I was slightly drunk at the time, but as I recall it was 8 to 10 lines long.
I suppose I could dig it out if you want, but, as I say, it still requires a big gulpandswallow.
Poor Faldo! He must think we have completely lost the plot. I shan't have a lot of time till Monday, but I'll cobble something together if I can
Let me see if I get it. (And I can do special symbols now as well, wow!)
Take two linear functionals $\varphi$ and $\psi$ in the dual space $V^*$. (Recall: a linear functional is a map from $V^*$ to $\mathbb{R}$.) Then define $\varphi \otimes \psi$ by putting $(\varphi \otimes \psi)(v) = \varphi(v)\,\psi(v)$. This is called a type (0,2) tensor.
Good, good. What's next?
Faldo: Your conclusion is correct (good!) but your line of reasoning isn't quite right. We could let it go, I suppose, but I tend to be a bit boring about these things. So let's see.
> Originally Posted by Faldo_Elrith
(Recall: a linear functional is a map from $V^*$ to $\mathbb{R}$.)
Maybe you misspoke, but a linear functional is an element in the space $V^*$: it is a map from $V$ to $\mathbb{R}$.
You might have been better to note that elements in the space $V \times V$ are of the form $(u, v)$, so you should perhaps write:
Then define $\varphi \otimes \psi: V \times V \to \mathbb{R}$ by $(\varphi \otimes \psi)(u, v) = \varphi(u)\,\psi(v)$ for all $u, v \in V$.
But, nice try, especially as this is new to you.
> Originally Posted by Faldo_Elrith
What's next?
What's next, when I have time, will be elements in the space $V \otimes V$. These will be our type (2, 0) tensors.
I made a mistake. A linear functional is a map from $V$ to $\mathbb{R}$, not from $V^*$ to $\mathbb{R}$. I also made a slight mistake in defining the bilinear map $\varphi \otimes \psi$.
Anyway, I'm ready for type (2, 0) tensors now.
OK. Recall first that we easily found some linear functional $\varphi$ that acts on some vector $v$ such that $\varphi(v)$ is a real number. We may take this as definition.
Recall also we showed that $v$ is an element in the vector space $V$, and $\varphi$ is an element in the dual vector space $V^*$, by virtue of which we wrote $\varphi: V \to \mathbb{R}$.
And now recall we defined the "tensor product" $\varphi \otimes \psi$ by $(\varphi \otimes \psi)(u, v) = \varphi(u)\,\psi(v)$ for all $u, v \in V$.
So, we can now take a different point of view: it is perfectly permissible to regard $v$ as a linear functional that acts on some $\varphi \in V^*$ (this is due, if you followed my discussion on this subject with mitch, to the natural isomorphism between $V$ and $V^{**}$).
Then we will have $v(\varphi) = \varphi(v)$, and also the analogous tensor product $(u \otimes v)(\varphi, \psi) = u(\varphi)\,v(\psi)$.
Why then, we will call the beast $u \otimes v$ a type (2, 0) tensor.
So, a couple of things should be reasonably obvious: just as I can define $\varphi \otimes \psi$ as an element in a type (0, 2) tensor space $V^* \otimes V^*$, so I can define the space of type (0, 3) tensors as $V^* \otimes V^* \otimes V^*$, up to type (0, n) tensor spaces.
Likewise for the type (n, 0) tensor spaces.
So, to bring all this into line with what you may have read about tensors, we have to do a bit of notational housekeeping.
Later
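In components, a type (2, 0) tensor pairs with two covectors exactly as described above. A numerical sketch (my own illustration, assuming numpy; the vectors and covectors are made up):

```python
import numpy as np

# Sketch: the type (2, 0) tensor u (x) v eats a pair of covectors
# (phi, psi) and returns phi(u) * psi(v).
u = np.array([1.0, 2.0])
v = np.array([3.0, 4.0])
phi = np.array([1.0, 0.0])   # covector components phi_i (made up)
psi = np.array([0.0, 1.0])   # covector components psi_j (made up)

T = np.outer(u, v)           # components T^{ij} = u^i v^j

# Pairing: phi_i psi_j T^{ij}, summed over both indices
pairing = np.einsum('i,j,ij->', phi, psi, T)
print(pairing == (phi @ u) * (psi @ v))   # True
```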
I think everyone has read this by now and apparently have no questions but are waiting for you to continue.
Yeah, sorry. I have been sooooo ill recently. Let's see.
We have covered type (n, 0) and type (0, n) tensors. Let me just quickly say that, if the constructions $V \otimes V$ and $V^* \otimes V^*$ both make sense, then we can agree that the construction
$V \otimes V^*$
makes sense too.
The elements $v \otimes \varphi$ are referred to as "mixed" tensors; in the present example they are of type (1, 1), but in general they will be of type (m, n) (i.e. m copies of $V$, n copies of $V^*$).
This bit of notation allows me to deduce that the space $V$ is the space of type (1, 0) tensors, called "vectors", and the space $V^*$ is the space of type (0, 1) tensors, called "covectors".
From the above it makes sense to define $\mathbb{R}$ as the space of type (0, 0) tensors, called "scalars".
Let's now see how this all fits together. For this, I am afraid, we shall need a bit of classical vector space theory. So let $v, w \in V$ be vectors, and let $\{e_i\}$ be a spanning set of basis vectors for $V$.
Then classical theory tells us that $v = \sum_i v^i e_i$ for scalars $v^i$.
First thing we're going to do is suppress the summation sign, and say that, whenever the same index appears in both upper and lower forms in the summand, we may assume summation over these indices is implied: thus $v = v^i e_i$, etc.
Let us now assume that we are consistently working on a fixed basis. Then the vectors differ only in their scalar coefficients on this basis, and we might as well write $v = v^i$, $w = w^i$, etc.
But, since the scalars are arbitrary, all this is telling me is that any vector can be expressed as the sum of the ith scalars on the ith basis vectors.
Accordingly, I may well write $v^i$ for a vector; similarly, $\varphi_i$ for our covectors.
Simply note that, by convention, tensors of any type are written in upper-case notation, and I will have that $T^i$ is an arbitrary type (1, 0) tensor, $T_i$ is an arbitrary type (0, 1) tensor, $T^i_j$ is an arbitrary type (1, 1) tensor, and so on.
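The index notation above, with its implied sums over repeated indices, maps directly onto numpy's einsum. A sketch (my own illustration; the components are made up):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])      # v^i, a type (1, 0) tensor
w = np.array([0.0, 1.0, -1.0])     # w_i, a type (0, 1) tensor
T = np.arange(9.0).reshape(3, 3)   # T^i_j, a type (1, 1) tensor

# w_i v^i : the implied sum over i gives a scalar
s = np.einsum('i,i->', w, v)
print(s == w @ v)                  # True

# T^i_j v^j : contracting a (1, 1) tensor with a vector gives a vector
r = np.einsum('ij,j->i', T, v)
print(np.allclose(r, T @ v))       # True
```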
Guitarist and Mitchell, thank you both very much for your help. I've slowly read through the posts and I think I have a fair idea of the different types of tensors that can be constructed. The notation is looking very much like what my book is using. Some more of this and I may even be able to figure out what this thing means:
$\Gamma^a_{bc} = \frac{1}{2} g^{ad}\left(\partial_b g_{dc} + \partial_c g_{bd} - \partial_d g_{bc}\right)$
It's called the Levi-Civita connection.
Faldo: Well, let's walk before we try to run, OK? Suffice it to say that connections are deep at the heart of manifold theory, and that, in general, they are not tensors. Leave it for now; we have a few more tottering attempts to walk yet.
We have enough information at our disposal to discuss the algebra of tensors, which, on a superficial level at least, looks pretty easy.
Don't be fooled though (and you won't, if you have been following closely), there is hidden depth in the apparent superficiality.
To make life a little easier, let's define the "rank" of a tensor as the total number of indices it carries. Here's the classification:
Rank 0:
tensors of type (0, 0): scalars. One writes simply $T$.
Rank 1:
tensors of type (1, 0): vectors $T^i$, and tensors of type (0, 1): covectors $T_i$.
Rank 2:
tensors of type (2, 0) $T^{ij}$, type (0, 2) $T_{ij}$, and type (1, 1) $T^i_j$.
......you get the picture.
Now tensors of any rank are elements in a vector space. Addition, subtraction and scalar multiplication are therefore defined, subject only to this constraint.
Addition of a rank n tensor to a rank m tensor is only defined when $m = n$, and when all tensors in the sum are of the same type (this is because we usually think of tensors as being matrices). The result of $T^i + S^i$ is a rank 1, type (1, 0) tensor, for example, whereas $T^i + S_j$ is undefined.
So, all this should be familiar enough. Let's now define "multiplication" of tensors. On the face of it, it looks easy enough: the product of any rank m tensor by any rank n tensor is a rank m + n tensor, regardless of type.
Let's see a couple of examples: $T^i S^j = P^{ij}$, and $T^i S_j = P^i_j$.
Easy enough, until we realize that arithmetic multiplication is not defined for vectors.
I'm out of puff for now, but I invite you to ponder this: what exactly is meant by the product of 2 or more tensors? (We have all the tools at our disposal from earlier posts. Think on it.)
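The rank-addition rule can at least be seen in components. A sketch (my own illustration, assuming numpy; this shows only the component arithmetic, not the deeper answer to the question just posed):

```python
import numpy as np

# Sketch: a rank 1 tensor times a rank 2 tensor gives a rank 3 tensor,
# built by multiplying components with no repeated indices.
u = np.array([1.0, 2.0])     # rank 1
S = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # rank 2

P = np.einsum('i,jk->ijk', u, S)     # rank 1 + 2 = 3

print(P.shape)                       # (2, 2, 2)
print(P[1, 0, 1] == u[1] * S[0, 1])  # True: P^{ijk} = u^i S^{jk}
```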
> Originally Posted by Guitarist
What exactly is meant by the product of 2 or more tensors?
Well, if a tensor can be considered a linear functional on a tensor space, then the product of two tensors will be a linear functional on the tensor product of the two tensor spaces that these two tensors are linear functionals on.
I am thinking that is not all, like there must be some properties that this linear functional satisfies as well.
After attempting to read through this thread I feel like more of an idiot than ever, lol.
> Originally Posted by Cold Fusion
After attempting to read through this thread I feel like more of an idiot than ever, lol.
Of course, everyone who doesn't speak Swahili is an idiot.
Wait a minute. I don't know a single word in Swahili. Guess I'm an idiot.
What is it about math and science that is so intimidating that people who do not learn the language think they must be an idiot if they don't know it?
All languages are hard to learn. It takes a great deal of work. So there is always a test of will. How much do you really want to learn it? Sure innate ability plays a role. So I am never going to be an Einstein. That's life. But as far as learning stuff is concerned most of it is still a question of how much is it worth to you?