This is an excellent idea.
Markus,
Is there any way to delete cranks' posts? That way the thread would be kept clean, devoid of their "contributions".

Again, a simple (simplistic) way to think about it is to refer to 3D space:
the product between a tensor (represented by a 3x3 matrix) and a vector is a vector
the further product between the vector resulting from the above operation and another vector is a scalar (think of the dot product of two vectors in 3D)
At each step, you can see how the rank is reduced:
from rank2 tensor (matrix) to rank1 tensor (vector)
from rank1 tensor (vector) to rank0 tensor (the scalar resulting from the dot product of the two vectors)
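The two contraction steps above can be sketched in plain Python; this is a toy illustration with made-up numbers, not anything from the thread itself:

```python
# Contracting a rank-2 tensor (3x3 matrix) with a vector lowers the rank
# by one each time: matrix -> vector -> scalar.

def matvec(T, v):
    """Contract a rank-2 tensor with a vector: the result is a rank-1 tensor."""
    return [sum(T[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(u, v):
    """Contract two rank-1 tensors: the result is a rank-0 tensor (a scalar)."""
    return sum(u[i] * v[i] for i in range(3))

T = [[1, 0, 0],
     [0, 2, 0],
     [0, 0, 3]]          # rank-2 tensor, represented as a 3x3 matrix
v = [1, 1, 1]
w = [1, 2, 3]

u = matvec(T, v)         # rank 2 -> rank 1: u is [1, 2, 3]
s = dot(u, w)            # rank 1 -> rank 0: s is 1*1 + 2*2 + 3*3 = 14
print(u, s)
```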
Sorry Markus, I cannot stop myself.
First this, though: the fact that I don't like your characterization of the trace of a tensor you can put down to my notational fastidiousness.
But this does not seem quite right. Note that the scalars you use are called the "components" of your tensor. Now in the (ghastly) notation you learn from your physics texts you will find any tensor written ONLY in terms of its components, so that the symbol refers to the components, the scalar coefficients, on a set of basis vectors. That is, the above should be written as
So your assertion seems not quite correct: if you insist on referring to tensors (in this case vectors) by their components, then your indices are inconsistent!
Ok, you may say notation is arbitrary, but we are talking about some sort of tangent space to a differentiable manifold, right? Now, it is important to reiterate that this is a local decomposition, so with a BIG wave of the hand, and totally without rigorous proof, I now state that the set of basis vectors can be written as $\left\{\frac{\partial}{\partial x^{\mu}}\right\}$.
I assume by your notation you intend the vector $V^{\mu}$, which is rather outdatedly called a "contravariant vector". But the basis for the space of all such vectors at each point is referred to as the directional derivatives at that point (differential operators, to be exact), i.e. the set $\{\partial_{\mu}\}$, whereby any vector may be written in your notation as $V = V^{\mu}\partial_{\mu}$, assuming implied summation.
The set $\{dx^{\mu}\}$ is in fact a basis for the "companion" space, the space of all linear functions on the tangent space, whose elements, cotangent vectors, are, by your component notation, written $\omega = \omega_{\mu}\,dx^{\mu}$.
Not at all, Guitarist. All constructive contributions are more than welcome! I am making all of this up as I go from my own understanding of things, so I do depend on other readers to point out errors to me.
Noted, agreed, and changed in my post.
Basically yes. I am trying to justify why, in the definition of the squared line element $ds^2 = g_{\mu\nu}\,dx^{\mu}dx^{\nu}$, the "dx" terms can be considered vectors. Many readers not well versed in maths simply will not be able to realize this; it was a major stumbling block for me when I first started to learn about GR. It is all good and well to think of tensors as "functions" which take vectors and covectors as input, but if it is not clear why the "dx" terms can be regarded as such, then the aforementioned expression for the line element makes little to no sense.
Ok, you may say notation is arbitrary, but we are talking about some sort of tangent space to a differentiable manifold, right?
I have amended the passage in my post like so; I hope this makes more sense:
How would you, from a mathematician's point of view, have explained that? The line element, once understood, is actually a very intuitive concept, but I find that trying to explain it to laypeople without getting lost in mathematical abstractions is really quite difficult. Personally, I simply think of the "dx" terms as infinitesimal vectors, and the whole expression then makes sense to me.
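The "dx terms as infinitesimal vectors" picture can be sketched numerically. This is a hypothetical illustration with invented numbers, using the flat Euclidean metric so the squared line element reduces to the familiar Pythagorean theorem:

```python
# Squared line element ds^2 = g_ij dx^i dx^j: the metric g is contracted
# with the displacement "vector" dx twice, yielding a scalar.

def line_element_sq(g, dx):
    """Contract the metric with dx in both slots."""
    n = len(dx)
    return sum(g[i][j] * dx[i] * dx[j] for i in range(n) for j in range(n))

g_flat = [[1, 0, 0],
          [0, 1, 0],
          [0, 0, 1]]      # Euclidean metric: ds^2 = dx^2 + dy^2 + dz^2

dx = [3.0, 4.0, 0.0]      # a small displacement (invented numbers)

print(line_element_sq(g_flat, dx))   # 25.0, i.e. 3^2 + 4^2
```

A curved-space metric would simply change the entries of `g`, while the contraction itself stays the same.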
P.S. I might need a little input on the next topic, connections and covariant derivative. It is very difficult to explain this to nonmathematicians, and I am not clear myself on some of the details.
Last edited by Markus Hanke; May 6th, 2013 at 02:36 AM.
I love relativity and find it the most interesting topic in physics.
I agree with it except for two interpretations, which I think are logically false.
The logically true facts are:
1. At a particular point in time, every particle shares the same "present" in space.
2. No particle can move into the past, even if it travels at the speed of light.
I believe both are logically true.
A simplistic way of understanding covariant derivatives is to remember what we know from calculus about total derivatives. The connection coefficients can be understood like the coefficients in the "chain rule" of partial derivatives. Again, this is a very simplistic way, introduced just to help understand the math.
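That simplistic picture can be made concrete with the standard component formula for the covariant derivative of a vector field, $(\nabla_b V)^a = \partial_b V^a + \Gamma^a_{\;bc} V^c$, where the extra $\Gamma$ terms play a role analogous to the extra terms the chain rule produces. The following is a schematic sketch; all the numerical values are invented purely for illustration:

```python
# Covariant derivative of a vector field, component form:
#   (nabla_b V)^a = d_b V^a + Gamma^a_{bc} V^c

def covariant_derivative(partial_V, Gamma, V):
    """partial_V[b][a] = d_b V^a; Gamma[a][b][c] = Gamma^a_{bc}; V[c] = V^c."""
    n = len(V)
    return [[partial_V[b][a] + sum(Gamma[a][b][c] * V[c] for c in range(n))
             for a in range(n)]
            for b in range(n)]

n = 2
partial_V = [[1.0, 0.0], [0.0, 1.0]]                  # d_b V^a (made up)
Gamma = [[[0.0] * n for _ in range(n)] for _ in range(n)]
Gamma[0][1][1] = 0.5                                  # one nonzero Gamma^0_{11}
V = [2.0, 3.0]

cov = covariant_derivative(partial_V, Gamma, V)
print(cov)   # [[1.0, 0.0], [1.5, 1.0]]: the connection term shifts one entry
```

With all Christoffel symbols zero (flat space, Cartesian coordinates), the result reduces to the ordinary partial derivatives, which is the whole point of the construction.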
No. The coordinates are the labels which we give the axes in our coordinate system; for example, in the case of Cartesian coordinates that would be {x,y,z} in 3 dimensions. The basis vectors would be vectors which lie along the coordinate axes; for simplicity we make them, say, one unit long. For the above-mentioned Cartesian system the basis vectors would then be $\hat{e}_x = (1,0,0)$, $\hat{e}_y = (0,1,0)$, $\hat{e}_z = (0,0,1)$.
Any vector can then be expressed as a sum of the basis vectors multiplied by a constant, in the manner described in the post.
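As a minimal sketch of that decomposition (plain Python, names invented here): the components of a vector are exactly the constants multiplying each basis vector.

```python
# Any 3D vector is a sum of the Cartesian unit basis vectors,
# each scaled by the corresponding component.

e_x, e_y, e_z = [1, 0, 0], [0, 1, 0], [0, 0, 1]   # unit basis vectors

def combine(a, b, c):
    """Return a*e_x + b*e_y + c*e_z, computed component-wise."""
    return [a * ex + b * ey + c * ez for ex, ey, ez in zip(e_x, e_y, e_z)]

print(combine(2, 3, 5))   # [2, 3, 5]: the coefficients ARE the components
```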
Oh dear! What confusion there is here!
I blame the evil notation that Markus has chosen to use: on the one hand the same symbol denotes a tensor, on the other hand it refers to the scalar coefficients on a set of basis vectors.
I do NOT blame Markus for this, it is standard in physics texts. The fact I don't like it is immaterial: the fact that it leads to confusion is undeniable based on the evidence.
I cannot see how to recover this thread from here; maybe I should start a companion thread on tensor analysis on manifolds. Dunno, as I have covered all that stuff on this site more than once.
Anyhoo, just for the record, xyzt: do not confuse the total derivative with the absolute differential, which, before Einstein came up with the principle of general covariance, was the name given to what is now usually called the covariant derivative.
The chain rule CAN be used to extract the covariant derivative, but it is not the only way. On the other hand, the chain rule is integral to the definition of the total derivative. I repeat: they are not the same.
That's a relief
I am not making all this up; this is really how it all appears in physics textbooks. But in fairness now, I would not be able to derive everything in a mathematically rigorous and notationally accurate way. Attempting to do so would lead to a textbook of several hundred pages, which is beyond the scope of this forum (and I don't have the maths knowledge anyway). The main idea here is merely to get across the basic ideas of the "building blocks" of GR. It is not an easy task to condense this down enough to be presented on a thread like this, so I am trying my best.
If anyone spots any really serious errors, both in notation and understanding, please do point them out to me. That is crucial!
Yes, I can. In fact it was on my agenda anyway, since that is what we derive the dynamics of collapsing stars from. I will dedicate a separate post to it, but I won't show the explicit derivation of the interior Schwarzschild metric from the field equations, since it is very lengthy and tedious; I'll just give the initial conditions and present the final solution.
Anything specific you are after?
My apologies to everyone who might be following this thread; I have been too busy lately to keep working on this. I still have much to present, so I will come back to it as time permits.