1. The forms I want to have fun with are called differential forms. As well as being pretty neat in their own right, my pals tell me they are finding increasing application in physics, especially field theory. I will try to show how they can be used to write Maxwell's field equations in a way that is even more succinct than the Heaviside vector versions we learn in school.

First a disclaimer; I intend to cover no more ground than we will strictly need to track down our quarry, so there will be a lot left unsaid. But of course, if anyone wants to chip in with extra detail, that would be really cool. Plus, as this subject is relatively new to me, there may be some absolute howlers here, so please correct me if I get it wrong.

So, let us start with an arbitrary n-dimensional vector space over the field ℝ which, for economy's sake, I will write as V. We now define another vector space which we will write as Λ^p(V), and call this the space of p-forms. For now, we will just take this notation to be a representation of the following instructions:

from the set of n basis vectors for V select p at a time and call this another vector space (with the usual axioms). A coupla things should be immediately obvious:

Λ^1(V) = V, since we are just selecting each of the basis vectors of V, one at a time. Also Λ^0(V) is
either meaningless (select no basis vectors!), or it is something like a set of scalars. In fact it is so defined: Λ^0(V) = ℝ.

A word of caution; here, as often elsewhere, no real distinction is made between scalars proper and scalar-valued functions. Thus we may have Λ^0(V) = ℝ, or equally the scalar-valued functions on V. Also obvious should be that we must insist that 0 ≤ p ≤ n.

An obvious question then is: what is the dimension of the space Λ^p(V)? Consider first Λ^2(V) with n = 3. Elementary combinatorics tells us that, when n = 3 and p = 2, there are 3 choices, and we will write the dimension of this space as C(3, 2) (say "from 3 choose 2").
Now quite obviously, this doesn't generalize, so let's write the general form as dim Λ^p(V) = C(n, p) = n!/(p!(n − p)!) - a quick calculation shows the equivalence when n = 3 and p = 2.
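
For anyone who likes to see the counting done explicitly, here is a small Python sketch (just my own illustration; the labels dx, dy, dz for the basis are an assumption of mine, standing in for any basis of a 3-dimensional V):

```python
from itertools import combinations
from math import comb, factorial

# Basis "1-forms" of a 3-dimensional space V
basis = ["dx", "dy", "dz"]

# A basis for Lambda^2(V): all ways of selecting 2 basis vectors at a time
two_form_basis = list(combinations(basis, 2))
print(two_form_basis)       # [('dx', 'dy'), ('dx', 'dz'), ('dy', 'dz')]
print(len(two_form_basis))  # 3, i.e. C(3, 2)

# The general form: dim Lambda^p(V) = C(n, p) = n! / (p! (n - p)!)
n, p = 3, 2
assert comb(n, p) == factorial(n) // (factorial(p) * factorial(n - p)) == 3
```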

So the elements in Λ^p(V) are called p-forms, hence the elements in Λ^1(V) are 1-forms. I haven't said why these are called differential forms, but this is already highly suggestive; I will leave it like that for now, and see if anyone takes the bait!

2.

3. Oh how I miss your teaching threads, Guitarist... I will be following this one closely (perhaps silently), despite my lack of skill in proofs and such. I hope it finds dedicated posters who know what they're doing and I hope it really goes somewhere for you.

4. Why, thank you Chemboy! Say what - if I promise not to set any exercises, would you be willing to break your vow of silence?

Anyway, let's see. Our p-forms come equipped with a unique operation called the exterior (or wedge) product which is defined as follows:

∧ : Λ^p(V) × Λ^q(V) → Λ^(p+q)(V), where I do not insist that p = q.

So that, if, say, α and β are 1-forms, then α ∧ β is a 2-form.

This operation on 1-forms has the property that α ∧ β = −β ∧ α (this is an edit); notice this mandates that α ∧ α = 0. This doesn't quite generalize:

First, from the foregoing, it should be obvious that any p-form can be "built" by the wedge product of p 1-forms. If p is even, one says that our p-form is of even degree, of odd degree otherwise. So we take as a definition that when α and β are of odd degree, then for those α ∧ β = −β ∧ α. If both are of even degree, then α ∧ β = β ∧ α, and if just one of these guys is of even degree, then this last equality holds.

So for any p-form α and q-form β one writes α ∧ β = (−1)^(pq) β ∧ α (this follows from the elementary arithmetic fact that the product of odd numbers is odd, the product of an even number with any other number, odd or even, is even, and so on).
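
These sign rules can be played with concretely. Here's a toy Python sketch of my own (not anything standard): a basis p-form is represented as a tuple of distinct indices, and wedging two of them concatenates the tuples and reduces to sorted order with the sign of the permutation:

```python
# Toy wedge product on basis monomials: (0, 1) stands for dx0 ^ dx1, etc.
def sign_and_sort(indices):
    """Return (sign, sorted tuple), or (0, ()) if an index repeats."""
    idx = list(indices)
    if len(set(idx)) != len(idx):
        return 0, ()
    sign = 1
    # bubble sort, flipping the sign on every transposition
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(a, b):
    return sign_and_sort(a + b)

# 1-forms anticommute: dx ^ dy = -(dy ^ dx), and dx ^ dx = 0
assert wedge((0,), (1,)) == (1, (0, 1))
assert wedge((1,), (0,)) == (-1, (0, 1))
assert wedge((0,), (0,)) == (0, ())

# A 1-form and a 2-form commute, since (-1)^(1*2) = +1
assert wedge((2,), (0, 1)) == wedge((0, 1), (2,)) == (1, (0, 1, 2))
```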

So finally we understand our notation; the existence of products implies the existence of powers, so Λ^p(V) is the p-th exterior power on V.

PS Um, I'm not sure that last bit was worded perfectly. I'm short of time just now - anyone care to check it out for me?

5. Yes, I'd love to participate in this thread. I'm working on improving my skills so I can get to problems and proofs eventually.

Originally Posted by Guitarist
This operation on 1-forms has the property that α ∧ β = β ∧ α; notice this mandates that α ∧ α = 0.
So is that to say that the exterior/wedge product is "anticommutative"? And might you mean α ∧ β = −β ∧ α?

I find the way the dimension is determined to be pretty cool... Each dimension is one possible combination of basis vectors from V, right?

Our metric tensor from manifolds is a 2-form, is it not? I'm thinking it says it in that thread, but I haven't looked at it in a long time...

I think I'm getting a little mixed up on this point... an element of Λ^1(V) is a 1-form... Let me know if I need to drop the differential topology, because I'm thinking of 1-forms as covectors... which seems to be contrary to the fact that they're formed from the basis vectors of V and not V*... Clarification...?

6. This is probably just nit-picking, but I wouldn't call the elements of the set Λ^p(V) p-forms. Really, given your definition here, Λ^p(V) is simply the p-th exterior power of the vector space V.

As a previous poster suggested, the word "forms" usually implies that we are dealing with the exterior powers of the dual of V. Actually, a differential form is a bit more complicated than that. To avoid mentioning manifolds, they can be thought of as smooth functions from V into Λ^p(V*), where V* is the dual of V.

7. Yes, yes and yes. You are both correct; a typo was spotted, well caught. Forms of odd degree anticommute, all others commute.

Second, strictly speaking, the objects I have been talking about so far should really be called p-vectors, though I will make them into differential forms directly, and I didn't want to confuse by switching terminology.

Third, I was leaving the "true" status of Λ^p(V) as a mini-punchline.

So, let's do it!

Let f be a function in 3 variables x, y, z on an open subset of ℝ³. Further let x, y, z be continuously parametrized by some t. Then the chain rule from calculus gives me df/dt = (∂f/∂x)(dx/dt) + (∂f/∂y)(dy/dt) + (∂f/∂z)(dz/dt).

Then, independent of t, with a flourishing hand-wave, I "cancel" the dt to get df = (∂f/∂x)dx + (∂f/∂y)dy + (∂f/∂z)dz, which our calculus text calls the differential of the function f. (Don't worry, it's just for illustration! Grand folk call it "an heuristic"). We will call this a differential 1-form, or simply a 1-form. This construction tells us at least three things.

Notice first that f is a 0-form, by definition. So it seems we have an operator d: Λ^0 → Λ^1.
This is called the exterior derivative and generalizes as d: Λ^p → Λ^(p+1).

Second, since df is a vector, and further since each ∂f/∂x^i is a scalar, we take the above to be of the general form df = Σ_i (∂f/∂x^i) dx^i, i.e. the dx^i are basis vectors.

We instantly recognize this as a covector, i.e. an element in the cotangent space on some manifold.

Well, well, this is nice. We know that ℝ³ is trivially a manifold, and that any (co)tangent space is either all of ℝ³ or some open subset thereof (if I may say so, an illustration of what we discussed in the "New to Math" thread).

And finally, we look again in our calculus text and find that the gradient of a function is given by ∇f = (∂f/∂x)i + (∂f/∂y)j + (∂f/∂z)k, where the i, j, k are unit (i.e. orthonormal basis) vectors, thus we conclude that the exterior derivative d, acting on a 0-form, is nothing more (or less) than the gradient operator.
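
A quick sympy sketch of this, if anyone wants to play along (the sample function f and the little helper d0 are my own illustrative choices, not anything standard):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

def d0(f):
    """Exterior derivative of a 0-form f, as {basis 1-form: coefficient}."""
    return {"dx": sp.diff(f, x), "dy": sp.diff(f, y), "dz": sp.diff(f, z)}

# An arbitrary sample function (0-form)
f = x**2 * y + sp.sin(z)
df = d0(f)
print(df)  # {'dx': 2*x*y, 'dy': x**2, 'dz': cos(z)}

# In Cartesian coordinates on R^3 these are exactly the components of grad f
assert df == {"dx": 2*x*y, "dy": x**2, "dz": sp.cos(z)}
```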

How very nice, don't you think? It gets better!

8. I would be averse to calling df the gradient of f (although I've seen this claim made before in a physics textbook).

The gradient of f is usually understood to be a vector field, whereas df is a 1-form field. The relationship between them is that

df(X) = g(grad f, X) for every vector field X,

where g is a metric on the tangent space.

So the gradient depends on the choice of a Riemannian geometry--it is not intrinsic to the manifold the way df is intrinsic.

Again, this is not just a matter of nit-picking--it's important to make these distinctions. For example, consider polar coordinates: the differential of f is

df = (∂f/∂r)dr + (∂f/∂θ)dθ.

However, the gradient of f, with respect to polar coordinates and the polar coordinate ON basis vectors, is grad f = (∂f/∂r)e_r + (1/r)(∂f/∂θ)e_θ, which is different.

9. Yeah, salsa, you're right, I expressed myself more forcefully than I should have.

Given Cartesian coordinates and the Euclidean metric on ℝ³ there is an isomorphism T_p(ℝ³) ≅ T*_p(ℝ³). Indeed, they are duals. It is common to "equate" isomorphic spaces, but since, in the present case, they are dual, this is perhaps unwise.

Thanks for pointing that out, and I apologize for any confusion.

Let's now look at the exterior derivative of a 1-form.

So, suppose, as before, that f, g, h are functions in 3 variables, again on ℝ³ with Cartesian coordinates and the Euclidean metric. Then

ω = f dx + g dy + h dz is a 1-form. Applying the exterior derivative one has that

dω = df ∧ dx + dg ∧ dy + dh ∧ dz, a 2-form, which from our previous result is just

dω = (∂f/∂x dx + ∂f/∂y dy + ∂f/∂z dz) ∧ dx + (∂g/∂x dx + ∂g/∂y dy + ∂g/∂z dz) ∧ dy + (∂h/∂x dx + ∂h/∂y dy + ∂h/∂z dz) ∧ dz, still a 2-form.

Gathering terms, and remembering anti-commutativity, we have that this is just

dω = (∂h/∂y − ∂g/∂z) dy ∧ dz + (∂f/∂z − ∂h/∂x) dz ∧ dx + (∂g/∂x − ∂f/∂y) dx ∧ dy.

This is the expression for the exterior derivative on a 1-form.
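
For anyone with the patience to work it through, here's a sympy sketch that does the bookkeeping (the helper d1 and the sample functions f, g, h are my own illustrative choices; the 2-form is stored as coefficients of dx_i ∧ dx_j with i < j):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
coords = (x, y, z)

def d1(w):
    """Exterior derivative of the 1-form w[0] dx + w[1] dy + w[2] dz,
    returned as {(i, j): coefficient of dx_i ^ dx_j} with i < j."""
    out = {(0, 1): 0, (0, 2): 0, (1, 2): 0}
    for j, coeff in w.items():       # the term coeff * dx_j
        for i in range(3):           # d(coeff) = sum_i (d coeff / dx_i) dx_i
            c = sp.diff(coeff, coords[i])
            if i < j:
                out[(i, j)] += c     # dx_i ^ dx_j already in order
            elif i > j:
                out[(j, i)] -= c     # dx_i ^ dx_j = -(dx_j ^ dx_i)
    return out

# Sample 1-form w = f dx + g dy + h dz
f, g, h = y*z, x**2 * z, sp.sin(x*y)
dw = d1({0: f, 1: g, 2: h})

# The classical curl of F = (f, g, h) has components
# (h_y - g_z, f_z - h_x, g_x - f_y); compare with dw's coefficients:
assert sp.simplify(dw[(1, 2)] - (sp.diff(h, y) - sp.diff(g, z))) == 0  # dy^dz
assert sp.simplify(dw[(0, 2)] + (sp.diff(f, z) - sp.diff(h, x))) == 0  # dz^dx = -(dx^dz)
assert sp.simplify(dw[(0, 1)] - (sp.diff(g, x) - sp.diff(f, y))) == 0  # dx^dy
```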

PS by edit: I am not totally satisfied with the above derivation, though I know the conclusion should be correct. I doubt if anyone will have the patience to work it through, but just maybe........

Anyway, I have to jet off to Paris tomorrow morning, so I'll see you all in a few days

10. I'm afraid you lost me at ω = f dx + g dy + h dz... why do we have three functions?

11. Well, the trite answer I suppose is that is what I chose. Maybe this might help......

there is a one-to-one correspondence between 1-forms and fields; look in any intermediate text and we will find a field in ℝ³ (Euclidean metric, Cartesian coordinates) defined by F = f i + g j + h k, where the f, g, h are scalar-valued functions and the i, j, k are unit vectors.

Thus the correspondence F = f i + g j + h k ↔ ω = f dx + g dy + h dz.

Now the curl of a vector field (roughly, how it swirls round a point) is a mapping vector field → vector field, so I started with the analogue of a vector field in Λ^1, which I called ω = f dx + g dy + h dz, where again the f, g, h are scalar-valued and the dx, dy, dz are basis 1-forms.

I then tried to convince you that there is an alternative way of writing the curl in terms of differential forms.

12. Well, ready or not, here he comes.........

Recall we had the space Λ^p(V) of p-vectors, which we informally defined as "given n basis vectors choose p". Let's call this "choice space".

Now let's call Λ^(n−p)(V) "remainder space", so that, by way of illustration, if n = 3 and p = 1, then my choice of the 1-vector dx leaves dy ∧ dz in remainder space.

The Hodge (or star) operator is a mapping from choice space to remainder space: * : Λ^p(V) → Λ^(n−p)(V), with *dx = dy ∧ dz as in my example. One calls these Hodge duals.

There are some really neat tricks we can do with this when put together with the exterior derivative d.

13. ok, I get it now. Think I'm set to continue...

14. Re-reading this thread I realize there is something I have been a little vague about, and something I have omitted. Remind me to come back to them, somebody.

But for now, let's see the magic that arises when we use the exterior derivative and the Hodge operator together; we're going to use the Laplacian as our example.

The Laplacian is a second-order differential operator that maps scalar fields onto scalar fields and vector fields onto vector fields. It is defined as the divergence of the gradient of a field - div(grad f). That is ∇²f = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z².

I don't want to say much more about what it actually does, but suffice it to say that it crops up time and again in the physical sciences; one of the most familiar appearances, perhaps, is in Schroedinger's wave equation.

In terms of notation, the modern trend is to use Δ; some people use ∇·∇, but the traditional notation is ∇², which I use, though care is needed in interpreting the exponent.

So - we know about the exterior derivative, and the Hodge operator.

Working in ℝ³ (Euclidean metric), if f is a 0-form, then df = (∂f/∂x)dx + (∂f/∂y)dy + (∂f/∂z)dz, a 1-form.

Then *df = (∂f/∂x) dy ∧ dz + (∂f/∂y) dz ∧ dx + (∂f/∂z) dx ∧ dy, a (3 − 1 = 2)-form. (Notice that, in each term, one way or another, we have the full complement of basis 1-forms.)

Then applying "d" again (which, as we know, amounts to taking d of each coefficient and wedging) I find

d*df = (∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z²) dx ∧ dy ∧ dz, a (2 + 1 = 3)-form.

Since we are in 3-space, a final application of Hodge brings us back to an (n − p = 3 − 3 = 0)-form, whereby *d*df = ∂²f/∂x² + ∂²f/∂y² + ∂²f/∂z².

This is the Laplacian ∇²f.

Now how sweet is that!! Don't you just love it?
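
A sympy sketch checking the whole *d*d computation, step by step as above (the sample function is an arbitrary choice of mine):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
f = x**3 * y + sp.exp(y * z)   # an arbitrary sample 0-form

# Step 1: df = f_x dx + f_y dy + f_z dz (a 1-form)
fx, fy, fz = (sp.diff(f, v) for v in (x, y, z))

# Step 2: *df = f_x dy^dz + f_y dz^dx + f_z dx^dy (a 2-form)
# Step 3: d(*df); only the "missing" basis 1-form in each term survives
#         the wedge, giving (f_xx + f_yy + f_zz) dx^dy^dz (a 3-form)
coeff = sp.diff(fx, x) + sp.diff(fy, y) + sp.diff(fz, z)

# Step 4: * takes this 3-form back to a 0-form, namely its coefficient,
#         which should be the textbook Laplacian of f
laplacian = sum(sp.diff(f, v, 2) for v in (x, y, z))
assert sp.simplify(coeff - laplacian) == 0
```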

15. What makes this whole approach useful is the realization that all these operations can be defined in arbitrary coordinates. So to add to what you're saying, do the other people in the audience understand why the differential d is the same in all coordinate systems?

The other thing to emphasize is that the Hodge-* operator can be defined in a coordinate-invariant way provided that one defines a metric on the tangent bundle.

16. Originally Posted by salsaonline
What makes this whole approach useful is the realization that all these operations can be defined in arbitrary coordinates. So to add to what you're saying, do the other people in the audience understand why the differential d is the same in all coordinate systems?
Even assuming I have "an audience", which looks unlikely, probably not, as I failed to axiomatize the exterior derivative, which I now do:

For any p-form α and any q-form β, where I do not insist that p = q:

a) d(α + β) = dα + dβ

b) d(α ∧ β) = dα ∧ β + (−1)^p α ∧ dβ

c) for all functions f and any vector field X, df(X) = X(f)

d) for any p-form α, d(dα) = 0

For illustration, let's look briefly at (c). First note that, if X_p is a vector at the point p in ℝ³, it is also an element in some vector field X on ℝ³.

Second note that vectors in ℝ³ are defined by X_p = Σ_i α^i (∂/∂x^i)|_p, where the α^i are scalar and the x^i are coordinate functions at p.

Now since df(X_p) = X_p(f), then df(X_p) = Σ_i α^i (∂f/∂x^i)|_p.

Since this applies to any vector at any point in ℝ³, we will have that, for the field X, df(X) = X(f).

In other words, the exterior derivative operator is, by our simple axioms, independent of any choice of coordinate system.

Axiom (d), which is usually referred to as the Lemma of Poincaré, has the nice consequence that curl(grad) = div(curl) = 0.
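
A quick sympy check of the curl(grad) = 0 half of that consequence, for a completely general function (a sketch of my own; the div(curl) = 0 half goes through the same way):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
f = sp.Function("f")(x, y, z)   # a completely general 0-form

# The components of df, i.e. of grad f in Cartesian coordinates
fx, fy, fz = (sp.diff(f, v) for v in (x, y, z))

# The components of d(df), i.e. of curl(grad f); axiom (d) says they vanish,
# which here comes down to the equality of mixed partial derivatives
curl_grad = (sp.diff(fz, y) - sp.diff(fy, z),
             sp.diff(fx, z) - sp.diff(fz, x),
             sp.diff(fy, x) - sp.diff(fx, y))
assert all(sp.simplify(c) == 0 for c in curl_grad)
```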

Anyway, I'm losing heart here, ho hum.....

17. Hey, sorry I haven't been with you for a few days... I wish I had taken a closer look at your last post earlier, your derivation of the Laplacian is really cool.

Unfortunately I'm not really ok with axioms b-d... :-\

Seems to me that b. might be d(α ∧ β) = dα ∧ β + α ∧ dβ... possible typo or am I just not getting it?

For c, df is just the first exterior derivative acting on the vector field, right? But I'm not sure of what X(f) really represents...

As for d... I can certainly just accept the fact, and maybe I should, but I don't understand why it's true.

Sorry to be a pain. But I won't lead you on and pretend I understand it, and I'm sure that's preferred.

18. Originally Posted by Chemboy
Hey, sorry I haven't been with you for a few days... I wish I had taken a closer look at your last post earlier, your derivation of the Laplacian is really cool.
Yes it is, though I claim no credit for it.

And.... knowing you are a chemist, and while our dinner is cooking, I dusted off the dinosaur excrement from my college notes, where, to my slight surprise, I find I once knew how to derive the Schroedinger equation. Maybe I will offer it in "your" sub......

Seems to me that b. might be d(α ∧ β) = dα ∧ β + α ∧ dβ... possible typo or am I just not getting it?
Good, well caught - it was a typo, but not of the sort you suspected. I meant d(α ∧ β) = dα ∧ β + (−1)^p α ∧ dβ. This is called a derivation.

Look - I know you are busy doing real analysis, a subject for which my distaste is matched only by my lack of proficiency in it. So, just because we are friends, please don't feel the need to keep up with my rubbish threads.

19. Yeah, well the analysis is very slow-going and rather discouraging and frustrating. I'm only doing it because I know I should get it down if I want to go into other, much more interesting things (I definitely share your distaste for it). This is much more my kind of thing.

It would be interesting to see the derivation of the Schroedinger equation if you'd like to offer us that.
