Lie groups describe continuous symmetries.

First, note well the word "describe"; now let's look at a simple example.

It is intuitively obvious that the 2-sphere is symmetrical under any and all rotations. What is slightly less obvious is that the mapping that takes any point $p$ on the 2-sphere to its rotated image $p'$ on the 2-sphere is continuous. For now let's assume this is the case, and ask if there exists a Lie group that describes this symmetry.
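To make the claim concrete, here is a minimal numerical sketch in plain Python (the helper names `rotate_z` and `norm` are my own, not anything from the text): a rotation about the $z$-axis carries a point of the unit 2-sphere to another point of the unit 2-sphere.

```python
import math

def rotate_z(p, theta):
    """Rotate a point p = (x, y, z) about the z-axis by angle theta."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

def norm(p):
    """Euclidean length of p."""
    return math.sqrt(sum(c * c for c in p))

p = (0.6, 0.0, 0.8)        # a point with |p| = 1, i.e. on the unit 2-sphere
q = rotate_z(p, 1.234)
print(abs(norm(q) - 1.0) < 1e-12)   # the image still lies on the sphere
```

The map $p \mapsto p'$ is built from sines and cosines of the rotation angle, which is where the continuity comes from.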

Simple answer - yes! But we have some way to go before this will become clear. But note this well: if we think of our 2-sphere as being some kind of "physical object" (whatever that means), then this is a special and not very interesting case. Mathematicians and more especially physicists are interested in the symmetries of equations and theories, and I intend to give examples in due course.

But for now we have some work to do. Suppose first that $V, W$ are vector spaces, and define the set of all linear transformations $V \to W$ by $L(V, W)$. Then the set of all transformations $V \to V$ becomes $L(V, V)$, and no serious ambiguity results if I write this as $L(V)$.

Now suppose I insist that every element (a linear transformation, recall) in $L(V)$ has an inverse. By this stipulation the identity transformation comes free, and we recover the structure of a GROUP.

To celebrate, let's write this as $GL(V)$ and call it the General Linear Group. Notice that thus far I have not specified the dimension of $V$ nor the field over which $V$ is defined. We'll do that in a mo.....
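As a down-to-earth sketch (assuming, for concreteness, $2 \times 2$ real matrices, which anticipates the matrix picture developed next), the invertible transformations really do form a group: products of invertible matrices are invertible, the identity is there, and each element has an inverse. The helpers `matmul`, `det2`, `inv2` are hypothetical names of my own.

```python
def matmul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det2(m):
    """Determinant of a 2x2 matrix; nonzero exactly when m is invertible."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    """Inverse of an invertible 2x2 matrix."""
    d = det2(m)
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

a = [[1.0, 2.0], [3.0, 4.0]]    # det = -2, so a is invertible
identity = [[1.0, 0.0], [0.0, 1.0]]  # the identity comes free
print(matmul(a, inv2(a)))       # recovers the identity matrix
```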

But first I need to show that this is a MATRIX group. I confess I had to think long and hard how best to explain this, so if what follows doesn't quite float your goat, I apologize.

Well the easy way out is to wave my hands and say it is an elementary fact of operator theory (oops - I use the terms operator and transformation interchangeably - I am allowed!) that any linear operator can be written as a matrix. But that is a cop-out.

So first some boring stuff. Almost any vector $v$ can be written - expanded - as $v = \sum_j \alpha^j e_j$, where the $\alpha^j$ are scalars chosen from whatever field we are working over, and the set $\{e_j\}$ are called basis vectors. I say "almost" because, in order for a vector $e_k$ to be an element in the set $\{e_j\}$, we require that this sum has trivial multiplicative coefficients - that is, $\alpha^j$ is one or zero according to whether $j = k$ or not. One calls this "linear independence of the basis vectors".
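The boring stuff can be checked numerically. A minimal sketch in $\mathbb{R}^3$ with the standard basis (all names here are my own illustrative choices): the expansion coefficients of an ordinary vector are just its components, while a basis vector expands with only trivial (zero-or-one) coefficients.

```python
# Standard basis of R^3
basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

v = [2.0, -1.0, 5.0]
alphas = v   # in the standard basis the coefficients coincide with components

# Rebuild v as the sum  v = sum_j alpha^j e_j
recon = [sum(alphas[j] * basis[j][i] for j in range(3)) for i in range(3)]
print(recon == v)        # the expansion recovers v

# Expanding the basis vector e_2 itself gives the trivial coefficients (0, 1, 0)
print(basis[1])
```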

First let us assume that our vector space $V$ is defined over $\mathbb{R}$, the real numbers. Second let us assume this vector space has dimension $n$.

So suppose the linear transformation $A \in GL(V)$, and further suppose we allow this to act on the basis vectors (as we must): $Ae_j$.

Notice first the trivial fact that, for no very good reason, conventionally one does not parenthesize the argument of an operator - one writes $Ae_j$ rather than $A(e_j)$. More importantly, notice that if the $e_j$ are basis vectors then the $Ae_j$ cannot possibly be.

So we have that, for the non-basis vectors $Ae_j$, there must be non-trivial real coefficients on the basis set - i.e. coefficients in $\mathbb{R}$ in all slots - so the following applies:

$$Ae_j = \alpha^1 e_1 + \alpha^2 e_2 + \cdots + \alpha^n e_n$$

If I say that the superscript index $j$ denotes the vector $Ae_j$ that I wish to expand, and the subscript index $k$ denotes the element in the basis upon which it is being expanded, I am free to write

$$Ae_j = \sum_{k=1}^{n} \alpha^j{}_k \, e_k$$

and say that the superscript index $j$, as it roams over $1, 2, \ldots, n$, references the columns in some real-valued $n \times n$ matrix, and the subscript index $k$ likewise references the rows in the same matrix.

That is

$$A = \begin{pmatrix} \alpha^1{}_1 & \alpha^2{}_1 & \cdots & \alpha^n{}_1 \\ \alpha^1{}_2 & \alpha^2{}_2 & \cdots & \alpha^n{}_2 \\ \vdots & \vdots & \ddots & \vdots \\ \alpha^1{}_n & \alpha^2{}_n & \cdots & \alpha^n{}_n \end{pmatrix}$$
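This recipe - read off the matrix of a linear operator from its action on the basis vectors, one column per image $Ae_j$ - can be sketched in plain Python. The operator `T` below is a hypothetical example of my own choosing, not anything from the text.

```python
def matrix_of(T, n):
    """Build the matrix of a linear operator T on R^n by applying T to the
    standard basis vectors e_1, ..., e_n; the image T(e_j) supplies column j
    (the matrix is returned as a list of columns)."""
    cols = []
    for j in range(n):
        e_j = [1.0 if i == j else 0.0 for i in range(n)]
        cols.append(T(e_j))
    return cols

# A hypothetical operator on R^2: (x, y) -> (x + 2y, 3x + 4y)
def T(v):
    x, y = v
    return [x + 2 * y, 3 * x + 4 * y]

print(matrix_of(T, 2))   # columns [T(e_1), T(e_2)] = [[1.0, 3.0], [2.0, 4.0]]
```

This is exactly the indexing convention above: the column index says which basis vector was fed in, and the row index says which basis vector the coefficient sits on.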

You think that is all? Dream on.......