# Thread: Problem writing a formula with matrices

1. Hi all;

I have a strange problem: I know exactly what I want multiplied and added, but I'm not sure how to write it as a formula.

I have a vector transformation which is expressed as the sum of two products:

a) a product of the original vector (let's call it X) by a scalar, say s;
b) a product of the original vector by a square matrix, say M.

So the whole transformation is Y = sX + MX. The scalar is itself a scalar product of two other things, one of which is a row vector, the other a column vector: s = RC. Now I would like to express the entire transformation as a matrix product, Y = AX, and I am in trouble writing the formula for A, because if I write A = s + M then I am adding a scalar to a matrix, which seems incorrect to me.

On the other hand, if I write A = sI + M, where I is the identity matrix (ones on the diagonal, zeros elsewhere) of suitable size, it looks weird and redundant, if not silly, given that (sI)X = s(IX) = sX anyway.
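A minimal NumPy sketch (concrete values made up) of why the bare sum of a scalar and a matrix is not what is wanted — NumPy will happily compute s + M, but it adds s to every entry, which is a different matrix from sI + M:

```python
import numpy as np

n = 3
M = np.arange(9.0).reshape(n, n)     # some square matrix (values made up)
X = np.array([[1.0], [2.0], [3.0]])  # the original column vector
s = 5.0                              # the scalar

# "s + M" does NOT mean sI + M: NumPy adds s to EVERY entry of M,
# so the off-diagonal entries change too
wrong = s + M
right = s * np.eye(n) + M
assert not np.allclose(wrong, right)

# only the sI + M version reproduces sX + MX
assert np.allclose(right @ X, s * X + M @ X)
assert not np.allclose(wrong @ X, s * X + M @ X)
```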

Any ideas how I can write the formula for A?  2.

3. Hi Leszek. I am sitting in a campsite in France, with no access to texts, so take all that follows with a grain of salt. Or wait for someone else to correct me!

First, you are overloading yourself and us with unnecessary information. For example, it doesn't matter at all whether your scalar is an inner product - all that matters is that, if your vector X lives in a vector space V over the field F, then, by the definition of a vector space, sX is in V for every scalar s in F.

Now you need to specify a codomain for your operator T. Let's say that T: V → V.

Now you have 2 equivalent choices (I think).

First, take the Cartesian product V × F, whose elements are ordered pairs of the form (X, s). (Strictly and pedantically speaking, I think this should be thought of as an operation on the underlying sets of these spaces.)

Then the direct product V ⊕ F is defined componentwise by (X, s) + (X', s') = (X + X', s + s') whenever X, X' are in V and s, s' are in F.

A possible, more fancy way to do roughly the same thing is to apply a "forgetful functor" to the field F, discarding its multiplicative operation, and then note that F is, essentially, a vector space over itself.

Now, since the operator M is an element of the vector space of linear operators on V (it is), you may form the same construction for it and the scalar, i.e. the pair (M, s) in L(V) ⊕ F.

I hope this is correct, and if so that it helps. No doubt I will be corrected if not.

As you can see, I have too much time on my hands!  4. Guitarist, I am impressed by your prowess in abstract algebra, but can someone please suggest a simpler solution?

I just want to multiply a vector by a scalar (itself a product of two vectors) and by a square matrix, then take the sum of the two products. Factoring the vector out of the sum and representing the whole business as one product looks as simple as the aX + bX = (a + b)X we all did in high school, at the age of about 15, if not much earlier. And yet, perplexingly, I cannot seem to correctly add the two multipliers, because they are of a different nature, even though each of them transforms the source vector into another vector of the same dimensionality.

If the scalar were just a scalar s, I might replace it with sI, which is a square matrix and adds neatly with M. But here is where the inner-product stuff kicks in and beats me: when I substitute RC for s, the term for the coefficient becomes ambiguous, and I cannot make sure whether it represents a matrix, (RC)I, or a scalar, RC. Associativity seems to fail in a bizarre way: depending on the order in which I perform the multiplications, I either get a scalar or a diagonal-matrix version of the same coefficient.
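The ambiguity can be made concrete with a NumPy sketch (values made up): grouped one way, the coefficient is a genuine scalar; chained on as a strict matrix product, the 1×1 result refuses to multiply further:

```python
import numpy as np

n = 3
R = np.array([[1.0, 2.0, 3.0]])       # 1xn row vector (values made up)
C = np.array([[4.0], [5.0], [6.0]])   # nx1 column vector
X = np.array([[7.0], [8.0], [9.0]])   # the original vector

s = (R @ C).item()     # grouping (RC) first: a genuine scalar, here 32.0
sI = s * np.eye(n)     # ...and from it the diagonal-matrix coefficient sI

# but chaining the 1x1 matrix product straight on fails:
# a 1x1 matrix cannot left-multiply an nxn one
failed = False
try:
    (R @ C) @ np.eye(n)
except ValueError:
    failed = True
assert failed

# once the scalar is extracted, both readings of the coefficient agree
assert np.allclose(sI @ X, s * X)
```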

What am I doing wrong?  5. OK, I'll put it differently:

in multiplying (RC)X, I mean: the row vector R multiplied by the column vector C yields a scalar s, which is then multiplied by the column vector X. This means I have two different kinds of multiplication:

- the inner product of two vectors, yielding a scalar;

- the product of a scalar and a vector, yielding a vector of the same size as the input vector.

All of this is just part of a larger formula, which also includes the multiplication of a matrix by a vector.

Now my question:

Is there a more or less generally accepted and coherent notation system that would allow me to differentiate all these different "multiplications" from each other? I know the inner product is often represented by a centered dot, but what about the product of a scalar and a vector? As for multiplying a vector by a matrix, I somehow feel it should be expressed by simply writing the symbols of the two multiplicands next to each other, as in MX, but I'm open to other suggestions.

If I keep representing all of these multiplications by juxtaposing the symbols, as in RCX, with no dots or asterisks or "x"-shaped multiplication signs between them, I get the kind of confusion I described in my previous post.
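For what it's worth, array languages sidestep exactly this confusion by giving each multiplication its own spelling; a minimal NumPy sketch (values made up):

```python
import numpy as np

r = np.array([1.0, 2.0, 3.0])  # a vector (values made up)
c = np.array([4.0, 5.0, 6.0])  # another vector
M = 2.0 * np.eye(3)            # a square matrix

s = np.dot(r, c)  # inner product of two vectors -> a scalar (here 32.0)
v = s * c         # product of a scalar and a vector -> a vector, same size as c
w = M @ c         # product of a matrix and a vector -> a vector
```

Scilab draws a similar distinction between the matrix product `*` and the elementwise product `.*`.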

Yes, I know the "x"-shaped sign usually means the cross product, so it wouldn't be a good idea to use it for anything else.  6. Originally Posted by Leszek Luchowski
Hi all;

I have a strange problem: I know exactly what I want multiplied and added, but I'm not sure how to write it as a formula.

I have a vector transformation which is expressed as the sum of two products:

a) a product of the original vector (let's call it X) by a scalar, say s;
b) a product of the original vector by a square matrix, say M.

So the whole transformation is Y = sX + MX. The scalar is itself a scalar product of two other things, one of which is a row vector, the other a column vector: s = RC. Now I would like to express the entire transformation as a matrix product, Y = AX, and I am in trouble writing the formula for A, because if I write A = s + M then I am adding a scalar to a matrix, which seems incorrect to me.

On the other hand, if I write A = sI + M, where I is the identity matrix (ones on the diagonal, zeros elsewhere) of suitable size, it looks weird and redundant, if not silly, given that (sI)X = s(IX) = sX anyway.

Any ideas how I can write the formula for A?

sX + MX can be written as (sI + M)X, where I is the identity matrix.
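A quick numerical check of this identity (NumPy sketch with random made-up data):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
R = rng.random((1, n))   # row vector
C = rng.random((n, 1))   # column vector
M = rng.random((n, n))   # the square matrix
X = rng.random((n, 1))   # the original vector

s = (R @ C).item()       # the scalar s = RC
A = s * np.eye(n) + M    # A = sI + M

# AX reproduces sX + MX exactly
assert np.allclose(A @ X, s * X + M @ X)
```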

Your problem is really as simple as noting that multiplying a vector by a scalar s is the same as multiplying it by the matrix sI. Note that the fact that the scalar arises from a product of a row vector and a column vector does not change anything here.  7. Originally Posted by Leszek Luchowski

If I keep representing all of these multiplications by juxtaposing the symbols, as in RCX, with no dots or asterisks or "x"-shaped multiplication signs between them, I get the kind of confusion I described in my previous post.

Yes, I know the "x"-shaped sign usually means the cross product, so it wouldn't be a good idea to use it for anything else.
Juxtaposing the symbols is the standard notation for all of the various multiplications with which you are dealing. You don't really need anything else, and you are correct in assuming that using the "x" would create confusion in the minds of those who are acquainted with the vector cross product in dimension 3.

The usual resolution is that one simply has to keep track of the nature of the terms in the product and from that realize what is meant -- in matrix notation it boils down to merely verifying that the dimensions of the matrices are such as to make the matrix products meaningful. Note that a dot product is just the matrix product of a row vector with a column vector and hence fits into this scheme.  8. Originally Posted by DrRocket
The usual resolution is that one simply has to keep track of the nature of the terms in the product and from that realize what is meant -- in matrix notation it boils down to merely verifying that the dimensions of the matrices are such as to make the matrix products meaningful. Note that a dot product is just the matrix product of a row vector with a column vector and hence fits into this scheme.
That is the way I have been thinking about it and experimenting under Scilab (a free counterpart of Matlab), and it works beautifully.

But when I try to write it down in a formal way, I am worried that my "multiplication" (in fact, a whole bunch of different multiplications) is not associative: (RC)X means: obtain a scalar first (the dot product RC), then multiply it by the column vector X,

which is what I mean, but R(CX) is a no-no, as two column vectors cannot be multiplied.
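In shape terms (NumPy sketch, values made up), the grouping (RC)X is well defined while R(CX) is rejected outright:

```python
import numpy as np

R = np.array([[1.0, 2.0, 3.0]])      # row vector (values made up)
C = np.array([[4.0], [5.0], [6.0]])  # column vector
X = np.array([[7.0], [8.0], [9.0]])  # column vector

# (RC)X: dot product first, then scalar times vector -- well defined
Y = (R @ C).item() * X

# R(CX): a column vector cannot be matrix-multiplied by a column vector
ok = False
try:
    C @ X
except ValueError:
    ok = True
assert ok
```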

I think I will just write (RC)I explicitly.  9. Originally Posted by Leszek Luchowski Originally Posted by DrRocket
The usual resolution is that one simply has to keep track of the nature of the terms in the product and from that realize what is meant -- in matrix notation it boils down to merely verifying that the dimensions of the matrices are such as to make the matrix products meaningful. Note that a dot product is just the matrix product of a row vector with a column vector and hence fits into this scheme.
That is the way I have been thinking about it and experimenting under Scilab (a free counterpart of Matlab), and it works beautifully.

But when I try to write it down in a formal way, I am worried that my "multiplication" (in fact, a whole bunch of different multiplications) is not associative: (RC)X means: obtain a scalar first (the dot product RC), then multiply it by the column vector X,

which is what I mean, but R(CX) is a no-no, as two column vectors cannot be multiplied.

I think I will just write (RC)I explicitly.
That is precisely what you need to do. As you noted, matrix multiplication is not automatically associative in cases like this, where the row and column dimensions cause some of the associations not to make sense.