# Thread: Coefficient matrices over a set or field. :?

1. hi there,

I have been looking at matrices, and so far I understand that if we have a system of linear equations, then a matrix is used to present the coefficients of the system in a 2-dimensional grid format.

I also understand that the 'solution' to a system of linear equations will be in the form of a coordinate (or set of coordinates) at the point(s) where the lines/planes cross each other... basically solving simultaneous equations. (Though the method of solving a system using matrices comes further on in the text I'm reading; Gaussian elimination, I think it's called?)

What I don't understand is when it says something like:

"Solve the system of linear equations over Z_p"

I have briefly looked at "set theory" and I believe that Z_p is supposed to represent a certain "set" of numbers. I also gather that in this case the coefficients of the system must be elements (numbers) within this set and nothing else.

So basically the above quotation can be thought of as: "find the coordinate(s) of any points of intersection of the lines and planes involved when the coefficients of their equations are numbers belonging to the set Z_p". Would that be accurate to say?

2 more questions I have following that are:

1.) In terms of application, why exactly would we need to say that the coefficients of the system must belong to a particular set and nothing else? I mean why can't we just use any number?

2.) Amongst the jargon of all the texts I've read on this subject, there is always the mention of a 'field' as well. This term seems to be used interchangeably (or very closely linked) with the term 'set', so what exactly is the difference (if any) between the two?

I hope someone here can answer these questions,
thanks,
bit4bit

2.

3. What book are you using? If it's an introductory text (on the level of Hoffman and Kunze), I imagine it should have an appendix containing the definitions of "Z_p" and "field", if not a chapter discussing such things. If it doesn't contain them, it probably assumes you have more of a background in abstract algebra, and you should think about switching to another book (e.g., Hoffman and Kunze).

So let's see what I can do here... A field is a set F together with two binary operations on it, which we call addition ("+") and multiplication ("*"). By "binary operation", I mean that I can take two elements a and b of F and produce a third element of F by feeding them to these operations, and we denote the resulting elements as "a+b" and "a*b". Now we want these operations to behave like we expect them to, so we posit that...

1. There exists an element 0 such that, for all a in F, a+0 = 0+a = a. 0 is the "additive identity".
2. For each a in F, there exists an element b such that a+b = b+a = 0. b is the "additive inverse" of a, and we denote it as b = -a.
3. Addition is associative: (a+b)+c = a+(b+c) for all a, b, and c in F.
4. Addition is commutative: a+b = b+a for all a and b in F.
5. There exists an element 1 (different from 0) such that, for all a in F, a*1 = 1*a = a. 1 is the "multiplicative identity".
6. For each a in F not equal to 0, there is an element b such that a*b = b*a = 1. b is the "multiplicative inverse" of a, and we denote it as b = a^(-1).
7. Multiplication is associative: (a*b)*c = a*(b*c).
8. Multiplication is commutative: a*b = b*a.
9. Multiplication distributes over addition: a*(b+c) = (a*b) + (a*c).

You're already familiar with at least two fields (the rational numbers and the real numbers) and possibly a third (the complex numbers). But there are many more fields out there, and many of them are quite interesting. A lot of them ARE subsets of the reals or the complexes, but there are many more which cannot be thought of as real numbers. In any case, we can do linear algebra over any field (in the sense that we can define vector spaces over an arbitrary field and linear maps between vector spaces; or, more concretely, we can study simultaneous equations over a field and use matrices with entries from that field to solve these equations). In fact, almost everything you know about linear algebra holds for all fields.

Now what is "Z_p"? It is the field with p elements, where p is a prime number. This is kind of weird, as all of the fields you're familiar with have infinitely many elements. It's a little tricky to talk about Z_p without some "ring theory" or "congruences", so let me try this:

Z_p is the set of integers from 0 to p-1 with a special addition and multiplication defined on it. Let's call these operations [+] and [*] so as not to confuse them with the ordinary + and * on integers. So...

1. Given a and b in Z_p, a[+]b (Z_p addition) is defined to be the remainder r of a+b (regular addition of integers) when you divide it by p.
2. Given a and b in Z_p, a[*]b (Z_p multiplication) is defined to be the remainder of a*b (regular multiplication of integers) when you divide it by p.

I think you should be able to prove that [+] satisfies the right properties pretty easily (what is the additive inverse of a number a in Z_p?), but you may have trouble with showing that multiplicative inverses exist, so you may just have to believe me that this works out.
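In case it helps, the two definitions above are easy to play with in code. Here is a minimal Python sketch for p = 7 (the helper names are my own), including a brute-force search for multiplicative inverses:

```python
# Z_p arithmetic for p = 7: [+] and [*] are ordinary + and * followed by
# taking the remainder on division by p.
p = 7

def add_p(a, b):
    """a [+] b : remainder of the ordinary sum a+b on division by p."""
    return (a + b) % p

def mul_p(a, b):
    """a [*] b : remainder of the ordinary product a*b on division by p."""
    return (a * b) % p

# The additive inverse of a nonzero a is p - a, since a [+] (p - a) = 0.
assert add_p(3, p - 3) == 0

# Multiplicative inverses of nonzero elements exist; here found by brute force.
def inverse(a):
    return next(b for b in range(1, p) if mul_p(a, b) == 1)

assert mul_p(3, inverse(3)) == 1   # 3 [*] 5 = 15 mod 7 = 1
```

Trying this for a composite p (say p = 6) makes `inverse(2)` fail, which is exactly why primality matters here.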

4. Yeah, this is a lot to digest. I'd again suggest finding a book that covers fields in the context of introductory linear algebra.

5. hi, thanks for the reply, this is indeed a lot to digest - I think I've got indigestion already!

The text I'm reading is this pdf on linear algebra. As you can see I am just at the beginning of the 'book', and the questions I am asking are referring to pages 3 to 5. Unfortunately there isn't an appendix in it.

You can see that the book mentions the "field axioms", as you did, but I was unsure as to how these actually fitted into the grand scheme of it all.

So if I understand this right...

> A 'field' is simply a set whose members satisfy the 'field axioms', as defined by you and the book?

> The set of rational numbers, and set of real numbers are just two examples of sets which satisfy the 'field axioms', and can therefore be called 'fields' rather than sets?

I also understand that a field can have a finite number of elements, but I'm still unsure as to how this fits in with prime numbers, and modular arithmetic (I understand how this arithmetic works, in the case of a clock for example, but not in this context).

I'm also unsure as to why we might need to have only a finite number of members within a field. In terms of application it just seems that using Real numbers should have it all covered?

-Also can linear algebra be performed only for 'fields' (sets satisfying the 'field axioms'), and nothing else? This is why it is important to define a set as a 'field' before we attempt to perform linear algebra on it?

Thanks,
bit4bit

6. Okay, so it looks like this text covers what I'd hoped would be in the book. It looks like it's probably a similar level to the text I was suggesting, so I think you should be fine with it.

So, yes, a field is a set which satisfies the field axioms, and therefore the reals and the rationals are fields (whereas, say, the integers are not).

Now let me introduce an idea which is very important for finite fields. The characteristic of a field is the smallest positive integer n so that adding 1 to itself n times gives you zero. If no such integer exists, we say the field has characteristic 0. The reals and the rationals certainly have characteristic 0. But finite fields always have nonzero characteristic, as there are only finitely many elements (this isn't quite the whole proof).

Now let F be a field of nonzero characteristic n. If n were composite, then it would have a nontrivial factorization ad = n (1 < a, d < n). Since n is the smallest positive integer such that n*1 (i.e., adding 1 to itself n times) is 0, we must have that a*1 and d*1 are not zero. But then (a*1)*(d*1) = n*1 = 0. On the other hand, a*1 and d*1 have multiplicative inverses x and y, respectively, so we can multiply the equation by these: 1 = x*y*(a*1)*(d*1) = x*y*0. You can show that anything times 0 is 0, and so we find 1 = 0, contradicting the field axiom that 1 and 0 are different. Thus n must be prime.
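The zero divisors that drive this proof are easy to exhibit concretely for a composite modulus. A quick check (modulus 6 is my own choice of example):

```python
# Modulo 6, the factorization 2 * 3 = 6 gives two nonzero elements
# whose product is 0 -- exactly the situation the proof rules out for fields.
n = 6
a, d = 2, 3
assert (a * d) % n == 0 and a % n != 0 and d % n != 0

# And, as the field axioms then force, 2 can have no multiplicative inverse mod 6:
assert all((a * b) % n != 1 for b in range(n))
```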

Now arithmetic mod p (p prime) gives the prototypical example of a finite field--the hardest thing to show is that multiplicative inverses exist, but this is basically a result of the finiteness of the set Z_p = {0, 1, ..., p-1} and the fact that, if p divides the product ab, then p divides a or b. So that's where the connection between primes and modular arithmetic comes in. And then you can show the only possible sizes of finite fields are prime powers--basically, any finite field F must contain a subset that looks like Z_p. You can then show that F is a finite-dimensional vector space over Z_p, from which the structure of vector spaces tells you that the size of F is p raised to the dimension of F over Z_p.

The reals do not contain any of the finite fields Z_p (in the sense that the operations on the reals and the operations on Z_p cannot be made to match up with each other). So, for mathematical applications, we need to consider all fields and not just the reals. In real world applications, linear algebra over the reals (or maybe complex numbers) is usually enough, but finite fields have many applications nowadays. Many encryption algorithms are based on finite fields: Diffie-Hellman key exchange and ECC, for example. I suppose linear algebra over finite fields doesn't necessarily enter into these considerations, though...

Linear algebra can be done over more general structures called rings, which are basically like fields except not all elements may have multiplicative inverses, multiplication may not commute, and there may not even be a multiplicative identity. However, the theory is the nicest over fields and requires the least abstract algebra to do it, so that's why this book is sticking to fields. NOTE: in order to do linear algebra "over a set", it must come equipped with operations addition and multiplication and must be "closed" under these operations (sums and products of elements are in the set). This basically forces us to work with rings (I can't think of a reason additive inverses are forced on us... probably so we can at least hope to solve equations when the determinant is invertible in the ring).

7. hmm.. you've certainly given me much to think about here, and I'm afraid I'm too tired at the moment to think about it all thoroughly.

I'll get back tomorrow when I've had some sleep

8. may i ask what makes a field different from a group?

9. The field admits of two closed operations, + and ×, two identities, 0 and 1, and two inverses, -x and 1/x. A group has only one of each. Note that, whereas the field operations are invariably(?) arithmetic addition and multiplication, the group operation need not be either. Matrix multiplication, for example, for the permutation and rotation groups springs immediately to mind.

In a loose sort of way, you can think of a field as being two abelian groups (each with the same group elements) superimposed. And I did say loose!

10. thank you for clearing that up.

11. Wallaby: A group satisfies the first three axioms I listed above. As Guitarist said, we only specify one operation, and it need not be commutative. A field has much more structure. Its additive structure does describe a group (an abelian one at that), but it also has a multiplicative structure. Any set with two operations which satisfies axioms 1-4, 7, and 9 (along with the right distribution rule, dual to the left distribution in axiom 9) is a ring. Fields go a bit further and provide multiplicative identity, commutativity, and invertibility of nonzero elements. Like Guitarist said, the nonzero elements thus form an abelian group. So, in a sense, a field is the superposition of two abelian groups, one being the whole set, one being the set minus 0, and we specify that the operations cooperate with each other via distribution.

Guitarist: What do you mean by "field operations are invariably(?) arithmetic addition and multiplication"? Abstractly, I would say that field operations are not arithmetic (or, maybe, just as arithmetic as group operations are). Also, I can think of descriptions of the complex numbers in terms of real matrices, in which case multiplication is given by matrix multiplication, which you give as an example of a "non-arithmetic operation". But I suppose that we can construct any field from the integers by using polynomials, algebraic quotients, and quotient fields, in which case fields do have an underlying arithmetic structure (e.g., any field contains Z or Z/pZ for some p).

12. Originally Posted by serpicojr
Guitarist: What do you mean by "field operations are invariably(?) arithmetic addition and multiplication"?
I wasn't sure of the truth of that assertion, hence the parenthetic query.
PS by edit: On reflection, yes, I am sure about that; fields admit of only arithmetic operations.
Also, I can think of descriptions of the complex numbers in terms of real matrices,
Umm. I'm not sure I can, but I need to think about that. Remember the reals are a subset of the complexes, so how can you describe a superset in terms of its subset? Dunno, you may be right.
any field contains Z or Z/pZ for some p.
I have a concern about this statement. Maybe you have a text to hand, I don't, but it doesn't smell quite right to me.

13. You're going to have to clarify what you mean by "fields admit of only arithmetic operations". That was my original question. To be more to the point, what do you mean when you call an operation "arithmetic"?

Here's the high brow way to think of the complex numbers as matrices over the reals: the complex numbers are a real vector space of dimension 2. They also happen to act on themselves by multiplication, and this is a real linear map. Thus you have an injection of the complex numbers into 2x2 real matrices, and it's easy to check that this is actually a ring homomorphism (i.e., the sum of the linear maps induced by multiplication by z and w is the linear map induced by multiplication by (z+w), and the product of the linear maps induced by z and w is the linear map induced by zw). Working this out concretely, we obtain the low brow way of thinking about this: let I be the 2x2 identity matrix, J the 2x2 matrix with 0's on the diagonal and 1, -1 on the antidiagonal. Then J^2 = -I. Consider the real span of these matrices--i.e., matrices aI+bJ, a, b real. It's easy to show this is a ring of matrices isomorphic to C.
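The "low brow" construction above can be verified directly; here is a small Python sketch (the helper names `mat` and `mat_mul` are mine) showing that the matrices aI + bJ multiply exactly like the complex numbers a + bi:

```python
def mat(a, b):
    """The 2x2 real matrix a*I + b*J representing the complex number a + bi,
    with J the matrix having 0's on the diagonal and -1, 1 on the antidiagonal."""
    return [[a, -b],
            [b, a]]

def mat_mul(m, n):
    """Ordinary 2x2 matrix multiplication."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# J^2 = -I, so J plays the role of i:
J = mat(0, 1)
assert mat_mul(J, J) == mat(-1, 0)

# The matrix product matches complex multiplication:
# (2+3i)(1-4i) = 2 - 8i + 3i + 12 = 14 - 5i
assert mat_mul(mat(2, 3), mat(1, -4)) == mat(14, -5)
```

So addition and multiplication of these matrices never leave the set {aI + bJ}, which is the ring isomorphic to C described above.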

(How can you describe a superset in terms of its subset? That's how the complex numbers are defined in the first place--in terms of reals! They're formal real linear combinations of vectors 1 and i, where we define i^2 = -1 and extend multiplication linearly--that's a definition completely in terms of real numbers.)

As for the "every field contains Z or Z/pZ" statement, consider this: let R be a ring with identity 1. There is a unique homomorphism of rings from Z to R mapping 1 to 1. Thus, using isomorphism theorems, there is a proper ideal of Z, say, nZ such that Z/nZ is isomorphic to the image of Z in R. We can think of this as an inclusion of Z/nZ in R, and so we just view Z/nZ as a subset of R. If R=F is a field, then Z/nZ must be an integral domain--i.e., ab = 0 implies a = 0 or b = 0. This implies n = 0 or n = p is prime. Thus F contains Z or contains Z/pZ. In fact, F has characteristic 0 iff it contains Z, and it has characteristic p iff it contains Z/pZ. (Relatedly, the prime field is the smallest subfield contained in a field, and the result I just proved shows the prime field is either Q (characteristic 0) or Z/pZ (characteristic p).)
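The step "Z/nZ is an integral domain implies n = 0 or n prime" can also be checked empirically for small moduli. A quick sketch (function names are my own):

```python
# Empirical check: Z/nZ has no zero divisors exactly when n is prime.
def is_prime(n):
    return n > 1 and all(n % k != 0 for k in range(2, n))

def is_integral_domain(n):
    """True if ab = 0 (mod n) forces a = 0 or b = 0, i.e. no zero divisors."""
    return all((a * b) % n != 0 for a in range(1, n) for b in range(1, n))

for n in range(2, 30):
    assert is_integral_domain(n) == is_prime(n)
```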

14. Hi there, sorry I haven't got back yet I've been tied down with job interviews and such.

Now let me introduce an idea which is very important for finite fields. The characteristic of a field is the smallest positive integer n so that adding 1 to itself n times gives you zero. If no such integer exists, we say the field has characteristic 0. The reals and the rationals certainly have characteristic 0. But finite fields always have nonzero characteristic, as there are only finitely many elements (this isn't quite the whole proof).
Is this where the idea of modular addition and multiplication come in to play?...i.e. the only way n*1=0 could be true, without having n=0, is if the numbers within the finite set approach a modulus and then 'loop' back around again, like the numbers on a clock...e.g. these numbers will form a finite field of 13 elements (0,1,2,...,12), where 12 is the modulus (such that 10+10=8, 3*9=3, and 12*1=0)? So in the case of a finite field, the 'characteristic', n, will be the modulus, and for an infinite field (like the real numbers), the only way n*1=0 could be true is indeed if n=0? Also, although the arithmetic in a finite field is modular, it still satisfies the field axioms and is therefore still a field? Have I got all of that right?

Now let F be a field of nonzero characteristic n. If n were composite, then it would have a nontrivial factorization ad = n (1 < a, d < n). Since n is the smallest positive integer such that n*1 (i.e., adding 1 to itself n times) is 0, we must have that a*1 and d*1 are not zero. But then (a*1)*(d*1) = n*1 = 0. On the other hand, a*1 and d*1 have multiplicative inverses x and y, respectively, so we can multiply the equation by these: 1 = x*y*(a*1)*(d*1) = x*y*0. You can show that anything times 0 is 0, and so we find 1 = 0, contradicting the field axiom that 1 and 0 are different. Thus n must be prime.
Ok, so F is a finite field with n+1 elements, and a characteristic (or 'modulus') of n. So if n is composite it must have factors other than 1 and itself (else it is a prime); hence, ad = n (1 < a, d < n)? Since n is the smallest integer in the field that equals 0 when multiplied by 1 (i.e. n*1=0), and since a and d must be less than n, then neither a nor d can equal n and hence, neither a*1 nor d*1 can equal 0? Also, using multiplicative inverses you showed that a similar expression is found where two non-zero values multiply to equal zero, and therefore since this cannot be true, then n must be prime? OK I think I understand all of that, just want to be sure.

Now arithmetic mod p (p prime) gives the prototypical example of a finite field--the hardest thing to show is that multiplicative inverses exist, but this is basically a result of the finiteness of the set Z_p = {0, 1, ..., p-1} and the fact that, if p divides the product ab, then p divides a or b. So that's where the connection between primes and modular arithmetic comes in. And then you can show the only possible sizes of finite fields are prime powers--basically, any finite field F must contain a subset that looks like Z_p. You can then show that F is a finite-dimensional vector space over Z_p, from which the structure of vector spaces tells you that the size of F is p raised to the dimension of F over Z_p.
I'm afraid you've lost me on the highlighted part.

The reals do not contain any of the finite fields Z_p (in the sense that the operations on the reals and the operations on Z_p cannot be made to match up with each other). So, for mathematical applications, we need to consider all fields and not just the reals. In real world applications, linear algebra over the reals (or maybe complex numbers) is usually enough, but finite fields have many applications nowadays. Many encryption algorithms are based on finite fields: Diffie-Hellman key exchange and ECC, for example. I suppose linear algebra over finite fields doesn't necessarily enter into these considerations, though...
Well I was hoping you'd get to this. You have confirmed exactly what I was previously thinking in terms of applications. I suppose the main applications of finite fields in linear algebra are therefore in computing then. I have been put off by all of this 'discrete' mathematics before (in fact I thought it was generally quite silly). Are you saying though, that when solving a linear algebra problem, for example a couple of simultaneous equations as described above, which exist in Real, 3-dimensional space, then I would not only have to consider the solution over the Real field, but also over all fields? I'm still not quite sure as to why.

Linear algebra can be done over more general structures called rings, which are basically like fields except not all elements may have multiplicative inverses, multiplication may not commute, and there may not even be a multiplicative identity. However, the theory is the nicest over fields and requires the least abstract algebra to do it, so that's why this book is sticking to fields. NOTE: in order to do linear algebra "over a set", it must come equipped with operations addition and multiplication and must be "closed" under these operations (sums and products of elements are in the set). This basically forces us to work with rings (I can't think of a reason additive inverses are forced on us... probably so we can at least hope to solve equations when the determinant is invertible in the ring)
Hmmm.. To fully understand this linear algebra I am embarking upon (and eventually multivariable calculus), must I extend this knowledge of abstract algebra to ring theory as well? Honestly now, is this abstract algebra really necessary when facing practical problems in linear algebra and multivariable calculus? I think I'm coming to realize that I want to try and stay away from this 'abstract algebra' as much as possible. It almost seems like it's 'trying' to be 'mathematics' but doesn't quite qualify. I guess it is helpful for data handling applications in computing and the like, but it doesn't really seem like such an important calculation tool in most other applications.

Nonetheless, thanks for all the helpful replies :wink: . (I might not be back again for a few days now - I'm going to stay with some relatives for a bit.)

Catch ya later,
bit4bit

15. First, let me just say that I'm discussing linear algebra in greater generality than you probably need. For example, I doubt you'll need the abstract theory of rings for your applications, or even general fields--I imagine everything you're dealing with will be real or at worst (or, depending on how you look at it, at best) complex. But you are asking mathematically interesting questions, and so I'm trying to provide the mathematician's answer to them.

Is this where the idea of modular addition and multiplication come in to play?...i.e. the only way n*1=0 could be true, without having n=0, is if the numbers within the finite set approach a modulus and then 'loop' back around again, like the numbers on a clock...e.g. these numbers will form a finite field of 13 elements (0,1,2,...,12), where 12 is the modulus (such that 10+10=8, 3*9=3, and 12*1=0)?
Your recollection of modular arithmetic is a little off--you're off by one, to be specific. If we're doing modular arithmetic to the modulus m, then we usually consider the set {0,1,2,...,m-1}. I like to think of this as the set of remainders when you divide by m, and modular arithmetic is just how remainders behave under addition and multiplication. We could include m in our set, but then we'd have to exclude its remainder when divided by m--namely, 0. In general, we can consider any set of m integers which have different remainders when you divide by m. So, your example above should have either been the set {0,...,12} with the modulus 13, so that 10+10 = 7, 3*9 = 1, and 13*1 = 0, or it should have been the set {0,...,11} with the modulus 12 so that the equations you wrote above held. The first case is a field, the second case is not (3*4 = 12 = 0, but neither 3 nor 4 is 0 modulo 12).
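The corrected clock arithmetic is quick to confirm in Python (Python's `%` operator is exactly "remainder on division by the modulus" for nonnegative integers):

```python
# The set {0,...,12} with modulus 13 -- a field, since 13 is prime:
assert (10 + 10) % 13 == 7
assert (3 * 9) % 13 == 1
assert (13 * 1) % 13 == 0

# Versus {0,...,11} with modulus 12, which is NOT a field:
# 3 and 4 are nonzero but multiply to 0, so neither can have an inverse.
assert (3 * 4) % 12 == 0 and 3 % 12 != 0 and 4 % 12 != 0
```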

So in the case of a finite field, the 'characteristic', n, will be the modulus, and for an infinite field (like the real numbers), the only way n*1=0 could be true is indeed if n=0? Also, although the arithmetic in a finite field is modular, it still satisfies the field axioms and is therefore still a field? Have I got all of that right?
The characteristic is very much like the modulus--in fact, it's one and the same if we realize the finite field of p elements via modular arithmetic modulo p. (Just as you suggest, modular arithmetic to a prime modulus happens to satisfy the field axioms, and so that's how the connection is made.) But this connection falls apart when we consider general fields of characteristic p, of which there are many. Given any natural number n, there is a unique field of p^n elements of characteristic p. This cannot be realized as modular arithmetic over the integers (although it can be realized as modular arithmetic over more general rings... don't worry about this!). And there are infinite fields of characteristic p: for example, take the field of rational functions with coefficients in Z/pZ, the field of p elements. But if a field has characteristic 0, it must be infinite, as it "contains the integers".

This is all I have time to say right now, but let me conclude that you're correct that the major real world applications of finite fields are in computing. Finite fields can be used to prove neat arithmetic and geometric facts, but I can't think of anything of this sort that's useful outside of computer applications.

So why study finite fields? Because they exist! (Not to mention the theory is divinely beautiful in its simplicity.)

16. ..haha.. to quote a joke:

"Someone with a science degree says "why does it work" , someone with an engineering degree says "how does it work", someone with an accounting degree says "how much will it cost", and someone with an art degree says "would you like fries with that?" " :P

Perhaps I am more interested in the 'how' than the 'why'. (?)

Thanks for clearing up my misunderstanding about modular arithmetic and so forth anyway, I can see where I went wrong now.

I must admit some of what has been discussed in this thread has been very interesting indeed. The proof of finite fields having to be of a prime characteristic was especially elegant, however, the thing with me is application, application, application! I like to be able to build something that I can touch and use.

Maybe deep down somewhere I do have a liking for this kind of thing, but it hasn't surfaced as of yet anyway.

Thanks again,
bit4bit

17. I'm more than happy to help, as I love math and talking about math. So please, feel free to continue asking any questions you may have during your studies of linear algebra.

I should have suggested this in the first place, but you can probably just skip any questions or considerations about finite fields (or nonzero characteristic), or insert "real numbers" whenever a result talks about general fields. Another possibility is for you to pick up a "linear algebra for engineers" or "applied linear algebra" type text. This is not to suggest that you can't deal with the abstract mathematical version of linear algebra; rather, this will allow you to skip over the stuff you really don't need with ease.

The fact is that most everything works out the same over any field except for two major problems:

1. In fields of nonzero characteristic, you can't divide by the characteristic. So, for example, I can't divide by 2 when working modulo 2, and this means that I can't, say, take averages of an even number of operators. But this, of course, is not a problem for the real numbers, where every integer is (thankfully) invertible. This is really quite a useful property and is often taken for granted! Of course, this is not a thing you'll be concerned with in real world applications.

2. Most fields are not algebraically closed--i.e., you can't solve all polynomials over that field. When you learn about "eigenvectors" and "eigenvalues" (quick definition: if an operator acts on a vector via a scalar, that vector is an eigenvector, and the scalar is an eigenvalue), you'll find out that this theory works easiest over algebraically closed fields. Unfortunately, the reals are not. However, the complex numbers are. This is why I keep suggesting that the complex numbers are an important case for you to keep in mind--this case is actually useful in the real world (e.g., in diff eq's).
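The real-versus-complex eigenvalue gap can be made concrete with the 90-degree rotation matrix [[0, -1], [1, 0]], whose characteristic polynomial is x^2 + 1. A small sketch using only the quadratic formula (the variable names are my own):

```python
import cmath

# Characteristic polynomial of the rotation matrix [[0, -1], [1, 0]]:
# x^2 + 0x + 1, which has no real roots -- so no real eigenvalues exist.
a, b, c = 1, 0, 1
disc = b * b - 4 * a * c
assert disc < 0                        # negative discriminant: no real roots

# Over the complex numbers the polynomial factors completely,
# giving the eigenvalues +i and -i.
root = cmath.sqrt(disc)
eigenvalues = [(-b + root) / (2 * a), (-b - root) / (2 * a)]
assert eigenvalues == [1j, -1j]
```

Geometrically this makes sense: a rotation moves every nonzero real vector off its own line, so no real eigenvector can exist.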

18. Originally Posted by serpicojr
Linear algebra can be done over more general structures called rings, which are basically like fields except not all elements may have multiplicative inverses, multiplication may not commute, and there may not even be a multiplicative identity. However, the theory is the nicest over fields and requires the least abstract algebra to do it, so that's why this book is sticking to fields. NOTE: in order to do linear algebra "over a set", it must come equipped with operations addition and multiplication and must be "closed" under these operations (sums and products of elements are in the set). This basically forces us to work with rings (I can't think of a reason additive inverses are forced on us... probably so we can at least hope to solve equations when the determinant is invertible in the ring).
Hey serpicojr, when limited to modules over rings (i.e. a "vector space" where instead of a field you have a ring - for those who don't know the jargon) you lose the notion of a basis. I can't remember if you mentioned this fact.

19. Yeah, I didn't want to get into details, as our friend who's asking the questions is only learning linear algebra over fields and doesn't have abstract algebra at his disposal. The "the theory is the nicest over fields" statement was the deepest I wanted to get.

20. Originally Posted by serpicojr
I'm more than happy to help, as I love math and talking about math. So please, feel free to continue asking any questions you may have during your studies of linear algebra.

I should have suggested this in the first place, but you can probably just skip any questions or considerations about finite fields (or nonzero characteristic), or insert "real numbers" whenever a result talks about general fields. Another possibility is for you to pick up a "linear algebra for engineers" or "applied linear algebra" type text. This is not to suggest that you can't deal with the abstract mathematical version of linear algebra; rather, this will allow you to skip over the stuff you really don't need with ease.
Well thanks a lot, I'm sure there will be many more questions coming soon

2. Most fields are not algebraically closed--i.e., you can't solve all polynomials over that field. When you learn about "eigenvectors" and "eigenvalues" (quick definition: if an operator acts on a vector via a scalar, that vector is an eigenvector, and the scalar is an eigenvalue), you'll find out that this theory works easiest over algebraically closed fields. Unfortunately, the reals are not. However, the complex numbers are. This is why I keep suggesting that the complex numbers are an important case for you to keep in mind--this case is actually useful in the real world (e.g., in diff eq's).
I have a brief knowledge of complex numbers already, enough to get by as it were, so that should help. Also, when performing linear algebra over a complex field, is that what is referred to as complex analysis? I've heard that term tossed around quite a bit.

21. Roughly speaking, complex analysis is "calculus over the complex numbers" (in the sense that you study complex differentiable functions, integration in the complex plane, and power series, among other things). It's a really beautiful theory (for example, it provides a super-short proof of the fact that the complex numbers are algebraically complete), but it's also not necessary for basic linear algebra.
