
Thread: Algebraic Number Theory

  1. #1 Algebraic Number Theory 
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Alright, I'm going to start a new thread on algebraic number theory. This is my area of research, and so I have a lot of passion for the subject. I'll be working out of Neukirch's text on the subject.

    ------------------------------------------

    So algebraic number theory is "algebraic" for two main reasons. First, it studies so-called algebraic numbers, which I'll define shortly. Second, many of the tools used in it come from algebra: at first linear algebra and Galois theory play a major role, with representation theory and group cohomology becoming the main tools of study as we progress (although I'm not so sure we'd make it so far in this thread). However, there is a lot of geometry, analysis, and topology floating around, too.

    Our studies will look like very simple commutative algebra at first. This is because we're mostly dealing with rings which have very nice properties--fields (and a pretty restrictive set of fields, at that) and nice subrings of fields. But at some point we have to start using the special properties of the fields we're looking at, and this is where a lot of the meat of the theory and some of the less algebraic methods start popping up.

    What do you, the reader, have to know? Linear algebra, basic abstract algebra, and basic Galois theory will be assumed, as will be some basic facts about the topology of the real and complex numbers and metric spaces. Unfortunately, you'll be left behind if you don't know this material.

    ------------------------------------------

    Now let me develop some of the basic questions we'll be asking. An algebraic number is a number which satisfies a polynomial with coefficients in Q, the rationals. 2<sup>1/2</sup> and i are some famous algebraic numbers, the former satisfying x<sup>2</sup>-2, the latter x<sup>2</sup>+1.

    Any algebraic number α (that's supposed to be an alpha) determines a finite field extension of Q, the field K = Q(α). In general, we'll call a finite extension of Q a number field. I'm assuming you've seen this before if you're reading this, but let's recall that this ring is the same thing as:

    -the set of numbers f(α)/g(α), where f(x), g(x) are polynomials with coefficients in Q, g(α) ≠ 0
    -the set of numbers f(α), where f(x) is a polynomial with coefficients in Q
    -the quotient ring Q[x]/(h(x)), where h(x) is a minimal polynomial for α over Q (and h(x) is unique if we stipulate that h(x) is monic, i.e. its leading coefficient is 1)

    In a sense, we can reduce the study of α to the study of the number field K. Let's recall here that K is a vector space of dimension n = deg(h(x)) over Q, and so we define the degree of the extension K over Q to be [K:Q] = n.
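
    (If you want to play along at home, here's a quick sketch using Python's sympy library--my choice of tool, nothing canonical about it--that computes a minimal polynomial and the degree [K:Q] for α = 2<sup>1/2</sup>:)

        from sympy import sqrt, Symbol, minimal_polynomial, degree

        x = Symbol('x')
        alpha = sqrt(2)

        # h(x), the monic minimal polynomial of alpha over Q
        h = minimal_polynomial(alpha, x)
        print(h)              # x**2 - 2

        # [K:Q] = deg(h) for K = Q(alpha)
        print(degree(h, x))   # 2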

    Note that any algebraic number satisfies a polynomial with coefficients in Z. Indeed, if α satisfies h(x), let c be the least common multiple of the denominators of the coefficients of h(x). Then ch(x) has integer coefficients. If in fact α satisfies a monic polynomial with integer coefficients, we will call α an algebraic integer. We will learn shortly that the set of algebraic integers contained in K is a ring, and we'll call this the ring of integers of K. The ring of integers contains all of the information that we need to study K, and the fact that it doesn't have inverses makes some of the algebraic properties of K more evident. So we'll restrict a lot of our attention to this ring.

    When K = Q, we naturally have that the ring of integers is just... the ring of integers, Z. One of the main results in elementary number theory is the Fundamental Theorem of Arithmetic--namely, given any nonzero number n ∈ Z, we can write uniquely (up to reordering of the factors):

    n = ep<sub>1</sub><sup>r<sub>1</sub></sup>...p<sub>k</sub><sup>r<sub>k</sub></sup>

    where e = ±1, the p<sub>i</sub> are distinct prime numbers, and r<sub>i</sub> ≥ 1. So a natural question is... what is the analog of FTA for rings of integers of number fields?
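
    (As a concrete sanity check--sympy again, purely as an illustration--here is n = -720 written in the form e p<sub>1</sub><sup>r<sub>1</sub></sup>...p<sub>k</sub><sup>r<sub>k</sub></sup>:)

        from sympy import factorint

        n = -720
        e = -1 if n < 0 else 1
        factors = factorint(abs(n))      # {2: 4, 3: 2, 5: 1}
        print(e, factors)

        # reassemble n from its factorization
        m = e
        for p, r in factors.items():
            m *= p**r
        assert m == n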

    The first problem is that unique prime factorization of elements does not hold. However, using the theory of ideals, we can come up with a suitable approximation to unique factorization. Our first question is then:

    1. How can we approximate unique factorization, and how badly does unique factorization fail?

    It turns out we can quantify this via a gadget called the class group.

    The next problem is that if two numbers have the same factorization in the unspecified sense that I suggest above, their quotient may not be ±1. This is because there will generally be a great many algebraic integers u such that u<sup>-1</sup> is also an algebraic integer. These algebraic integers will be called units, and they form a multiplicative group, the group of units of K. Our next question is then:

    2. What is the structure of the group of units of K?

    This question is answered by Dirichlet's Unit Theorem, which tells us quite a bit about the structure of the group.

    ------------------------------------------

    Let me know if you're interested and I'll keep going!


    Reply With Quote  
     


  3. #2  
    Forum Professor river_rat's Avatar
    Join Date
    Jun 2006
    Location
    South Africa
    Posts
    1,517
    Count me interested - I can barely remember most of this.

    Just a quick question, the unique factorization is for elements of the ring of integers in our field I assume? I know this question came about in relation to Fermat's last theorem, do you know the details?


    As is often the case with technical subjects we are presented with an unfortunate choice: an explanation that is accurate but incomprehensible, or comprehensible but wrong.
    Reply With Quote  
     

  4. #3  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,620
    As I am sure it will be good for me, count me also as a reluctant assenter!
    Reply With Quote  
     

  5. #4  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Quote Originally Posted by river_rat
    Just a quick question, the unique factorization is for elements of the ring of integers in our field I assume?
    Indeed!

    I know this question came about in relation to Fermat's last theorem, do you know the details?
    This is true, and I should know the details, but unfortunately I don't. I believe it has something to do with factorization in cyclotomic fields.
    Reply With Quote  
     

  6. #5  
    Forum Professor sunshinewarrior's Avatar
    Join Date
    Sep 2007
    Location
    London
    Posts
    1,525
    On this stuff I don't have enough maths to follow it, so I shall just sit on the sidelines and watch. Fascinating stuff, though, and thanks for doing it. Perhaps some day I can do notes on the short stories of Rudyard Kipling or some such...
    Reply With Quote  
     

  7. #6  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Let me start with an example. Let's let K = Q(i). I claim the ring of integers is Z[i]. Indeed, to see this, suppose z = x+iy is an algebraic integer. If y = 0, then x is a rational number, and so x must be a rational integer. (Exercise 1: prove this!) So let's assume y≠0. Note that z is a root of the polynomial:

    h(X) = X<sup>2</sup>-2xX+x<sup>2</sup>+y<sup>2</sup>

    This polynomial clearly has rational coefficients. Since z is irrational, h(X) must be irreducible, and so it's the (monic) minimal polynomial of z. z is also an algebraic integer, so z satisfies some monic polynomial f(X) with integer coefficients. h(X) must divide f(X) due to minimality. But there is a theorem (due to Gauss?) that says a monic polynomial with coefficients in an integral domain factors over its field of fractions iff it factors over the domain. Thus h(X) must have rational integer coefficients.

    Now we know 2x and x<sup>2</sup>+y<sup>2</sup> are integers, and we wish to show that x and y are integers. There are a lot of ways to do this, and they're all pretty meandering. You just have to play around and find something that works. So x is either an integer or it's half an (odd) integer. If it's half an integer, then for x<sup>2</sup>+y<sup>2</sup> to be an integer, y must also be half an integer. So there are integers m, n such that:

    x = (2m+1)/2 = m+1/2
    y = (2n+1)/2 = n+1/2

    Note that:

    x<sup>2</sup> = m<sup>2</sup>+m+1/4
    y<sup>2</sup> = n<sup>2</sup>+n+1/4

    x<sup>2</sup>+y<sup>2</sup> = m<sup>2</sup>+m+n<sup>2</sup>+n+1/2

    But this contradicts that x<sup>2</sup>+y<sup>2</sup> is an integer. So x must be an integer. This implies that y<sup>2</sup> and hence y are integers.

    Z[i] is called the ring of Gaussian integers. Unlike most rings of integers, the Gaussian integers are pretty simple. They satisfy unique factorization, and there are only finitely many units. So they don't exhibit any of the weird behavior I described before, but this is all good and well, as it allows me to introduce some other gadgets we'll be using in a more general setting.
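
    (Here's a quick sympy illustration of the computation above--the library is just my convenience: 1+i has a monic integer minimal polynomial, while (1+i)/2, which lies in Q(i) but not in Z[i], does not.)

        from sympy import I, Rational, Symbol, minimal_polynomial

        x = Symbol('x')

        # 1+i is an algebraic integer: monic minimal polynomial with integer coefficients
        print(minimal_polynomial(1 + I, x))                     # x**2 - 2*x + 2

        # (1+i)/2 lies in Q(i) but is not an algebraic integer
        print(minimal_polynomial(Rational(1, 2)*(1 + I), x))    # x**2 - x + 1/2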

    Let's introduce one right now. The norm of a Gaussian integer z = x+iy is the nonnegative real number N(z) = (x+iy)(x-iy) = x<sup>2</sup>+y<sup>2</sup>. This is the square of the usual complex modulus, and we've already seen this number as the constant term in the minimal polynomial for z. Note that N is multiplicative--letting * denote complex conjugation, we have:

    N(zw) = (zw)(zw)* = zz*ww* = N(z)N(w)
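
    (A throwaway Python check of multiplicativity on a few random Gaussian integers, just to make the identity concrete--the snippet is mine, not part of the theory:)

        import random

        def N(z):
            # norm of a Gaussian integer stored as a Python complex with integer parts
            return round(z.real)**2 + round(z.imag)**2

        for _ in range(5):
            z = complex(random.randint(-9, 9), random.randint(-9, 9))
            w = complex(random.randint(-9, 9), random.randint(-9, 9))
            assert N(z*w) == N(z)*N(w)
        print("N(zw) = N(z)N(w) checked on random samples")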

    We can immediately use the norm to tell us about the units of Z[i]. Recall that an element u of a (commutative) ring (with identity) R is a unit if it has a multiplicative inverse in R. The following is a specific instance of a more general fact.

    Theorem: u is a Gaussian unit iff N(u) = 1.

    <u>Proof</u>: If u is a Gaussian unit, then u<sup>-1</sup> is also a Gaussian integer. So N(u) and N(u<sup>-1</sup>) must be nonnegative integers. But N(u)N(u<sup>-1</sup>) = N(uu<sup>-1</sup>) = N(1) = 1, and so we must have N(u) = 1.

    Conversely, if N(u) = 1, then if we let * denote complex conjugation, we have uu* = 1. u* = u<sup>-1</sup>, and since u is in Z[i], we must have u* is, too. Thus u is a unit. ☐

    We can now completely determine the units of Z[i]. I leave the determination of this finite set as Exercise 2.
    Reply With Quote  
     

  8. #7  
    Forum Professor river_rat's Avatar
    Join Date
    Jun 2006
    Location
    South Africa
    Posts
    1,517
    But there is a theorem (due to Gauss?) that says a monic polynomial with coefficients in an integral domain factors over its field of fractions iff it factors over the domain.
    Yep, Gauss's polynomial lemma i think its called.

    This is true, and I should know the details, but unfortunately I don't. I believe it has something to do with factorization in cyclotomic fields.
    I'll consider it homework
    As is often the case with technical subjects we are presented with an unfortunate choice: an explanation that is accurate but incomprehensible, or comprehensible but wrong.
    Reply With Quote  
     

  9. #8  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Alright, here's a long one. Let me know if it's too much for one sitting and I'll try to keep things shorter in the future.

    ----------------------------------------------

    It's important to think of the norm as representing a size. This is useful in both geometric and algebraic senses. For example, in the special case of the Gaussian integers we have the

    Proposition (Division Algorithm): Let z and w be Gaussian integers, w ≠ 0. Then there exist Gaussian integers q and r with N(r) < N(w) and z = wq+r.

    Note: When I apply this proposition, I'll often say I'm dividing z by w. I'll call q the quotient, r the remainder.

    <u>Proof</u>: Let v = z/w. v is some number in Q(i). Now picture the complex plane, and imagine the Gaussian integers sitting in the complex plane as the square lattice, i.e. all of the points in the plane with integer coordinates. Any complex number is within sqrt(2)/2 of a Gaussian integer. (Note that the center of each square has maximum distance from the vertices, and this distance is sqrt(2)/2.) Let q be such a Gaussian integer. Then since the distance between v and q is at most sqrt(2)/2, N(v-q) ≤ 1/2 (here we take the natural extension of N to all elements of K). Multiplying by N(w), we have N(z-wq) = N(wv-wq) = N(w)N(v-q) ≤ N(w)/2 < N(w). Then let r = z-wq. ☐
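
    (The proof is completely constructive. Here's a small Python sketch of it--the function names are mine--where a Gaussian integer is stored as a Python complex with integer parts:)

        def gauss_divmod(z, w):
            # divide z by w (w != 0): round z/w to the nearest lattice point q,
            # then r = z - w*q satisfies N(r) <= N(w)/2 < N(w)
            v = z / w
            q = complex(round(v.real), round(v.imag))
            return q, z - w*q

        def N(z):
            return round(z.real)**2 + round(z.imag)**2

        z, w = complex(27, 23), complex(8, 1)
        q, r = gauss_divmod(z, w)
        print(q, r, N(r), N(w))          # e.g. (4+2j) (-3+3j) 18 65
        assert z == w*q + r and N(r) < N(w)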

    Let's recall some algebraic terminology:
    -An integral domain (or simply a domain) is a commutative ring with identity and no zero divisors.
    -A principal ideal domain (or a PID) is a domain in which all ideals are principal.
    -Let D be a domain.
    --An element d of D is irreducible if whenever you write d = ab as a product of elements a, b of D, then either a or b is a unit (equivalently, if a is not a unit, then b is).
    --Two elements d and d' are associate to each other if there is a unit u such that d = ud' (equivalently, if d and d' divide each other).
    --An element p of D is prime if it is nonzero, not a unit, and whenever p divides a product ab of elements a, b of D, then p divides a or p divides b (equivalently, if p doesn't divide a, then p divides b).
    -A unique factorization domain (or a UFD) is a domain in which every element can be written uniquely as a product of irreducible elements, where uniqueness is in the sense that between any two factorizations there is a bijection between the terms of each factorization such that corresponding terms are associate to each other.

    And let's recall some facts:
    -Every PID is a UFD.
    -In a UFD, primes and irreducible elements are the same.

    So the Division Algorithm implies that the Gaussian integers are a special kind of domain called a Euclidean domain. I'd rather not spend time talking about such rings, since most rings of integers are not Euclidean. In fact, the property of being Euclidean is more stringent than that of being a PID:

    Proposition: Any Euclidean domain is a PID and hence a UFD.

    <u>Proof</u>: I'll just prove this for the Euclidean domain Z[i]. The general proof is virtually the same and really only requires the definition of Euclidean domain.

    So let I be an ideal of Z[i]. If I = (0), I is principal, so assume I ≠ (0). Choose an element w of I of minimal norm greater than 0, which exists because I ≠ (0). I claim that I = (w), the ideal generated by w. If not, then there is an element z in I that is not in (w). Dividing z by w, we have that there exist q and r with N(r) < N(w) and z = wq+r. wq is in I since w is in I, and so r = z-wq is in I. But r is nonzero (since z is not in (w)) and has smaller norm than w, contradicting that w has minimal positive norm. So I = (w). Thus every ideal of Z[i] is principal, and Z[i] is a PID. ☐
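
    (Since Z[i] is Euclidean, the familiar Euclidean algorithm computes gcds there. A small illustrative Python sketch, using the same nearest-lattice-point division as above; the example is my own:)

        def gauss_gcd(z, w):
            # Euclidean algorithm in Z[i]: repeatedly replace (z, w) by (w, r),
            # where r is the remainder from dividing z by w
            while w != 0:
                v = z / w
                q = complex(round(v.real), round(v.imag))
                z, w = w, z - w*q
            return z

        # 5 = (2+i)(2-i) and 3+i = (1+i)(2-i), so the gcd is an associate of 2-i
        print(gauss_gcd(complex(5, 0), complex(3, 1)))   # e.g. (-1-2j)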

    So now we know that the Gaussian integers have unique factorization, so let's figure out what the prime elements are. The first thing to notice is that if π is a prime element, then N(π) is a rational integer divisible by π. Thus to find all prime elements, it suffices to seek prime divisors of nonzero rational integers. Now if a Gaussian prime π divides a nonzero rational integer n, then by factoring n into rational primes and applying the definition of a prime element of the Gaussian integers, we see that π must divide some rational prime p. Thus N(π) divides N(p) = p<sup>2</sup>. Since π is not a unit, we must have N(π) = p or p<sup>2</sup>.

    Conversely, suppose that z is a Gaussian integer and N(z) = p is a rational prime. We'll show that z is irreducible and hence prime. Indeed, if z = wv, w and v Gaussian integers, then N(w)N(v) = N(z) = p. So N(w) = 1 or N(w) = p. If w is not a unit, then N(w) = p, and hence N(v) = 1. Thus v is a unit, and hence z is irreducible and prime.

    Now we will characterize all Gaussian primes by factoring rational primes p into Gaussian primes.

    Case I: Assume there exists π with N(π) = p. Then π is prime. Note that N(π*) = p so that π*, too, is prime. There are two possibilities: π and π* are associate, or they're not (which we'll call distinct).

    Let π = x+iy. If π and π* are associate, then we must have that π* = x-iy is equal to one of x+iy, -x-iy, -y+ix, y-ix. The first possibility implies y = 0, while the second implies x = 0, either of which implies that p is a rational square, a contradiction to p being a rational prime. The third possibility implies x = -y, while the fourth implies x = y; in either case, we have p = 2x<sup>2</sup>, and so we must have x = ±1 and N(π) = 2. In particular, note that 2 = (1+i)(1-i) is the product of associate primes. Otherwise, p = ππ* is the product of distinct primes.

    Case II: Assume there does not exist π with N(π) = p. p is divisible by some Gaussian prime π, and so N(π) = p<sup>2</sup>. But you'll show in Exercise 3 that this implies π and p are associate, so that π = ±p or ±ip. Thus p is a prime.

    We can reformulate these results in terms of ideal-theoretic language. In a ring R, the product of the ideals I and J is the ideal (Exercise 4):

    IJ = {a<sub>1</sub>b<sub>1</sub>+...+a<sub>n</sub>b<sub>n</sub> : n ≥ 1, a<sub>k</sub> in I, b<sub>k</sub> in J}

    If R is a PID, then I = (a), J = (b) for some a, b in R, and you can show that (a)(b) = (ab) as Exercise 5. Thus we have:

    IA: (2) = (1+i)<sup>2</sup> ramifies, being the square of a prime.
    IB: If N(π) = p for some π, then (p) = (π)(π*) splits into distinct primes.
    II: Otherwise, (p) remains inert, i.e. it's still prime.

    Here I use the word "prime" loosely to describe an ideal generated by a prime element, but indeed we have the general notion of a prime ideal. A proper ideal P is prime if IJ ⊂ P implies I ⊂ P or J ⊂ P (equivalently, I ⊄ P implies J ⊂ P). Then ideals generated by prime elements in a UFD are prime in this sense. Note that, in a PID, unique factorization implies that all nonzero ideals factor into a unique (up to order) product of prime ideals. We will find that all rings of integers of number fields satisfy this property.
    Reply With Quote  
     

  10. #9  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    One natural thing to ask is what we can say about a prime given its factorization in the Gaussian integers. So suppose p is a norm, i.e. p = x<sup>2</sup>+y<sup>2</sup>. If we assume x, y nonnegative, then we have 0 < x, y < p<sup>1/2</sup>, so in particular p divides neither x nor y. So it's natural to look at this modulo p:

    x<sup>2</sup>+y<sup>2</sup> = 0 (mod p)
    x<sup>2</sup> = -y<sup>2</sup> (mod p)
    (x/y)<sup>2</sup> = -1 (mod p)

    So x/y is a square root of -1 modulo p. This is interesting. This means that if a prime splits in the Gaussian integers, then the polynomial X<sup>2</sup>+1 splits modulo p as (X-x/y)(X+x/y).

    Conversely, suppose -1 is a square modulo p. Then there is 0 < k < p such that k<sup>2</sup> = -1 (mod p), i.e. p divides k<sup>2</sup>+1. We may assume that 0 < k ≤ (p-1)/2 by replacing k by -k modulo p. Then note that 1 < k<sup>2</sup>+1 < p<sup>2</sup>. So p divides k<sup>2</sup>+1 exactly (i.e., no higher power of p divides k<sup>2</sup>+1). Any prime π dividing p must divide k<sup>2</sup>+1, and so either π or π* divides k+i. But then N(π) divides N(k+i) = k<sup>2</sup>+1, and so N(π) = p (as the other option is p<sup>2</sup>, which does not divide k<sup>2</sup>+1).
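
    (The argument in the last paragraph is effectively constructive. Here's an illustrative Python sketch--the names and the choice of examples are mine--that finds k with k<sup>2</sup> = -1 (mod p) and then extracts a prime π of norm p as a gcd, writing p = x<sup>2</sup>+y<sup>2</sup>:)

        def gauss_gcd(z, w):
            # Euclidean algorithm in Z[i], as in the earlier sketch
            # (floats are fine at this size: all intermediate values are small integers)
            while w != 0:
                v = z / w
                q = complex(round(v.real), round(v.imag))
                z, w = w, z - w*q
            return z

        def two_squares(p):
            # p a prime with p = 1 (mod 4): k = a**((p-1)/4) is a square root of -1
            # mod p whenever a is a non-residue; then pi = gcd(p, k+i) has norm p
            assert p % 4 == 1
            k = next(pow(a, (p - 1) // 4, p) for a in range(2, p)
                     if pow(a, (p - 1) // 2, p) == p - 1)
            pi = gauss_gcd(complex(p, 0), complex(k, 1))
            return abs(round(pi.real)), abs(round(pi.imag))

        print(two_squares(13))   # e.g. (3, 2): 13 = 3**2 + 2**2
        print(two_squares(29))   # e.g. (5, 2): 29 = 5**2 + 2**2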

    Note that this implies that if a prime p is inert in the Gaussian integers, then the polynomial X<sup>2</sup>+1 is irreducible modulo p!

    Recall that 2 ramifies as the square of a Gaussian prime. Note that X<sup>2</sup>+1 = (X+1)<sup>2</sup> modulo 2--ramification! This completes a demonstration of the

    Fact: The factorization of a rational prime p in the Gaussian integers mimics the factorization of the polynomial X<sup>2</sup>+1 modulo p.
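
    (A quick sympy check of the Fact for a few small primes--the library and the prime list are just my illustration:)

        from sympy import Symbol, Poly

        x = Symbol('x')

        # 2 ramifies, 5 and 13 split, 3, 7 and 11 are inert, and the
        # factorization of X**2 + 1 mod p matches in each case
        for p in [2, 3, 5, 7, 11, 13]:
            print(p, Poly(x**2 + 1, x, modulus=p).factor_list())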

    So when is -1 a square modulo p? Let's investigate the group structure of the group of units modulo p. This is a cyclic group of order p-1. Now the group of squares has order (p-1)/2, and so -1 being a square implies that 2 must divide (p-1)/2, i.e. 4 divides p-1, i.e. p = 1 (mod 4). Conversely, if p = 1 (mod 4), then (p-1)/2 is even, and hence the group of squares must have an element of order 2. -1 is the unique element of order 2 in the full group, so it must be the element of order 2 in the group of squares. This proves the

    Lemma: Let p be an odd prime. Then -1 is a square modulo p iff p = 1 (mod 4).

    Our inquiries culminate in the following rather succinct

    Proposition: Let p be a rational prime. Then:
    a. if p = 2, then 2 ramifies, 2 = -i(1+i)<sup>2</sup>, 1+i a Gaussian prime;
    b. if p = 1 (mod 4), then p splits, p = ππ* for some Gaussian prime π; or
    c. if p = 3 (mod 4), then p is inert, p is a Gaussian prime.

    This motivates a third question:

    3. Can we systematically determine the factorization of a rational prime in the ring of integers of a number field?

    Yeah, this is a poorly defined question... but there is a detailed web of conjectures which describes what we believe the truth to be. We have solved some cases--for example, we can answer this question for abelian extensions of the rationals, i.e. extensions whose Galois groups are abelian groups. We almost certainly won't get to that result, although we will probably describe the case of quadratic extensions of the rationals in its entirety.
    Reply With Quote  
     

  11. #10  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,620
    serpicojr: Let me first thank you for starting this thread. It is a subject I avoided like the plague (whenever I could), but if you have a "passion for it" you should be able to awaken a glimmer in me (and others, I dare say).

    And on a quick skim, I think, just think mind, I may have come partly on board.

    After a trip away, I spent the last 15 min going from start to end; I will have some questions (actually, they are mainly to do with your notation, but some others too).

    Plus I had an interesting thought (inspired by my Birkhoff & Mac Lane A survey of Modern Algebra) that connects your thread to mine.

    But for now, it is Friday night here (UK). Traditionally we go get drunk and then beat our wives.

    And why not?

    Stay tuned for some really, and I mean really, dumb questions
    Reply With Quote  
     

  12. #11  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,620
    Hi serpicojr! Here are a few dumb questions for starters. I dare say you will be appalled by their dumbness, but there you go.

    So, we have that, say, Q[x] is the ring of polynomials in the indeterminate x with rational coefficients.

    You then use the form Q(x). Am I to take this as the field of rational expressions in x?

    Now we have that Q(α) is an extension of the field Q, where α is some number. The notation I have here is rather different, so let me just check this out;

    I have that, if it is the case that the field K is a subfield of the field L, one says that L:K is a field extension of K. Further, if it is the case that L = Q(α), then a) L:K is a simple extension of K and b) if α is an algebraic number, it is a simple algebraic extension, otherwise it is a simple transcendental extension. Is this right?

    So, at some point you make the claim that Z[i] is the ring of integers. I don't follow this; Z[i], unless I have seriously misunderstood, is the polynomial ring in the indeterminate i with integer coefficients.

    First, i is not an indeterminate, but even ignoring this, how can I recover a bog-standard integer from this? Or are you including non-real integers here?

    Anyway, let's have some fun, which you did allude to. The theory of linear algebra offers us this fundamental theorem; any vector space V<sub>n</sub> of dimension n over the field K is isomorphic to K<sup>n</sup>.

    Then provided only that dim(L:K) = dim(K), (um..do I need this assertion? Not sure) one may treat L:K as a vector space over K. But this is merely to define the forgetful functor U: Fld → K-Vec, that is the mapping from the category of fields to the category of vector spaces over K on the condition that this mapping forgets about the multiplicative unit in the field axioms.

    What fun!
    Reply With Quote  
     

  13. #12  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    I felt bad for letting this post fall by the wayside, but I'm glad that it actually gave you the opportunity to read and digest my posts! Let's get to work on these questions.

    Quote Originally Posted by Guitarist
    I dare say you will be appalled by their dumbness, but there you go.
    Rule number 1 of the math forum: there are no dumb questions!

    So, we have that,say Q[x] is the ring of polynomials in the indeterminate x with rational coefficients.

    You then use the form Q(x). Am I to take this as the field of rational expressions in x?
    Precisely.

    I have that, if it is the case that the field K is a subfield of the field L, one says that L:K is field extension of K.
    I'll be using the notation L|K or L/K; I'll use [L:K] to denote the degree of the extension, i.e. [L:K] = dim<sub>K</sub>(L), thinking of L as a K-vector space.

    Further, if it is the case that L = Q(α), then a) L:K is a simple extension of K and b) if α is an algebraic number, it is a simple algebraic extension, otherwise it is a simple transcendental extension. Is this right?
    Indeed. One wonderful fact is that every finite degree algebraic extension of Q is indeed simple (this is the primitive element theorem, I believe), although of course there are nonsimple transcendental extensions, e.g. Q(X,Y), X, Y indeterminates.

    So, at some point you make the claim that Z[i] is the ring of integers.
    It is the ring of algebraic integers in the field Q(i). This is defined to be all elements of Q(i) which satisfy a monic polynomial with integer coefficients. The point here is that an element a+bi of Q(i), a and b rationals, is an algebraic integer iff a and b are rational integers.

    Then provided only that dim(L:K) = dim(K), (um..do I need this assertion? Not sure) one may treat L:K as a vector space over K.
    Indeed, the definition of the degree of an extension is the dimension of L as a vector space over K. I think the only nontrivial thing to prove is that, if L is the root field of an irreducible polynomial f(X) in K[X], i.e. L = K(a) for some root a of f(X), or L = K[X]/(f(X)), then indeed [L:K] = deg(f(X)).

    But this is merely to define the forgetful functor U: FldK-Vec, that is the mapping from the category of fields to the category of vector spaces over K on the condition that this mapping forgets about the multiplicative unit in the field axioms.
    An excellent connection between our two posts!
    Reply With Quote  
     

  14. #13  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,620
    Ok, here's another elementary question.

    Birkhoff & Mac Lane are careful to distinguish between polynomial forms and polynomial functions. Specifically they state that any polynomial form

    a<sub>0</sub> + a<sub>1</sub>x + ....a<sub>n</sub>x<sup>n</sup>

    uniquely determines a polynomial function, and each polynomial function is determined by at least one such form.

    This, as they remark, need not imply a one-to-one and onto map (a bijection) between such forms and such functions, i.e. they are not isomorphic.

    The example they give is from the quotient ring Z<sub>3</sub> = Z/3Z, i.e. the ring of integers mod 3: one may have that the forms f(x) = x<sup>3</sup> - x and g(x) = 0 determine the same function, i.e. the function that is identically zero.

    They then make the astonishing remark, (noting that 3 is prime): "over any Z<sub>p</sub> equality of functions has an effectively different meaning than it does for forms" I like this a lot........

    Just out of interest, do you agree with any of this (leaving aside all appeals to authority)?
    Reply With Quote  
     

  15. #14  
    Forum Ph.D.
    Join Date
    Apr 2008
    Posts
    956
    If you really want to be strict about things …

    Polynomials over a ring R are actually sequences of elements of R. The polynomial

    a<sub>0</sub> + a<sub>1</sub>x + a<sub>2</sub>x<sup>2</sup> + … + a<sub>n</sub>x<sup>n</sup>

    is defined as the sequence (a<sub>0</sub>, a<sub>1</sub>, a<sub>2</sub>, …, a<sub>n</sub>, 0, 0, …) – i.e. (a<sub>n</sub>)<sub>n=0</sub><sup>∞</sup> where a<sub>k</sub> = 0 for k > n.

    On the other hand,

    f(x) = a<sub>0</sub> + a<sub>1</sub>x + a<sub>2</sub>x<sup>2</sup> + … + a<sub>n</sub>x<sup>n</sup>

    is a function f : R → R.

    Of course they are two totally different things. :?
    Reply With Quote  
     

  16. #15  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Quote Originally Posted by Guitarist
    Just out of interest, do you agree with any of this (leaving aside all appeals to authority)?
    Indeed I do agree with this difference, and I would prefer we think of polynomials as forms (or sequences, as Jane puts it) as opposed to functions. This doesn't matter over infinite fields--a polynomial defines the constant function 0 iff it's the 0 polynomial, as it must have infinitely many roots and, for nonzero polynomials, the number of roots is bounded by the degree. But we'll be seeing that finite fields and their extensions play a very important role in our investigations. If you think of polynomials simply as functions and not as indeterminate algebraic expressions, then you can't talk about roots of your function which lie outside the field you're looking at. A function may have many extensions to an extension field. A polynomial only has one.

    (Note: an abstract algebraic explanation of the difference of the two follows by considering quotients of the ring of polynomials of a finite field. For any q = p<sup>n</sup>, p prime, n ≥ 1, we have that the ideal (X<sup>q</sup>-X) in F<sub>q</sub>[X] is precisely the ideal of polynomials which define the constant function 0. Thus we have that polynomial functions over F<sub>q</sub> are a quotient of polynomial forms over F<sub>q</sub>, i.e. the ring F<sub>q</sub>[X]/(X<sup>q</sup>-X).)

    (Fun stuff: the above discussion is an ingredient in the proof that the polynomial functions over F<sub>q</sub> are precisely the functions p: F<sub>q</sub> -> F<sub>q</sub>. This follows from the Fundamental Theorem of Finitely Generated Modules over a PID.)
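
    (To make the form/function distinction concrete, here's a tiny Python check--the snippet and the choice p = 3 are just my illustration:)

        p = 3

        # x**3 - x and the zero polynomial are different forms over F_3,
        # but they define the same function on F_3 = {0, 1, 2}:
        print([(a**3 - a) % p for a in range(p)])   # [0, 0, 0]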

    I'll begin discussing the general foundations of algebraic number theory in a short while. I'm trying to introduce concepts in an organic fashion, but I've hit a small snag that may compromise this plan.

    ++++++++++++++++++++

    PS: Oh, and let me know what your background with finite fields is. I just need you to know that, for any prime power q = p<sup>n</sup>, there is a unique (up to isomorphism) finite field of order q, which I denote F<sub>q</sub>. You may obtain this as the splitting field over Z/pZ of the polynomial X<sup>q</sup>-X. For any m ≥ 1, F<sub>q<sup>m</sup></sub> is a simple Galois extension of F<sub>q</sub>, and the Galois group is isomorphic to Z/mZ, with the map from the latter to the former given by sending k (mod m) to the automorphism of F<sub>q<sup>m</sup></sub> given by sending an element a to the element a<sup>q<sup>k</sup></sup>.
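
    (A quick illustrative check of the splitting field description, using sympy purely as my tool of choice: over F<sub>3</sub>, X<sup>9</sup>-X factors into every monic irreducible of degree 1 or 2, and adjoining their roots gives F<sub>9</sub>.)

        from sympy import Symbol, Poly

        x = Symbol('x')

        # over F_3, x**9 - x is the product of all monic irreducibles of
        # degree dividing 2; its splitting field is F_9
        f = Poly(x**9 - x, x, modulus=3)
        for g, mult in f.factor_list()[1]:
            print(g.as_expr(), mult)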
    Reply With Quote  
     

  17. #16  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,620
    Jane and serpicojr thanks for that, it was very helpful.
    Quote Originally Posted by serpicojr
    PS: Oh, and let me know what your background with finite fields is. .
    Then I will tell you, though I am by no means ignoring the rest of your post.

    I suppose that K is a field. I will call the subfield K' the prime subfield of K iff K' is the intersection of all subfields of K (and hence the smallest subfield of K).

    It seems that it is relatively easy to show that any prime subfield F' is isomorphic either to the rational field Q or to the field Z/pZ for some prime p.

    In the former case one says that the "parent" field is of characteristic 0, in the latter, of characteristic p.

    So a finite field F is such that F has characteristic p > 0, and the number of elements in F is p<sup>n</sup>, where n is the degree of F over F'

    Then follows something about splitting fields which I need to read up on (I have texts!), which seems to lead to the Galois Field.

    Gimme time, which I am short of just now
    Reply With Quote  
     

  18. #17  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Quote Originally Posted by Guitarist
    Then follows something about splitting fields which I need to read up on (I have texts!).
    Splitting fields are just fields which are obtained by adjoining every root of a polynomial or set of polynomials to a field.
    Reply With Quote  
     

  19. #18  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,620
    Quote Originally Posted by serpicojr
    Splitting fields are just fields which are obtained by adjoining every root of a polynomial or set of polynomials to a field.
    Ha! You say "just" - it took me a while to see this. I quote from one of my texts:

    "{Suppose f is a polynomial over K} We construct a splitting field by adjoining to K {those} elements which are to be thought of as the zeros of f ......... so we split K into irreducible factors and work on these separately".

    Being dim, it took me a while to realize that the "zeros" of a polynomial are precisely its roots. For, if g is a set function, and x its argument, one would usually say that the zeros of g are precisely those x for which g(x) = 0.

    Then, given a "polynomial expression" x<sup>2</sup> - 1 = 0, the zeros/roots of this "polynomial" f are of course ±1.

    May I just add a personal note. You should all think of me as an amateur mathematician, so you should expect huge gaps in my knowledge. This is one of them, well, number theory generally.

    But, as always in life, if one meets someone who is a) passionate about a subject to which one was previously indifferent and b) able to transmit that passion, often (not always!) something rubs off.

    So, I now have a question; I know what an ideal is, of course I do. I now quote again: "For any a in {the commutative ring} R, the set of all multiples ra .. is an ideal, since ra ± sa = (r ± s)a, and s(ra) = (sr)r, when r, s in R.....Such an ideal is called a principal ideal"

    But by my understanding of an ideal, I would expect a in I, r in R, implies ra in I to define I as an ideal. In other words, surely a should be specified as being in the subring I.

    Typo? Unlikely - I am working from a 3rd edition.

    Or is it merely to state the bleeding obvious? Er .. like, every set is a subset of itself, every group is a normal subgroup of itself etc?

    P.S. On brief reflection I should have given this earlier (Doh!), but this last para. is right, I am sure. Sorry to trouble you guys......
    Reply With Quote  
     

  20. #19  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Quote Originally Posted by Guitarist
    Quote Originally Posted by serpicojr
    Splitting fields are just fields which are obtained by adjoining every root of a polynomial or set of polynomials to a field.
    Ha! You say "just" - it took me while to see this.
    Isn't what I said the definition of a splitting field?

    So, I now have a question; I know what an ideal is, of course I do. I now quote again: "For any a in {the commutative ring} R, the set of all multiples ra .. is an ideal, since ra ± sa = (r ± s)a, and s(ra) = (sr)r, when r, s in R.....Such an ideal is called a principal ideal"
    That should read s(ra) = (sr)a.

    But by my understanding of an ideal, I would expect a in I, r in R, implies ra in I to define I as an ideal. In other words, surely a should be specified as being in the subring I.
    And this is what's shown above. In the closure of addition axiom, ra and sa are arbitrary elements of the ideal (a) (or aR or whatever you want to call it). In the multiplication axiom, ra is an arbitrary element of the ideal (a), and s is an element of R.
    Reply With Quote  
     

  21. #20  
    Forum Ph.D.
    Join Date
    Apr 2008
    Posts
    956
    Quote Originally Posted by Guitarist
    So, I now have a question; I know what an ideal is, of course I do. I now quote again: "For any a in {the commutative ring} R, the set of all multiples ra .. is an ideal, since ra ± sa = (r ± s)a, and s(ra) = (sr)r, when r, s in R.....Such an ideal is called a principal ideal"

    But by my understanding of an ideal, I would expect a in I, r in R, implies ra in I to define I as an ideal. In other words, surely a should be specified as being in the subring I.
    No. The definition of an ideal is this. A subset I of a commutative ring R is an ideal of R iff I is an additive subgroup of R and for every a ∊ I, r ∊ R, ra ∊ I. From this definition, it’s clear that for any element a ∊ I, the set (a) = {ra : r ∊ R} is a subset of I. (For example, 2Z is an ideal of Z, 8 ∊ 2Z and (8) = 8Z ⊆ 2Z.) If (a) = I, then I is called a principal ideal, generated by a. (Example: 2Z = (2) is a principal ideal generated by 2.)

    An integral domain in which every ideal is principal is called a principal-ideal domain (PID). Z is a PID because every ideal of Z is of the form nZ = (n) for some integer n.
    Reply With Quote  
     

  22. #21  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Let's keep it rockin' nonstop.

    ------------------------------------

    So we'd like to extend the notion of "factorization" to a number field K, i.e. a finite (and hence algebraic) extension of the rationals. We've seen so far that a satisfactory notion of factorization requires basically two things: we need a set of "prime objects" into which we can factor elements, and we need to understand the set of elements which have trivial factorization, i.e. the units.

    We are forced to consider a subring R of K in order to define these notions. Indeed, K has no nontrivial ideals, and every nonzero element of K is a unit, and so the notion of factorization which comes from K itself is trivial. In the cases we have seen so far, we see that the usual integers and the Gaussian integers are subrings which give us good notions of prime and unit.

    Indeed, in the cases we've seen so far, there is a good notion of "prime element", but we'll see that this fails in general. However, we will develop the arithmetic of prime ideals in rings called Dedekind domains, which R will be, and we'll see that this provides us with an excellent means of generalizing factorization. Indeed, we will be able to factor all ideals into products of prime ideals. In particular, we can factor an element by looking at the factorization of the ideal it generates.

    The notion of unit will be the same as before--namely, the units will be the elements of R whose inverse also lies in R. So, for example, ±1 are always units. It's important for us to understand units: two elements which generate the same ideal in R differ by a unit, i.e. their quotient is a unit. Thus their prime factorizations will be the same. Units then give us a way of distinguishing elements which have the same factorization.

    We want a few more properties to hold. First, we want to be able to extend a notion of factorization in R to one in K. We'll do so by extending factorization multiplicatively, and so we need R to have field of fractions equal to K. In other words, every element of K should be expressible as a ratio r/s, where r and s are elements of R.

    Next, we want R to respect the factorization of elements in Z. All this really means is that nonunits in Z should remain nonunits in R. So this immediately implies that we require that Q intersect R should be equal to Z. Furthermore, let's assume for a moment that K is Galois over the rationals, so that any irreducible polynomial over the rationals which has a root in K must factor completely over K. If r is an element of R satisfying some minimal polynomial f(X), then all other roots of f(X) should also be in R, as the roots of f(X) are indistinguishable from one another in terms of their algebraic properties with respect to Q. Assuming f(X) to be monic, the coefficients of f(X) are symmetric polynomials in the roots and hence must lie in R. But they're rational, and so they must be in Z. Thus we want to assume that all elements of R are algebraic integers, i.e. we want all elements of R to satisfy a monic polynomial with integer coefficients.

    This motivates the following definitions: let K be any field, let R be a subring of K, let L be a finite field extension of K, and let S be a subring of L. We say that an element x of L is integral over R if there is a monic polynomial f(X) in R[X] such that f(x) = 0. We say that S is integral over R if every element of S is integral over R. Finally, we say that R is integrally closed in K if every element of K that is integral over R is actually in R.
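
    (For R = Z inside K = Q, being integral comes down to the monic minimal polynomial over Q having integer coefficients--the same Gauss's lemma argument we used for Z[i]. A quick sympy illustration, with my own choice of examples:)

        from sympy import sqrt, Rational, Symbol, minimal_polynomial

        x = Symbol('x')

        # sqrt(2) + sqrt(3) is integral over Z: monic minimal polynomial with integer coefficients
        print(minimal_polynomial(sqrt(2) + sqrt(3), x))          # x**4 - 10*x**2 + 1

        # 1/2 + sqrt(2) is not integral over Z: a non-integer coefficient appears
        print(minimal_polynomial(Rational(1, 2) + sqrt(2), x))   # x**2 - x - 7/4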

    Next time we prove the

    Theorem: Let K, R, and L be as above, and let x be an element of L. Then the following are equivalent:

    1. x is integral over R;

    2. x preserves (i.e. xM ⊆ M) some nonzero finitely generated R-module M contained in L.
    Reply With Quote  
     

  23. #22  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,620
    First Jane I thank you; I think my text was a little ambiguous, or rather it probably wasn't written for dim-wits. All is now as I would have expected.

    Now this. serpicojr, all is well enough to here, though it will require at least a second reading. But now a question

    Quote Originally Posted by serpicojr
    This motivates the following definitions: let K be any field, let R be a subring of K, let L be a finite field extension of K, and let S be a subring of L. We say that an element x of L is integral over R if there is a monic polynomial f(X) in R[X] such that f(x) = 0.
    So, if f(X) is a monic poly, in R[X], and x in L, wtf is the difference between f(X) and f(x).

    When I see R[X], I see a field of polys with indeterminate X and real coefficients. You seem to see something different

    Or to put it another way; if x in L, what is X, where does it live? It would be rather unlike you to be sloppy with notation, but are you suggesting that X and x are quite unrelated?

    Anyway, yeah keep it coming, but remember, I am a bit slow, so small steps (for me at least), please
    Reply With Quote  
     

  24. #23  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    I'm using X as an indeterminate, x as a specific element of a field. So X is transcendental over whatever I'm adjoining it to. R[X] is the ring of polynomials with coefficients in R, where R is just a ring. I'll let R be the reals. f(X) is an element of this ring. If x is an element of L, f(x) is the polynomial f(X) evaluated at x. If f(X) is a monic polynomial, then it cannot be equal to 0, whereas since x is algebraic over K, f(x) = 0 makes sense even if f(X) ≠ 0--this gives us an R-linear relationship between the powers of x.
    Reply With Quote  
     

  25. #24  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,620
    serpicojr: I thank you for that. Let me just say I received from Amazon this morning Ian Stewart's little book Galois Theory, which looks excellent (I am a huge fan of his - he is a very articulate contributor to science discussions on the BBC).

    I am devouring this book as fast as I can, and I think I may be almost up to speed by this time tomorrow - we'll see.

    Don't go away, and don't jump ahead (unless of course others want you to),

    Despite myself, I am starting to find this subject rather fun.....
    Reply With Quote  
     

  26. #25  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Let me backtrack the slightest bit. I can prove the above theorem in general, but this is not necessary because the rings we're dealing with will always satisfy one extra nice property: all of our rings will be Noetherian. Recall that a (commutative) ring (with identity) is Noetherian iff any of the following criteria hold:

    -R satisfies the ascending chain condition on ideals: for any chain

    I<sub>1</sub> ⊆ I<sub>2</sub> ⊆ I<sub>3</sub> ⊆ ...

    there exists an integer N so that I<sub>n</sub> = I<sub>m</sub> for all m, n ≥ N. In other words, the chain stabilizes, i.e. it's eventually constant.

    -Every nonempty set of ideals of R contains a maximal element with respect to set inclusion.

    -Every ideal of R is finitely generated.

    Exercise: Show directly that Z satisfies the ascending chain condition and is hence Noetherian.

    Exercise: Show that the ascending chain condition on ideals implies all ideals are finitely generated. (Hint: given an ideal I, take an increasing sequence of finitely generated subideals of I.)

    If R is Noetherian, then every finitely generated R-module M is also Noetherian, where a module is Noetherian if any one of the above conditions holds with "submodule" replacing the word "ideal". So, for example, if R is Noetherian and I is an ideal of R, then R/I is Noetherian as an R-module. This implies that R/I is Noetherian as a ring, as any ideal of R/I is an R-module.

    Exercise: Fill in the details in the last two sentences.

    One of the most important results in Noetherian rings is the Hilbert Basis Theorem: if R is a Noetherian ring, then so is R[X], the ring of polynomials in X with coefficients in R. (Note: R[X] is not a Noetherian R-module, as it is not finitely generated as an R-module.)

    If you haven't seen this before, this is a whole lot to digest. If you'd like me to go over this material in more detail, let me know. In any case, we now have an easy proof of the theorem from last time.

    Theorem: Let K, R, and L be as above, and let x be an element of L. Then the following are equivalent:

    1. x is integral over R;

    2. x preserves (i.e. xM ⊆ M) some nonzero finitely generated R-module M contained in L.

    <u>Proof</u>: (1)=>(2) Suppose x is integral over R. Then R[x] is a finitely generated R-module. Indeed, if x satisfies the polynomial

    f(X) = X<sup>n</sup>+a<sub>n-1</sub>X<sup>n-1</sup>+...+a<sub>1</sub>X+a<sub>0</sub>, with the a<sub>k</sub> in R,

    then we have

    x<sup>n</sup> = -(a<sub>n-1</sub>x<sup>n-1</sup>+...+a<sub>1</sub>x+a<sub>0</sub>).

    So x<sup>n</sup> is an R-linear combination of x<sup>k</sup>, 0 ≤ k ≤ n-1. An easy inductive argument shows that then, indeed, every x<sup>m</sup>, m ≥ n, is an R-linear combination of x<sup>k</sup>, 0 ≤ k ≤ n-1. Thus the x<sup>k</sup>, 0 ≤ k ≤ n-1, generate R[x].

    (2)=>(1) Conversely, suppose x preserves a finitely generated R-module M contained in L. We may assume that 1 is in M by replacing M by m<sup>-1</sup>M for some nonzero element m of M. Then it's clear that R[x] is a submodule of M. But then R[x] is finitely generated. If R[x] is generated by polynomials in x of degree less than or equal to d, then R[x] is generated by the monomials x<sup>k</sup>, 0 ≤ k ≤ d. But then x<sup>d+1</sup> is an R-linear combination of the x<sup>k</sup>, 0 ≤ k ≤ d. Thus

    x<sup>d+1</sup> = c<sub>d</sub>x<sup>d</sup>+...+c<sub>1</sub>x+c<sub>0</sub> for some c<sub>k</sub> in R.

    Letting

    g(X) = X<sup>d+1</sup>-c<sub>d</sub>X<sup>d</sup>-...-c<sub>1</sub>X-c<sub>0</sub>,

    we have that x satisfies the monic polynomial g(X) in R[X] and is hence integral over R. ☐
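
    (This isn't the argument above, but here's the same idea in its classical matrix form, as a quick illustrative sympy sketch of direction (2)=>(1) for a concrete example of my choosing: take x = 2<sup>1/2</sup> and the Z-module M generated by 1 and 2<sup>1/2</sup>. Multiplication by x preserves M and acts by an integer matrix, and the characteristic polynomial of that matrix is a monic integer polynomial killing x.)

        from sympy import Matrix, Symbol, sqrt

        X = Symbol('X')

        # x = sqrt(2) sends the generators (1, sqrt(2)) of M to (sqrt(2), 2),
        # so multiplication by x acts on M by the integer matrix A
        A = Matrix([[0, 2],
                    [1, 0]])

        # det(X*I - A) is monic with coefficients in Z, and x is a root of it
        g = A.charpoly(X).as_expr()
        print(g, g.subs(X, sqrt(2)))   # X**2 - 2, 0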
    Reply With Quote  
     

  27. #26  
    Moderator Moderator
    Join Date
    Jun 2005
    Posts
    1,620
    Quote Originally Posted by serpicojr
    Exercise: Show directly that Z satisfies the ascending chain condition and is hence Noetherian.
    I am not sure if this is quite what you want, but I'll risk making a monkey of myself anyway.

    I notice first that the ideals in Z are of the form nZ for any n in Z. This follows the definition;

    the subring nZ is an ideal iff rm is in nZ for all r in Z and m in nZ.

    I notice that, again by definition, n divides m for all m in nZ. But this simply means that mZ ⊆ nZ for any m in nZ. Likewise, for any k in mZ, k is in nZ also.

    So as the chain nZ ⊆ mZ ⊆ ... ascends, provided only that all the intermediate ideals are of this form, the chain stabilizes.
    Reply With Quote  
     

  28. #27  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Guitarist: First, I am going to steal the convention of putting people's names in bold when addressing them--hope you don't mind! Second, to show that the ascending chain condition (ACC) holds, you have to show that any ascending chain of ideals stabilizes. So for Z, you have to show that any chain n<sub>1</sub>Z ⊆ n<sub>2</sub>Z ⊆ n<sub>3</sub>Z ⊆ ... stabilizes. You make the correct observation (both in the sense that it's true and that it's the right idea for the proof) that mZ ⊆ nZ is the same as n | m, so a chain of ideals n<sub>1</sub>Z ⊆ n<sub>2</sub>Z ⊆ n<sub>3</sub>Z ⊆ ... implies ... | n<sub>3</sub> | n<sub>2</sub> | n<sub>1</sub>.
    Reply With Quote  
     

  29. #28  
    Forum Ph.D.
    Join Date
    Apr 2008
    Posts
    956
    Quote Originally Posted by Guitarist
    I notice that, again by definition, n divides m for all m in nZ. But this simply means that mZ ⊆ nZ for any m in nZ.
    That’s right, but I think the converse is more relevant here:

    So any chain n<sub>1</sub>Z ⊆ n<sub>2</sub>Z ⊆ n<sub>3</sub>Z ⊆ ... must be such that n<sub>2</sub> | n<sub>1</sub>, n<sub>3</sub> | n<sub>2</sub>, etc. Hence it’s clear that the chain will stabilize at pZ for some prime divisor p of n<sub>1</sub>.

    serpicojr didn’t say that the ideals in a chain have to be proper, but I think they have to be proper – otherwise the Noetherian condition would be totally trivial. :?
    Reply With Quote  
     

  30. #29  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    JaneBennet: First, what if the chain does not contain an ideal generated by a prime? Also, if the condition holds in the trivial case, then why exclude it from the definition?
    Reply With Quote  
     

  31. #30  
    Forum Ph.D.
    Join Date
    Apr 2008
    Posts
    956
    Well, isn’t it the case that in any ring R, R itself is an ideal? In that case, wouldn’t we have that (unless we only consider proper ideals) any chain in R will stabilize (i.e. at R)? Which would mean that any (commutative) ring (with unity) satisfies the ACC?

    Either that, or I must have misinterpreted your post.
    Reply With Quote  
     

  32. #31  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Quote Originally Posted by JaneBennet
    Well, isn’t it the case that in any ring R, R itself is an ideal?
    True.

    Quote Originally Posted by JaneBennet
    In that case, wouldn’t we have that (unless we only consider proper ideals) any chain in R will stabilize (i.e. at R)?

    Only if R is in the chain.

    Which would mean that any (commutative) ring (with unity) satisfies the ACC?
    Nope--Ms. Noether wouldn't be so famous if the definition most famously associated with her were trivial. Consider, for example, the following: let S be an infinite set, let R be a ring (always commutative w/ identity), and let R[S] be the ring of polynomials with coefficients in R and indeterminates in S. Let x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>, ... be a sequence of distinct elements of S. Then the chain of ideals

    (x<sub>1</sub>) ⊆ (x<sub>1</sub>, x<sub>2</sub>) ⊆ (x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>) ⊆ ...

    does not stabilize--every containment is a proper containment.

    I just realized what your misunderstanding is. Reread the definition of the ACC, and ignore the "stabilize" aspect of the definition. Instead, focus on the original definition I give, which says that, for some N, all ideals I<sub>n</sub> for n ≥ N are the same.
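
    (To see concretely that each containment in that chain is proper, here's a small sympy sketch with the first few variables--just my illustration: every element of (x<sub>1</sub>,...,x<sub>n</sub>) vanishes when we set x<sub>1</sub> = ... = x<sub>n</sub> = 0, but x<sub>n+1</sub> doesn't, so it can't lie in that ideal.)

        from sympy import symbols

        gens = symbols('x1 x2 x3 x4')

        for n in range(1, 4):
            # every element of (x1,...,xn) is a sum f1*x1 + ... + fn*xn, hence
            # vanishes under x1 = ... = xn = 0; x_{n+1} does not, so the
            # containment of (x1,...,xn) in (x1,...,x_{n+1}) is proper
            image = gens[n].subs({g: 0 for g in gens[:n]})
            print(n, image)   # prints x2, x3, x4 -- never 0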
    Reply With Quote  
     

  33. #32  
    Forum Ph.D.
    Join Date
    Apr 2008
    Posts
    956
    Now I know where I went wrong. I misinterpreted "for all chains" as "there exists a chain".

    I get it now. Thanks!
    Reply With Quote  
     

  34. #33  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    So I feel like I may have lost you guys at the Noetherian rings. I'd be happy to do a little mini-lesson on Noetherian rings, as they'll be playing an important role in our development of rings of integers.
    Reply With Quote  
     

  35. #34  
    Forum Ph.D.
    Join Date
    Apr 2008
    Posts
    956
    Quote Originally Posted by serpicojr
    I'd be happy to do a little mini-lesson on Noetherian rings, as they'll be playing an important role in our development of rings of integers.
    That would be great! :-D I have to admit that the only thing I still know about Emmy Noether is that she was a woman.
    Reply With Quote  
     

  36. #35  
    Suspended
    Join Date
    Apr 2008
    Posts
    2,176
    Quote Originally Posted by serpicojr
    So I feel like I may have lost you guys at the Noetherian rings. I'd be happy to do a little mini-lesson on Noetherian rings, as they'll be playing an important role in our development of rings of integers.
    You lost me at (i) equals integer.

    I need a key or a legend or something to figure it out. I use so many different symbols for so many different things in real life that I do not see integer when I see (i). Nor do I recognize any of the other symbols for that matter.

    Sincerely,


    William McCormick
    Reply With Quote  
     

  37. #36  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    Sorry baby, I'm assuming you have a lot of abstract algebra under your belt, which you definitely don't, and it'd take you a while to get up to speed.
    Reply With Quote  
     

  38. #37  
    Suspended
    Join Date
    Apr 2008
    Posts
    2,176
    Quote Originally Posted by serpicojr
    Sorry baby, I'm assuming you have a lot of abstract algebra under your belt, which you definitely don't, and it'd take you a while to get up to speed.

    I actually pick things up rather quickly. However I need to understand what the codes or terminology means.
    I work across twenty different fields. That is why you will see I often include in parentheses the whole term or an explanation. Because without the explanation or definition, only a few will understand rather than many.

    I thought that with your understanding of math, it would be easy for you to include some key or legend that could be accessed by anyone at any level.



    Sincerely,


    William McCormick
    Reply With Quote  
     

  39. #38  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    I don't take that diss lightly, but I would be happy to oblige you. Why don't you start by reading the excellent thread on set theory that Guitarist has been running?
    Reply With Quote  
     

  40. #39  
    Suspended
    Join Date
    Apr 2008
    Posts
    2,176
    Quote Originally Posted by serpicojr
    I don't take that diss lightly, but I would be happy to oblige you. Why don't you start by reading the excellent thread on set theory that Guitarist has been running?
    I do not even want an explanation of what it is doing. I just would like to know what is being calculated.


    Sincerely,


    William McCormick
    Reply With Quote  
     

  41. #40  
    Forum Professor serpicojr's Avatar
    Join Date
    Jul 2007
    Location
    JRZ
    Posts
    1,069
    We are studying algebraic numbers, i.e. roots of polynomials with rational coefficients. We are trying to extend the notion of "prime factorization" to algebraic numbers at the moment. The first thing I did was to work out an example, namely the case when you consider the roots of the polynomial X<sup>2</sup>+1, i.e. ±i. So what is being calculated? Eventually, some sort of prime factorization of algebraic numbers. But we're not there yet.

    If you really want to understand what all the symbols and words mean, it's not enough for me to provide you with some sort of legend. No mathematician of any level would be able to make what I've presented so far accessible to people of all levels without destroying the goal for which I'm aiming: a rigorous, detailed introduction to algebraic number theory. If you want a deeper explanation than that which I gave above, you have to put time and effort into understanding the math that leads up to it. Of course, we are willing and ready to help. You can either start by reading the set theory thread, or the explanation I gave you above will have to suffice.
    Reply With Quote  
     

  42. #41 Re: Algebraic Number Theory 
    Forum Freshman
    Join Date
    Oct 2007
    Posts
    57
    Quote Originally Posted by serpicojr
    (...)

    Now let me develop some of the basic questions we'll be asking. An algebraic number is a number which satisfies a polynomial with coefficients in Q, the rationals. 2<sup>1/2</sup> and i are some famous algebraic numbers, the former satisfying x<sup>2</sup>-2, the latter x<sup>2</sup>+1.

    Any algebraic number α (that's supposed to be an alpha) determines a finite field extension of Q, the field K = Q(α). In general, we'll call a finite extension of Q a number field. I'm assuming you've seen this before if you're reading this, but let's recall that this ring is the same thing as:
    (...)
    I'm probably going to ask a very stupid question, but I already got confused by the very first definition of algebraic numbers.

    Why do algebraic numbers only satisfy polynomials with coefficients in Q, when the examples mentioned actually have coefficients in R (like 2<sup>1/2</sup>) and C (like i)? (<- my first TeX! :-D )

    ETA:
    Ignore my question! Just saw that I mixed up coefficients and roots !
    Reply With Quote  
     

  43. #42  
    Forum Ph.D.
    Join Date
    Apr 2008
    Posts
    956
    Quote Originally Posted by serpicojr
    Let me start with an example. Let's let K = Q(i). I claim the ring of integers is Z[i]. Indeed, to see this, suppose z = x+iy is an algebraic integer. If y = 0, then x is a rational number, and so x must be a rational integer. (Exercise 1: prove this!)
    So we wanna prove that if x satisfies a polynomial of the form X<sup>n</sup>+a<sub>n-1</sub>X<sup>n-1</sup>+...+a<sub>1</sub>X+a<sub>0</sub> with the a<sub>k</sub> in Z, and x is in Q, then x must be an integer, right?

    Let x = p/q, where p, q are integers, q > 0, and gcd(p, q) = 1.

    Then

    p<sup>n</sup>+a<sub>n-1</sub>p<sup>n-1</sup>q+...+a<sub>1</sub>pq<sup>n-1</sup>+a<sub>0</sub>q<sup>n</sup> = 0

    p<sup>n</sup> = -q(a<sub>n-1</sub>p<sup>n-1</sup>+...+a<sub>1</sub>pq<sup>n-2</sup>+a<sub>0</sub>q<sup>n-1</sup>).

    Now q divides the right-hand side and the right-hand side equals p<sup>n</sup>; hence q divides p<sup>n</sup>.

    Since gcd(p, q) = 1, it follows that q = 1. Thus x = p is an integer.
    Reply With Quote  
     

  44. #43  
    Forum Freshman Faldo_Elrith's Avatar
    Join Date
    Jul 2008
    Posts
    76
    Where is Exercise 2?
    Reply With Quote  
     

  45. #44  
    Forum Ph.D.
    Join Date
    Apr 2008
    Posts
    956
    Quote Originally Posted by serpicojr
    We can now completely determine the units of Z[i]. I leave the determination of this finite set as Exercise 2.
    That’s Exercise 2.
    Reply With Quote  
     

  46. #45  
    Suspended
    Join Date
    Apr 2008
    Posts
    2,176
    Quote Originally Posted by serpicojr
    We are studying algebraic numbers, i.e. roots of polynomials with rational coefficients. We are trying to extend the notion of "prime factorization" to algebraic numbers at the moment. The first thing I did was to work out an example, namely the case when you consider the roots of the polynomial X<sup>2</sup>+1, i.e. ±i. So what is being calculated? Eventually, some sort of prime factorization of algebraic numbers. But we're not there yet.

    If you really want to understand what all the symbols and words mean, it's not enough for me to provide you with some sort of legend. No mathematician of any level would be able to make what I've presented so far accessible to people of all levels without destroying the goal for which I'm aiming: a rigorous, detailed introduction to algebraic number theory. If you want a deeper explanation than that which I gave above, you have to put time and effort into understanding the math that leads up to it. Of course, we are willing and ready to help. You can either start by reading the set theory thread, or the explanation I gave you above will have to suffice.
    I will take a legend if you do not mind.


    Sincerely,


    William McCormick
    Reply With Quote  
     

  47. #46  
    Forum Freshman Faldo_Elrith's Avatar
    Join Date
    Jul 2008
    Posts
    76
    Quote Originally Posted by serpicojr
    Let's introduce one right now. The norm of a Gaussian integer z = x+iy is the nonnegative real number N(z) = (x+iy)(x-iy) = x<sup>2</sup>+y<sup>2</sup>. This is the square of the usual complex modulus, and we've already seen this number as the constant term in the minimal polynomial for z. Note that N is multiplicative--letting * denote complex conjugation, we have:

    N(zw) = (zw)(zw)* = zz*ww* = N(z)N(w)

    We can immediately use the norm to tell us about the units of Z[i]. Recall that an element u of a (commutative) ring (with identity) R is a unit if it has a multiplicative inverse in R. The following is a specific instance of a more general fact.

    Theorem: u is a Gaussian unit iff N(u) = 1.

    <u>Proof</u>: If u is a Gaussian unit, then u<sup>-1</sup> is also a Gaussian integer. So N(u) and N(u<sup>-1</sup>) must be nonnegative integers. But N(u)N(u<sup>-1</sup>) = N(uu<sup>-1</sup>) = N(1) = 1, and so we must have N(u) = 1.

    Conversely, if N(u) = 1, then if we let * denote complex conjugation, we have uu* = 1. u* = u<sup>-1</sup>, and since u is in Z[i], we must have u* is, too. Thus u is a unit. ☐

    We can now completely determine the units of Z[i]. I leave the determination of this finite set as Exercise 2.
    So the Gaussian units are ?
    Reply With Quote  
     

  48. #47  
    Forum Ph.D.
    Join Date
    Apr 2008
    Posts
    956
    Quote Originally Posted by Faldo_Elrith
    So the Gaussian units are ?
    Not quite. A Gaussian integer must have integer real and imaginary parts. Hence the set of all Gaussian units is {1, -1, i, -i}.
    Reply With Quote  
     
