
Thread: A New Way to Determine Primes?

  1. #1 A New Way to Determine Primes? 
    Liongold (Forum Masters Degree)
    Allow me to say that I am not sure if this has been explored previously. Let's say we have a number $N$, and we have to determine if it is prime. We could either try to divide it by all the prime numbers known to us, or, as I realised some time ago, we could write it as the result of an algebraic expression.

    For example, let $N$ be the difference of two squares, $a^2$ and $b^2$, i.e.

    $N = a^2 - b^2$

    It should be instantly clear that if $a - b$ is not equal to $1$, then any positive number that can be expressed as such a difference is composite. If the difference between the two is one, then it appears to me that the only way to determine its nature is through brute prime factorisation.

    Note: $a^2 - b^2 = (a + b)(a - b)$.

    I generalised this statement for algebraic expressions like $a^3 - b^3$ and $a^4 - b^4$. It is even easier to prove that any number expressible as $a^3 + b^3$ is always composite, no matter the difference between $a$ and $b$.

    My reason for explaining all this is that, after trying unsuccessfully to find a suitable identity for $a^2 + b^2$, it strikes me that there are some expressions which cannot be resolved into algebraic identities, and hence there will always be some numbers for which the only method of determining their nature is brute prime factorisation.

    My question, then, is this: why are there some expressions which cannot be broken up into multiplications of two or more factors? Is there a general theory that allows us to determine if a given expression can have a Diophantine expansion (my term; if there is another, more proper name, I would be delighted to know of it), i.e. a factorisation $N = f(a, b)\,g(a, b)$ where $a$ and $b$ are both whole numbers? If there is not, is there any related work on this subject which I may peruse?

    Thanks.
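    For concreteness, here is a minimal sketch of this search in Python (it amounts to what is known as Fermat's factorisation method; the helper name fermat_factor and the restriction to odd $N$ are illustrative choices, not something from the post itself):

        import math

        def fermat_factor(n):
            # Try to write odd n as a^2 - b^2 = (a + b)(a - b).
            # An odd prime admits only the trivial representation with
            # a - b = 1, namely ((n + 1) // 2)**2 - ((n - 1) // 2)**2.
            a = math.isqrt(n)
            if a * a < n:
                a += 1
            while a <= (n + 1) // 2:
                b2 = a * a - n
                b = math.isqrt(b2)
                if b * b == b2:
                    return a - b, a + b  # n = (a - b)(a + b)
                a += 1
            return 1, n

        print(fermat_factor(91))   # (7, 13), since 91 = 10**2 - 3**2
        print(fermat_factor(101))  # (1, 101): only the trivial form, so prime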


    In control lies inordinate freedom; in freedom lies inordinate control.

  2. #2 Re: A New Way to Determine Primes? 
    DrRocket
    Quote Originally Posted by Liongold
    [...] why are there some expressions which cannot be broken up into multiplications of two or more factors? [...]
    If you could always write a polynomial as a product of two polynomials of lower degree, then by continued application you could always write a polynomial as a product of polynomials of degree 1, thereby explicitly finding the roots of the original polynomial.

    There are two theorems that apply here. The Fundamental Theorem of Algebra shows that any polynomial with complex coefficients is factorizable into degree 1 polynomials. However, the theorem is an existence theorem and does not tell you how to find those polynomials in the general case.

    The second theorem comes from Galois theory, and it tells us that there is no general solution "by radicals" for finding the roots of a polynomial of degree 5 or higher.
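    To connect this back to the original question, a quick illustration with a computer algebra system (a sketch using SymPy; the choice of examples is illustrative):

        from sympy import symbols, factor, I

        a, b = symbols('a b')

        print(factor(a**2 - b**2))  # (a - b)*(a + b)
        print(factor(a**3 + b**3))  # (a + b)*(a**2 - a*b + b**2)
        print(factor(a**2 + b**2))  # a**2 + b**2: irreducible over the rationals
        print(factor(a**2 + b**2, extension=I))  # (a - I*b)*(a + I*b)

    The last line shows that factorability depends on which numbers you allow as coefficients, which is exactly where Galois theory enters.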


     

  3. #3 
    Liongold (Forum Masters Degree)
    Thank you for your reply, DrRocket.

    I must confess I am aware of both these theorems, although indirectly: I knew Gauss provided the first proper proof for the fundamental theorem of algebra (although I have yet to read it), and Galois' proof was mentioned in a popular exposition on mathematicians in general by E.T. Bell. Again, I have yet to read the proof; I fear I will have to learn more about group theory before I am competent enough to understand it.

    However, I am still somewhat dissatisfied. I can understand that finding all the roots of a quintic polynomial or one of higher degree is impossible, but then what prevents the existence of, say, $a^2 + b^2$ as, to use your terminology, the product of two polynomials of degree 1? Neither term is quintic, as can be readily seen; surely there must be a more fundamental reason why certain expressions, where the highest degree is less than 5, cannot be factorised?

    I await your reply with great interest.
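    A sketch of the standard answer, in the thread's notation (an editorial reconstruction, not a quotation from the thread): $a^2 + b^2$ does factor into linear terms, but only over the complex numbers,

    $a^2 + b^2 = (a + ib)(a - ib), \qquad i = \sqrt{-1},$

    while over the integers no such factorisation exists: expanding $(pa + qb)(ra + sb) = pr\,a^2 + (ps + qr)\,ab + qs\,b^2$ and matching coefficients forces $pr = qs = 1$ and $ps + qr = 0$; but $pr = qs = 1$ gives $p = r$ and $q = s$, so $ps + qr = 2pq = \pm 2 \neq 0$. The obstruction is irreducibility over the integers, not the degree.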
    In control lies inordinate freedom; in freedom lies inordinate control.
     

  4. #4 
    DrRocket
    Quote Originally Posted by Liongold
    [...] surely there must be a more fundamental reason why certain expressions, where the highest degree is less than 5, cannot be factorised? [...]
     

  5. #5 
    Liongold (Forum Masters Degree)
    Ah. I see.
    In control lies inordinate freedom; in freedom lies inordinate control.
     

  6. #6 
    Ellatha (Guest)
    Quintic formulas do not exist (this has been proven), but there are cubic and quartic formulas. For example, the cubic formula for $ax^3 + bx^2 + cx + d = 0$, $a \neq 0$, is $x = t - \frac{b}{3a}$, where $t = \sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}} + \sqrt[3]{-\frac{q}{2} - \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}}$, $p = \frac{3ac - b^2}{3a^2}$, and $q = \frac{2b^3 - 9abc + 27a^2 d}{27a^3}$.

    However, this formula as a whole is far too complicated; and really, because of Newton's method, we don't need it. Most people who have seen these formulas would agree; calculus makes the solution much more efficient in calculation (where $x_n$ is the solution):

    $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$
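    A small sketch of the iteration in Python (the cubic $x^3 - 2x - 5$ and the starting point are illustrative choices, not the example discussed in this thread):

        def newton(f, fprime, x0, steps=6):
            # Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n)
            x = x0
            for n in range(steps):
                x = x - f(x) / fprime(x)
                print(n + 1, x)
            return x

        newton(lambda x: x**3 - 2*x - 5,   # f
               lambda x: 3*x**2 - 2,       # f'
               x0=2.0)
        # the iterates converge rapidly to 2.0945514815...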
     

  7. #7 
    DrRocket
    Quote Originally Posted by Ellatha
    [...] because of Newton's method, we don't need it; calculus makes the solution much more efficient in calculation [...]
    You have missed the point.

    Newton's method does not provide a solution for cubic polynomials, or anything else, except in a non-constructive sense as a limit of a sequence. $x_n$, in your terminology, is NOT a solution, but only the nth term in a sequence, the limit of which is a solution.

    The point of the formulas for cubic and quartic polynomials is the demonstration that they are solvable in closed form by radicals. One consequence of Galois theory is that quintic and higher order polynomials are not in general solvable by any such method.

    The fundamental theorem of algebra shows that solutions exist. Galois theory tells you that, in general, you can't find an expression for them in terms of radicals. That is deep mathematics.

    Newton's method, and other numerical methods, are simply approximations, the legitimacy of which flows from the fundamental theorem of algebra, which gives one a "hunting license" to go looking for solutions, knowing that solutions exist. But they only yield approximations, and not the actual solution. These methods are not so deep.
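    A quick illustration of this distinction (a sketch with SymPy; the two example polynomials are illustrative):

        from sympy import symbols, solve

        x = symbols('x')

        # Degree 3: solvable by radicals, so closed-form roots come back.
        print(solve(x**3 + x - 2, x))

        # Degree 5: x**5 - x - 1 is not solvable by radicals; SymPy can
        # only name its roots implicitly, as CRootOf objects.
        print(solve(x**5 - x - 1, x))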
     

  8. #8 
    Ellatha (Guest)
    Quote Originally Posted by DrRocket
    [...] But they only yield approximations, and not the actual solution. These methods are not so deep. [...]
    I don't see what the problem is with the approximations Newton's method provides. It is fantastic, and in most cases the error rapidly becomes smaller than the precision of any calculator.

    Consider a function $f$ with derivative $f'$. Let us start at $x_0 = -2$. By the third computation alone, one has a solution with an error well under 0.5. By the fourth computation the error is $8.7 \times 10^{-4}$. By the fifth computation, the value derived, $-1.53273159$, has an error of $1.01837896 \times 10^{-7}$. This is as good a solution as seemingly anybody will need.
     

  9. #9 
    MagiMaster (Forum Radioactive Isotope)
    The basic point is that 1.414213562373095048801688724209(7) is not a solution of $x^2 = 2$, no matter how many digits you know. $\sqrt{2}$, on the other hand, is.
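    One way to make this concrete (a sketch contrasting exact rational arithmetic with the exact symbol $\sqrt{2}$):

        from fractions import Fraction
        from sympy import sqrt

        approx = Fraction("1.4142135623730950488016887242097")
        print(approx**2 == 2)   # False: no terminating decimal squares to exactly 2
        print(sqrt(2)**2 == 2)  # True: the exact symbol is a genuine solution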
     

  10. #10 
    Ellatha (Guest)
    A more important point is that 1.4142135623730950488016887242097 is as good an estimate as you will ever need for $\sqrt{2}$. Whether you're designing a rocket or predicting the weather: Newton's method works brilliantly. If any error value, no matter how negligible, undermined approximations to functions or unknown quantities, then we might as well throw out Newton's method, power series, the linear, quadratic, and cubic approximations of functions, and many other important techniques in calculus.
     

  11. #11 
    DrRocket
    Quote Originally Posted by Ellatha
    [...] Whether you're designing a rocket or predicting the weather: Newton's method works brilliantly. [...]
    A valid point for engineering.

    Utterly irrelevant for mathematics.

    The issue at hand is a mathematics question.

    Power series have uses that are quite a bit more profound than approximating functions. Look at the theory of one complex variable.

    Linear approximation of functions is called differential calculus. I don't think I will throw that out just yet.

    Quadratic and cubic approximations for functions are usually based on limited empirical models, adopted for convenience in restricted applications, and have essentially nothing to do with the question at hand.

    Newton's method works pretty well for some functions, but can also fail for others. It works well near the zero of a polynomial, but can fail badly for a "bad guess". There are other numerical methods besides Newton's method, which is usually taught to calculus students and then forgotten. It is not particularly important in the overall scheme of things.

    BTW, your approximation to $\sqrt{2}$ is probably adequate for designing rockets. (I have been involved in the design of quite a few. How about you?)
    It is not adequate for extremely long range weather forecasts, which are notoriously poor due to sensitivity to initial conditions, which also translates into sensitivity to round-off error in numerical simulations.
     

  12. #12 
    Ellatha (Guest)
    Quote Originally Posted by DrRocket
    Utterly irrelevant for mathematics.
    If it were utterly irrelevant to mathematics then Newton wouldn't have wasted his time writing a paper on it, nor would Raphson have.

    Quote Originally Posted by DrRocket
    Power series have uses that are quite a bit more profound than approximating functions. Look at the theory of one complex variable.
    Yes they do; the Taylor series is a special type of power series, for example.

    Quote Originally Posted by DrRocket
    Linear approximation of functions is called differential calculus. I don't think I will throw that out just yet.
    An interpretation of the derivative is as a local linear approximation to a function: a function can be approximated at $x$ near $a$ by the formula $f(x) \approx f(a) + f'(a)(x - a)$.

    A simple proof of this: by the definition of the derivative,

    $f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}$

    so for $x$ near $a$,

    $\frac{f(x) - f(a)}{x - a} \approx f'(a)$

    and hence $f(x) \approx f(a) + f'(a)(x - a)$.
    Quote Originally Posted by DrRocket
    Quadratic and cubic approximations for functions are usually based on limited empirical models, adopted for convenience in restricted applications, and have essentially nothing to do with the question at hand.
    Quadratic approximations have a similar formula, that is $f(x) \approx f(a) + f'(a)(x - a) + \frac{f''(a)}{2}(x - a)^2$.

    Consider the function $f(x) = ax^2 + bx + c$; then $f'(x) = 2ax + b$ and $f''(x) = 2a$, and let's approximate it at $x$ near $0$. Then, plugging these values into the quadratic approximation, we find that $f(x) \approx c + bx + \frac{2a}{2}x^2 = ax^2 + bx + c$. Therefore, our formula for the quadratic approximation is correct, since a quadratic approximation of a quadratic equation should yield that very quadratic equation, and we have just checked this for all quadratics, since we used the general formula and indeed it does work out.
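    Both formulas are easy to check numerically (a sketch; taking $f = \exp$ near $a = 0$ is an illustrative choice):

        import math

        def linear_approx(f, fp, a, x):
            # f(x) ~ f(a) + f'(a)(x - a)
            return f(a) + fp(a) * (x - a)

        def quadratic_approx(f, fp, fpp, a, x):
            # f(x) ~ f(a) + f'(a)(x - a) + f''(a)/2 * (x - a)**2
            return f(a) + fp(a) * (x - a) + fpp(a) / 2 * (x - a) ** 2

        f = fp = fpp = math.exp  # exp is its own derivative
        print(math.exp(0.1))                           # 1.10517091...
        print(linear_approx(f, fp, 0.0, 0.1))          # 1.1
        print(quadratic_approx(f, fp, fpp, 0.0, 0.1))  # 1.105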

    Quote Originally Posted by DrRocket
    Newton's method works pretty well for some functions, but can also fail for others. It works well near the zero of a polynomial, but can fail badly for a "bad guess". There are other numerical methods besides Newton's method, which is usually taught to calculus students and then forgotten. It is not particularly important in the overall scheme of things.
    I'd say that it works far better than pretty well. You are correct in saying that the base point has to be somewhat near the root, but I don't believe that this is really that great of a problem. We can use the intermediate value theorem (or more specifically Bolzano's theorem) to check whether there is a root, and it also works pretty well for determining x values near the root.

    Quote Originally Posted by DrRocket
    BTW, your approximation to $\sqrt{2}$ is probably adequate for designing rockets. (I have been involved in the design of quite a few. How about you?)
    It is not adequate for extremely long range weather forecasts, which are notoriously poor due to sensitivity to initial conditions, which also translates into sensitivity to round-off error in numerical simulations.
    I built a toy rocket when I was younger using clay, but other than that I have no such experience. We use differential calculus often when predicting the weather; for example, if T is temperature, then dT/dx is the temperature gradient. An example from biology would be this: if N is population size, then dN/dt is the instantaneous rate of population growth.
     

  13. #13 
    DrRocket
    Quote Originally Posted by Ellatha
    If it were utterly irrelevant to mathematics then Newton wouldn't have wasted his time writing a paper on it, nor would Raphson have.
    You apparently did not read what I said. The approximation to $\sqrt{2}$ is utterly irrelevant to mathematics. It is applicable to engineering and to physics, and Newton's interests were broader than just pure mathematics.

    Moreover, you are attempting to show some sort of precocious knowledge, but are really demonstrating a lack of knowledge of the context in which the mathematics at hand is used. You have quite a bit more to learn.

    One thing that you need to learn is that the true nature of differential calculus is local approximation of a function by a linear function. This is not an "interpretation", but is in reality the correct definition if one is interested in generalizing the notion to functions of several variables. It is in fact the basis for multivariable differential calculus, and while usually just called the derivative it is sometimes found as the Frechet derivative. The important thing is that the derivative at a point is a linear function and not just a number. In the case of a function of one variable, a single number is also a linear operator, or equivalently a 1x1 matrix corresponding to the usual basis for the real numbers.
     

  14. #14 
    Ellatha (Guest)
    Quote Originally Posted by DrRocket
    You apparently did not read what I said. The approximation to $\sqrt{2}$ is utterly irrelevant to mathematics. It is applicable to engineering and to physics, and Newton's interests were broader than just pure mathematics.
    You should have said that you were referring to the square root of two in particular, then. Of course, since your post referred to my post, and my post referred to the utility of approximations (in particular Newton's method) as a whole, it seems more plausible that you misinterpreted what I said.

    Quote Originally Posted by DrRocket
    Moreover, you are attempting to show some sort of precocious knowledge, but are really demonstrating a lack of knowledge of the context in which the mathematics at hand is used.
    Calculus of one variable? I assure you I have no intention of attempting to make myself appear precocious or anything of the sort.

    Quote Originally Posted by DrRocket
    You have quite a bit more to learn
    I don't disagree whatsoever.

    Quote Originally Posted by DrRocket
    One thing that you need to learn is that the true nature of differential calculus is local approximation of a function by a linear function.
    I recall mentioning this in my post, although I specifically used the word "interpretation" (which you have apparently taken issue with); but I'd hardly say that it suggests I don't understand differential calculus.

    Quote Originally Posted by DrRocket
    This is not an "interpretation", but is in reality the correct definition if one is interested in generalizing the notion to functions of several variables. It is in fact the basis for multivariabl differential calculus, and while usually just called the derivative is sometimes found as the Frechet derivative. The important thing is that the derivative at a point is a linear functin and not just a number. In the case of a function of one variable, a single number is also a linear operator or equivalently a 1x1 matrix corresponding to the usual basis for the real numbers.
    As far as I have understood, multivariable differential calculus relies, rather, on what's known as a partial derivative: the derivative evaluated with respect to a single variable, with the other variables all held constant.

    As a whole, differential calculus has its foundations in the work of Rene Descartes and Pierre de Fermat; Descartes' work on finding the equation of the tangent line to conic sections resulted in his discovering the method of tangents. Fermat used geometric series (nowadays we use Riemann sum notation) to evaluate integrals of certain polynomials (although less brilliant than Archimedes finding the area under the curve of a parabola, Fermat's method was adopted by Newton and later Leibniz, and in fact, Leibniz's interpretation of the integral is based around the limit of a net accumulation of rectangular areas under a curve). Integral calculus, although far more difficult, had been encountered much earlier than differential calculus (back to the time of the ancient Greeks).

    Now back to the topic at hand: it seems as if our discussion is drifting off subject; I may or may not choose to reply to your next post, although if you raise certain points that seem to require a reply, then I will do so. Otherwise, I'd rather use my time actually learning something than use it to type up an arbitrarily large post.
     

  15. #15 
    DrRocket
    Quote Originally Posted by Ellatha
    As far as I have understood, multivariable differential calculus relies, rather, on what's known as a partial derivative: the derivative evaluated with respect to a single variable, with the other variables all held constant.

    This may be what you understand, but it is not accurate. It simply reflects the fact that you don't understand multivariable calculus yet, or even the real basis for ordinary calculus. That will come with time and more study. It will come sooner if you drop the pretense of thinking that you understand this better than people who have studied the subject for years and been professionally engaged in mathematics for longer than you have been alive, and just listen a bit.

    It is possible and in fact preferable to define the derivative of a function on Euclidean space (extendable to a Banach space fairly easily) without reference to any particular basis or coordinate system. Partial derivatives only enter the picture when you represent the derivative as a matrix.

    The fundamental idea in differential calculus remains the approximation of a function, locally, with a linear function. This is a recurring theme in mathematics: the study of a complicated object (a fairly general class of functions) by reducing the problem to the study of simpler objects (here, linear functions).

    Here is a short discussion of the notion of a Frechet derivative, which lies at the heart of the matter. Don't get too excited about the talk about Banach spaces -- $\mathbb{R}^n$ is a Banach space and it will do you no harm to restrict your attention to that simpler context.

    You can find accessible treatments of the general idea of a derivative in almost any textbook at the level of advanced calculus. Some good ones are Principles of Mathematical Analysis by Rudin, Elements of Real Analysis by Bartle, and Calculus on Manifolds by Spivak.
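    A numerical sketch of the point that the derivative at a point is a linear map: for $f : \mathbb{R}^2 \to \mathbb{R}^2$ the derivative is a matrix (the Jacobian), and it approximates $f$ locally to first order. The particular $f$ below is an illustrative choice:

        import numpy as np

        def f(v):
            x, y = v
            return np.array([x**2 * y, x + np.sin(y)])

        def derivative(v):
            # The derivative of f at v is this linear map (a matrix),
            # not a single number.
            x, y = v
            return np.array([[2*x*y, x**2],
                             [1.0,   np.cos(y)]])

        v = np.array([1.0, 2.0])
        h = np.array([1e-3, -2e-3])

        err = f(v + h) - (f(v) + derivative(v) @ h)
        print(np.linalg.norm(err))  # second order in |h|: about 3e-6 here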
     

  16. #16 
    Ellatha (Guest)
    And what exactly is the problem with learning more advanced mathematics at an early age? Leibniz received his Ph.D. at an age only three years more than mine is now, Newton developed calculus at only twenty-six, Gauss entered university at fifteen, and Einstein mastered differential and integral calculus before fifteen, to name a few. Who would blame these people for not limiting themselves to needless repetition?
     

  17. #17 
    DrRocket
    Quote Originally Posted by Ellatha
    [...] Who would blame these people for not limiting themselves to needless repetition? [...]
    There is nothing wrong with learning advanced mathematics at an early age.

    The trick is in learning it, and not just thinking that you know more than you do.

    What does "needless repetition" have to do with anything relevant to this thread?
     
