Any interest in this forum for codes and ciphers?
Obviously X = 81094!
yup, and RSA as well
Is there an earlier thread on this topic?
Lol - I have a one million dollar prize for anyone interested in RSA!
I love cryptography.
That is cool, man.
Is it the one that asks for factorizing a thousand-digit number?
How is that possible?
No, it is proving that RSA is secure - otherwise known as the P = NP problem.
Aka, proving the existence of one-way functions, right? BTW, why can't PSPACE-complete problems be used to generate one way functions (by constructing an instance with the desired answer)? Obviously NP-complete problems would only work if P!=NP, but is there something else against using PSPACE or EXPTIME-complete problems?
It's rather to prove that multiplying two large primes together gives you a one-way function (i.e. that factorising is hard!).
How would you encrypt data quickly with a PSPACE-complete problem? RSA works because the modular arithmetic is quick.
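To make the "mod arithmetic is quick" point concrete, here is a minimal textbook-RSA sketch in Python. The primes and exponents are the standard tiny textbook values, not anything from this thread, and there is no padding - this is an illustration, not usable crypto. (`pow(e, -1, phi)` needs Python 3.8+.)

```python
# Toy textbook RSA: the heavy lifting is just modular exponentiation,
# which Python's built-in three-argument pow(base, exp, mod) does quickly.

p, q = 61, 53            # tiny primes (real RSA uses ~1024-bit primes)
n = p * q                # 3233, the public modulus
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: e*d = 1 (mod phi)

m = 65                   # a message, encoded as a number < n
c = pow(m, e, n)         # encrypt: m^e mod n
assert pow(c, d, n) == m # decrypt: c^d mod n recovers m
```

Both `pow` calls run in time polynomial in the number of digits, which is why encryption and (with the key) decryption are fast even though factoring n is believed to be hard.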
Hey, I am interested in the number theory applied in cryptography, but I don't know much.
Is the 'mod arithmetic' you are referring to the Euclidean algorithm?
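The Euclidean algorithm is one part of it: RSA key generation uses the extended Euclidean algorithm to find the private exponent d with e*d ≡ 1 (mod φ(n)). A minimal sketch (the function names here are just for illustration):

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    """Inverse of a mod m, which exists exactly when gcd(a, m) == 1."""
    g, x, _ = egcd(a, m)
    if g != 1:
        raise ValueError("no inverse")
    return x % m
```

The day-to-day 'mod arithmetic' of encrypting and decrypting, though, is modular exponentiation (repeated squaring), not the Euclidean algorithm.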
Well, I was thinking something along the lines of generating an instance of a very hard problem (NP-complete or PSPACE-complete or something) such that it provably has only one solution (which would be your private key). Then that problem instance would be your public key. I don't know exactly how you'd go about applying the two though. Also, I don't know if generating such an instance would compromise its hardness....
The NP idea is exactly what RSA does (EDIT: finding a prime factor is an NP problem). But if NP = P then RSA is not secure, as you could rapidly break the encryption.
The problem with PSPACE problems (or anything harder than NP) is that I don't think you can check solutions in polynomial time - the fact that NP problems have solutions verifiable in polynomial time is important.
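That verification gap is easy to see with factoring: checking a claimed factorisation is a single multiplication, while finding a factor by brute force takes roughly sqrt(n) steps. A toy illustration (the semiprime below is just a made-up example):

```python
n = 10007 * 10009   # a small semiprime (both factors are prime)

# Verifying a certificate (the claimed factors) is one multiplication:
assert 10007 * 10009 == n   # instant, regardless of how n was found

# Finding a factor with no hints takes ~sqrt(n) trial divisions:
def smallest_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n   # n itself is prime
```

For a 2048-bit modulus the verification is still instant, while sqrt(n) trial divisions (or even the best known sieve algorithms) are utterly infeasible - that asymmetry is what "checkable in polynomial time" buys you.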
Since when? Factoring is obviously in NP, but I've always thought it wasn't known to be NP-complete (or in P, of course).
Factoring being in P wouldn't mean RSA is sunk either. P is just a statement about asymptotic bounds on an algorithm's run time; what happens in the range of inputs used in practice is what matters more. If you had a factoring algorithm that takes 10^100 * n^10000 steps, n the number of digits, it would be worse than the best non-polynomial-time algorithm we currently have if you were attacking the RSA challenge numbers, for example. For 'large enough' inputs the polynomial-time algorithm wins, but 'large enough' may be a long way off.
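The 'large enough' point can be made concrete even with much milder constants than those. Take a hypothetical polynomial-time algorithm costing 10^6 * n^3 steps against a plain 2^n exponential one (both cost functions are made up purely for illustration):

```python
# Smallest input size n at which the exponential algorithm's step count
# overtakes the large-constant polynomial one.
def crossover():
    n = 1
    while 2 ** n <= 10 ** 6 * n ** 3:
        n += 1
    return n
```

Here `crossover()` returns 36: below n = 36 the "bad" exponential algorithm is actually cheaper. With constants like 10^100 * n^10000 the crossover would be astronomically far beyond any input anyone will ever run.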
From the other direction: if you could show factoring is not in P, it wouldn't necessarily mean RSA is secure.
Oops, you're right - call it a brain fart. What I meant was that proving P = NP would prove factoring is in P.
Parallel processing is important for the discussion, though - given a polynomial-time algorithm and a large server farm you could still handle the large factors, at least within reason.
That's true even without a polynomial-time factoring algorithm. The various sieve algorithms scale very well with parallel processing, so given enough hardware you can factor a given number in as short a time as you like (with some restrictions on communication between computers and organizing the whole thing, of course).
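A sketch of the splitting-the-work idea: partition the search range across workers, each hunting for a divisor independently. This uses trial division rather than a real sieve (the number field sieve's parallel structure is far more involved), the semiprime is made up for the example, and threads are used only to keep the sketch short - genuinely CPU-bound work would want processes or separate machines.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def scan(n, lo, hi):
    """Look for a divisor of n in [lo, hi); each worker gets one chunk."""
    for d in range(lo, hi):
        if n % d == 0:
            return d
    return None

def parallel_factor(n, workers=4):
    """Split [2, sqrt(n)] into `workers` chunks and scan them concurrently."""
    limit = math.isqrt(n) + 1
    chunk = (limit - 2) // workers + 1
    ranges = [(2 + i * chunk, min(2 + (i + 1) * chunk, limit))
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        results = ex.map(lambda r: scan(n, *r), ranges)
    hits = [d for d in results if d is not None]
    return min(hits) if hits else None  # None => n is prime
```

The wall-clock time drops roughly linearly with the number of workers, which is exactly the "double the hardware, halve the time" scaling discussed below - independent of whether the per-worker algorithm is polynomial or exponential.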
The resources of your attackers are always under consideration when worrying about security.
Your benefit when using parallel processing with a polynomial-time algorithm is in general quite a lot better than with an exponential-time algorithm - given as much computing power as we have at the moment, factorising RSA-2048 would still take a while (even with a general number field sieve the number of calculations is of the order 10^14).
Why? Does doubling the number of processors do better than halving the time needed for a polynomial-time algorithm?
I'm comparing the growth of p(n)/n (p(n) a polynomial) to e^p(n)/n - hence the general comment.
I don't follow you. How is this comparing multiple processors for a polynomial-time algorithm vs. an exponential-time algorithm (or other)? Are you suggesting the number of processors available is related to the input size?
No - it's just so that I am comparing the same processor growth rates in both cases. Feel free to substitute your own growth function; it was just the simplest one I could think of.
Okie - doubling the number of processors will halve the time it takes to run a quadratic sieve (for example). The run time of the algorithm is totally irrelevant to how adding more processors affects the total time: double the processors => halve the time (not quite, but close enough).
This is why I'm confused that you would say a polynomial time algorithm would benefit more from parallel processing. I don't see why you would link the input size and the number of processors and say this is still a comparison between how multiple processors affect the time when the input sizes are changing along with the number of processors. You're essentially saying "given access to the same hardware, the poly. algo. is better than the exp. algorithm as the input size grows." This is of course true and has nothing to do with parallel processing.
Of course there's also no guarantee a poly. time algorithm would benefit at all from parallel processing.
I thought we were comparing best hardware examples for an attacker?
A polynomial algorithm does not benefit more or less or in any general way (as that depends on the algorithm); what I am saying is that, in general, you can get a polynomial-time algorithm to give you results within reasonable time using parallel processing, as compared to an exponential-time one given the same hardware for both.
Err, sure.
You started comparing how parallel processing affects a poly. vs. exp. algorithm with:
but now you say:
These are saying completely different things to me. The second I agree with, not the first.
This only depends on how long it takes to get the result for the given input on one CPU, and not at all on whether it was a polynomial-time or an exponential-time algorithm (assuming both scale with more processors).
They say different things because they mean different things - perhaps I'm not being very clear here - it's late on my side of the world.
First quote: benefit in relation to the poor guy waiting for the response, given the best server farm he can afford.
Second quote: benefit with respect to the actual processing of the algorithm - some algorithms parallelise well and others don't.
My point is that in general p(n) < e^q(n) for random p and q, and thus your 'wait time' is less for P programs than for EXPTIME ones.
Well, the thing is, there is a tendency for polynomial-time algorithms to improve over time.
Anyway, my idea with the PSPACE-complete problem is that it would certainly take exponential time to work out a solution. Would it actually be a problem that it takes a long time to check whether a given solution works? Assuming you built the instance in such a way that you know what the solution is, couldn't you use that as a key?
I suppose an NP-complete problem would be good in that a given solution could be checked quickly, but it would still be hard to find a solution (again assuming P != NP). In either case, how would you go about building a public-key system with it, though?
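There was a real attempt along exactly these lines: the Merkle-Hellman knapsack cryptosystem builds a public key from a subset-sum instance (subset sum is NP-complete in general). A minimal sketch using the classic small textbook parameters - and note the cautionary ending: the scheme was broken by Shamir in the early 1980s, because the disguised instances turn out not to be hard ones. (`pow(r, -1, q)` needs Python 3.8+.)

```python
# Private key: a superincreasing sequence (each term exceeds the sum of
# all earlier terms), a modulus q > sum(w), and a multiplier r coprime to q.
w = [2, 7, 11, 21, 42, 89, 180, 354]
q, r = 881, 588

# Public key: the disguised, hard-looking subset-sum weights.
b = [(r * wi) % q for wi in w]

def encrypt(bits):
    """Ciphertext = subset sum of the public weights selected by the bits."""
    return sum(bi for bit, bi in zip(bits, b) if bit)

def decrypt(c):
    """Undo the disguise, then solve the easy superincreasing instance."""
    s = (c * pow(r, -1, q)) % q        # back to a sum of selected w's
    bits = [0] * len(w)
    for i in reversed(range(len(w))):  # greedy, largest weight first
        if w[i] <= s:
            bits[i], s = 1, s - w[i]
    return bits

msg = [0, 1, 1, 0, 0, 0, 0, 1]
assert decrypt(encrypt(msg)) == msg
```

Decryption works because, after multiplying by r^-1 mod q, the attacker-hard instance collapses to a superincreasing one that a greedy pass solves instantly - which is precisely the trapdoor structure Shamir's attack exploits. It's a good example of the gap between "the general problem is NP-complete" and "the instances my keys generate are hard".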