
Question on P = NP
Without exaggeration, does P = NP mean that P (problems solvable in polynomial time, say 1 second, on a deterministic Turing machine) must be equivalent to NP (problems solvable in polynomial time, say 1 second, on a nondeterministic Turing machine)? For example:
P: Find a subset that sums to 0 in polynomial time, say 1 second, on a deterministic Turing machine (e.g. the usual traditional computer, which takes only one possible action at each step and runs sequentially).
NP: Find a subset that sums to 0 on a “specialized” computer, i.e. a nondeterministic Turing machine (e.g. a supercomputer with a multithreaded processor and large memory that can spawn a different thread for every possible solution and check them all at the same time, such that each individual verification thread runs in polynomial time, say 1 second, and they all run simultaneously).
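To make the asymmetry concrete, here is a minimal Python sketch (the function names and numbers are mine, purely illustrative): verifying one candidate subset is cheap, but a deterministic machine must work through exponentially many candidates — exactly the work the hypothetical nondeterministic machine spreads across its parallel threads.

```python
from itertools import combinations

def verify(candidate, target=0):
    """Check one proposed subset: linear in its size.
    This is the cheap step each NP 'thread' performs."""
    return len(candidate) > 0 and sum(candidate) == target

def brute_force_subset_sum(nums, target=0):
    """Deterministically try every non-empty subset: 2^n - 1 candidates.
    Each individual check is fast, but the number of checks
    grows exponentially with len(nums)."""
    for r in range(1, len(nums) + 1):
        for subset in combinations(nums, r):
            if verify(list(subset), target):
                return list(subset)
    return None  # no subset sums to the target

print(brute_force_subset_sum([3, -4, 1, 8]))  # → [3, -4, 1]
```

For 4 numbers that is only 15 subsets, but for 100 numbers it is about 10^30 — the verification step stays easy while the search space explodes.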

It says more than that: finding a solution in O(size^n) on a nondeterministic Turing machine implies the existence of a program that finds a solution in O(size^(n*m)) on a deterministic Turing machine. So the running times do not even need to be comparable (m could be 10^100, for example, with some huge constant hidden away in the big-O notation).
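A quick illustration of why those two polynomial bounds need not be comparable in practice (the concrete values of size, n, and m are my own, chosen only to show the scale):

```python
# Both size**n and size**(n*m) are "polynomial in size" for fixed n, m,
# yet the deterministic bound can be astronomically larger.
size, n, m = 10, 3, 100

steps_ntm = size ** n        # nondeterministic bound: 10**3 = 1000 steps
steps_dtm = size ** (n * m)  # deterministic bound: 10**300 steps

print(steps_ntm)             # → 1000
print(len(str(steps_dtm)))   # → 301 (a 301-digit number of steps)
```

Both counts are polynomial in the input size, which is all P = NP demands; nothing says the deterministic one must be usable.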

Condition 1: In essence, does that mean that if someone has an algorithm running on a supercomputer that spawns a different thread for every possible solution and checks them all at the same time — each individual verification thread running in polynomial time, all simultaneously — and thereby quickly identifies a subset that sums to 0, would such a person have proved that P = NP?
Condition 2: Or, to prove that P = NP, does someone first need an algorithm that finds a subset that sums to 0 in polynomial time on a deterministic Turing machine, i.e. on a normal, usual computer?
Condition 3: Or is it compulsory to have an algorithm that works under both conditions 1 and 2?

It's condition 2, but as river_rat pointed out, the deterministic algorithm might still run a whole lot slower than the nondeterministic one; if P = NP, though, it must run in some polynomial time.
The reason condition 1 doesn't work is the nature of exponentials. Take an n^10 algorithm and a 2^n algorithm, and say you've optimized both so that each solves an n=100 instance in one hour. Now you buy a computer that's 1,000,000 times faster. How much bigger a problem can you run and still keep it under an hour? With the n^10 algorithm, you can now run up to n=398 (multiply by the 10th root of a million). With the 2^n algorithm, you can only run up to n=120 (add the log base 2 of a million). That difference between adding and multiplying means no supercomputer will ever run an n=1,000,000 instance of a 2^n problem.
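The two numbers above fall straight out of the back-of-the-envelope algebra; a short check (my own sketch of the same arithmetic):

```python
import math

# Both algorithms solve n = 100 in one hour; the machine gets 10^6 times faster.
speedup = 1_000_000

# n^10 algorithm: new_n**10 = speedup * 100**10, so multiply n by speedup**(1/10).
poly_n = 100 * speedup ** (1 / 10)
print(round(poly_n))  # → 398

# 2^n algorithm: 2**new_n = speedup * 2**100, so add log2(speedup) to n.
exp_n = 100 + math.log2(speedup)
print(round(exp_n))   # → 120
```

Multiplying n versus adding to n is the whole story: the polynomial algorithm turns raw speed into a multiplicative gain, the exponential one only a tiny additive bump.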
There's also the fact that whenever a polynomial algorithm with a huge exponent has been discovered, someone manages something clever to reduce it to something reasonable. While the base of exponential algorithms also tends to get reduced somewhat, they remain exponential.