Hey all,

I'm a biophysics student using stochastic models of chemical reactions to understand some biological phenomena. Here's the problem, posed as concisely as possible; it's really a combination of a physics problem and a comp sci / algorithmic one.

GLOBAL PROBLEM: I have a computationally intensive simulation; even running it in parallel across 120 nodes on a supercomputer, testing 50,000 parameter sets of my model takes > 24 hours.

GOAL: Reduce computing time.

SUMMARY OF PROBLEM: I'm trying to get an unbiased estimate of the noise (variance) of a stochastic process. Given an initial value P_0, where P_0 equals the expected equilibrium / mean value of P, how long (T) do I need to simulate my process so that the system has forgotten it started at P_0? I would then take that minimal time T, run N independent simulations of length T, take the value at time T from each run, and use those endpoint values for statistics, etc.
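For concreteness, here's a minimal sketch of that sampling protocol. Everything here is a stand-in I made up for illustration: `simulate_to` is a toy Ornstein-Uhlenbeck process relaxing toward P_0 (not my real model), and the function names are just placeholders.

```python
import numpy as np

def simulate_to(T, dt, P0, rng):
    """Placeholder stochastic simulation: Euler-Maruyama on an
    Ornstein-Uhlenbeck process (theta = 1, sigma = 1) relaxing
    toward its mean P0. The real model would be swapped in here."""
    P = P0
    for _ in range(int(T / dt)):
        P += dt * (P0 - P) + np.sqrt(dt) * rng.normal()
    return P

def endpoint_variance(N, T, dt, P0, seed=0):
    """Run N independent simulations to time T and return the
    unbiased sample variance of the N endpoint values."""
    rng = np.random.default_rng(seed)
    samples = np.array([simulate_to(T, dt, P0, rng) for _ in range(N)])
    return samples.var(ddof=1)
```

For this toy OU stand-in the stationary variance is sigma^2 / (2 * theta) = 0.5, so `endpoint_variance` should land near that once T is several relaxation times, which is exactly the check I want to do on the real model.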

ATTEMPT AT SOLUTION: It occurred to me that an autocorrelation plot sort of achieves what I'm looking for. If you plot autocorrelation vs. lag time and set a threshold, say 'the lag time at which autocorr < 0.1', you should have an estimate of how long the system takes before the signal no longer correlates with itself.
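Roughly, in code (a minimal sketch; the function names are mine, and the mean-subtracted, normalized estimator here is one common choice, not necessarily the only reasonable one):

```python
import numpy as np

def autocorr(x):
    """Normalized sample autocorrelation of a 1-D trace at lags
    0..len(x)-1: subtract the mean, correlate the trace with itself,
    and normalize so lag 0 equals 1."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    c = np.correlate(x, x, mode="full")[len(x) - 1:]
    return c / c[0]

def decorrelation_time(x, dt, threshold=0.1):
    """First lag (in time units, given sampling interval dt) at which
    the autocorrelation drops below the threshold, or None if it never does."""
    below = np.nonzero(autocorr(x) < threshold)[0]
    return below[0] * dt if below.size else None
```

With `dt` the sampling interval of the trace, `decorrelation_time(P, dt)` is the threshold-crossing lag I described above.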

PROBLEM.. FOR YOU: When I implemented this approach, I found that the threshold-crossing 'estimate' varied immensely with the length of time simulated. That is, if I simulated 5 seconds, the autocorrelation threshold was crossed at, say, 4.5 seconds. But when I simulated 50 seconds, it was crossed at 38 seconds. And at 5000 seconds, it crossed at 3600 seconds.
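To make the experiment concrete, here's the kind of harness I mean: measure the threshold crossing at several trace lengths. Everything below is a toy stand-in (`simulate_ar1` replaces my real simulation, and the parameter values are illustrative); dropping my real trace in where the toy one is generated reproduces the numbers above.

```python
import numpy as np

def simulate_ar1(n, phi, rng):
    """Toy stand-in trace: AR(1) noise whose true autocorrelation is phi**lag."""
    x = np.empty(n)
    x[0] = 0.0
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def crossing_vs_length(lengths, phi=0.9, dt=1.0, threshold=0.1, seed=0):
    """For each trace length, report the first lag (in time units) at which
    the sample autocorrelation drops below the threshold."""
    rng = np.random.default_rng(seed)
    out = {}
    for n in lengths:
        x = simulate_ar1(n, phi, rng)
        x -= x.mean()                                  # mean-subtract
        ac = np.correlate(x, x, mode="full")[n - 1:]   # lags 0..n-1
        ac = ac / ac[0]                                # normalize: ac[0] == 1
        below = np.nonzero(ac < threshold)[0]
        out[n] = below[0] * dt if below.size else None
    return out
```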

So my question to you is: why isn't the autocorrelation lag at which the signal no longer correlates with itself an intrinsic property of the system?

This is probably hard to picture, let me know if images would help.