
Simulating a continuous-time semi-Markov state machine and changing transition probabilities on the fly

Problem Detail: 

The problem I'm trying to solve (well, I think I've almost solved it, but I need a review from someone more experienced) is how to change the probability of a transition in a semi-Markov state machine during simulation.

Let's assume that M = <S, T> defines a machine with states S and probabilistic transitions T between states. Elements of T are triplets (s_out, s_in, lambda), so the waiting time for the transition from s_out to s_in is exponentially distributed: the probability that it has not yet fired after time t is exp(-lambda*t).

To simulate it, I draw a random value from [0, 1), transform it into an exponentially distributed waiting time, and save it as the next event. What happens is that after saving it, but before it fires, another event may occur (so it is not a pure Markov state machine) and change the lambda. The scheduled event time should then be recalculated.
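The scheduling step described here can be sketched in Python via inverse-transform sampling (a minimal sketch; the rate value and function name are mine, not from the post):

```python
import math
import random

def draw_exponential(lam, rng=random):
    """Inverse-transform sampling: if U ~ Uniform[0, 1), then
    -ln(1 - U) / lam is Exponential(lam)-distributed."""
    u = rng.random()              # uniform draw in [0, 1)
    return -math.log(1.0 - u) / lam

# Schedule the next transition event (assumed rate, for illustration):
lam = 0.5
next_event_time = draw_exponential(lam)
```

In practice Python's `random.expovariate(lam)` does the same transform internally, but the explicit version mirrors the "draw uniform, transform to exponential" step in the question.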

The problem is that it cannot simply be recalculated every time, as such changes may occur quite frequently, constantly shifting the event time further into the future.

What I think is correct is to rescale the recorded event time by the ratio of the rates: time_to_event_from_now * lambda_old / lambda_new. Does that sound right to you? Or does the problem description remind you of a paper or book where this problem is solved?
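As a quick empirical sanity check of the proposed rescaling (a sketch with made-up rates, not from the post): by memorylessness, the remaining time to the event is Exponential(lambda_old)-distributed, and multiplying an exponential variable by lambda_old / lambda_new changes its rate to lambda_new.

```python
import random

def rescale_remaining(remaining, lam_old, lam_new):
    """If `remaining` ~ Exp(lam_old), the rescaled value ~ Exp(lam_new)."""
    return remaining * lam_old / lam_new

rng = random.Random(0)
lam_old, lam_new = 0.5, 2.0
rescaled = [rescale_remaining(rng.expovariate(lam_old), lam_old, lam_new)
            for _ in range(100000)]
mean = sum(rescaled) / len(rescaled)
# The sample mean should be close to 1 / lam_new.
```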

Asked By : AlexAtNet

Answered By : D.W.

As it happens, recalculating the event time is exactly the right thing to do. This follows from the fact that the exponential distribution is memoryless, or equivalently from the fact that the exponential distribution is the time until the next event in a Poisson process. I realize this might sound miraculous and unbelievable, so let me explain why, in detail.

Let's suppose that at time $t_0$ you have lambda equal to $\lambda_0$; then at time $t_1$, lambda changes to the value $\lambda_1$. In other words, at every point in the time interval $[t_0,t_1)$, the value of lambda is $\lambda_0$; at every point in the time interval $[t_1,t_2)$, the value of lambda is $\lambda_1$.

You are simulating a Poisson process with parameter lambda. During the time interval $[t_0,t_1)$, the Poisson process has parameter $\lambda_0$. During time interval $[t_1,t_2)$, the Poisson process has parameter $\lambda_1$. You want to simulate this process. In particular, you want to simulate the time of the first event.

The brute-force way to simulate this process would be to divide time up into very small increments, say of length $\epsilon$, so that time is broken up into small intervals $[t_0,t_0+\epsilon)$, $[t_0+\epsilon,t_0+2\epsilon)$, $[t_0+2\epsilon,t_0+3\epsilon)$, ..., $[t_2-\epsilon,t_2)$. In each interval, you have a value for lambda: its value is $\lambda_0$ for the first $(t_1-t_0)/\epsilon$ small intervals and $\lambda_1$ for the next $(t_2-t_1)/\epsilon$ small intervals. In each small interval, you draw from a Poisson distribution with parameter $\lambda \epsilon$ (i.e., $\lambda_0 \epsilon$ or $\lambda_1 \epsilon$, depending upon whether the small interval falls before or after $t_1$). Because $\lambda_i \epsilon$ will be a very small value, this is essentially equivalent to flipping a biased coin for each small interval; with probability $\lambda_i \epsilon$, you say that the event has happened in that small interval, and with probability $1-\lambda_i \epsilon$ nothing happens in that small interval.
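The brute-force coin-flipping scheme above can be sketched as follows (a sketch with hypothetical rates and interval boundaries; `eps` is the small-interval length $\epsilon$):

```python
import random

def brute_force_first_event(t0, t1, t2, lam0, lam1, eps, rng):
    """Simulate the first event time by flipping a biased coin in each
    small interval of length eps: heads with probability lambda * eps,
    where lambda is lam0 before t1 and lam1 after."""
    t = t0
    while t < t2:
        lam = lam0 if t < t1 else lam1
        if rng.random() < lam * eps:   # event occurs in [t, t + eps)
            return t
        t += eps
    return None                        # no event in [t0, t2)

rng = random.Random(42)
first = brute_force_first_event(0.0, 1.0, 2.0, 0.5, 2.0, 1e-3, rng)
```

Note the cost: each call performs up to `(t2 - t0) / eps` coin flips, which is exactly the inefficiency the answer goes on to fix.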

Thus, this gives you a brute-force strategy to simulate your process. This brute-force strategy will give you the right distribution of events. However, it has the disadvantage that it is computationally expensive: it requires $(t_2-t_0)/\epsilon$ coin flips.

Fortunately, there is an equivalent way to do it that yields the same distribution of events, but is much more computationally efficient. Here is the optimized strategy for how to simulate the time of the first event:

  • Randomly draw a value $d$ from the exponential distribution with parameter $\lambda_0$. If $d<t_1-t_0$, then output that the first event happens at time $t_0+d$.

  • Otherwise (if $d \ge t_1-t_0$), randomly draw a value $d'$ from the exponential distribution with parameter $\lambda_1$ (independent of $d$). Now, if $d'<t_2-t_1$, then output that the first event happens at time $t_1+d'$; otherwise, output that no event happens during $[t_0,t_2)$.
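The two steps above can be sketched in Python (a sketch; the function name and parameter values are mine, not part of the answer):

```python
import random

def first_event(t0, t1, t2, lam0, lam1, rng):
    """Optimized simulation of the first event: draw a fresh exponential,
    and re-draw when the rate changes at t1. Valid because the
    exponential distribution is memoryless."""
    d = rng.expovariate(lam0)      # candidate event time under rate lam0
    if d < t1 - t0:
        return t0 + d              # event fires before the rate change
    d2 = rng.expovariate(lam1)     # fresh, independent draw at rate lam1
    if d2 < t2 - t1:
        return t1 + d2             # event fires after the rate change
    return None                    # no event in [t0, t2)

rng = random.Random(0)
first = first_event(0.0, 1.0, 2.0, 0.5, 2.0, rng)
```

This needs at most two random draws per call, regardless of how small the brute-force method's $\epsilon$ would have to be.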

This optimized strategy lets you simulate the time of the first event. (If you want to simulate the times of all events in the interval $[t_0,t_2)$, you can simulate the time of the first event, if any, and then iterate to find the time of the next event, repeating until done.) It is efficient: it requires at most two random draws, instead of $(t_2-t_0)/\epsilon$ coin flips.

Moreover, this optimized strategy gives the correct distribution. It is equivalent to the brute-force method described above: it gives the same distribution on the time of the first event (as $\epsilon \to 0$). The equivalence follows from the fact that, if you have a Poisson process with parameter $\lambda$, then the time of the first event is exponentially distributed with parameter $\lambda$. (See Wikipedia.)

Finally, you can note that my optimized strategy is exactly equivalent to recalculating whenever lambda changes. It follows that recalculating whenever lambda changes does give you the right answer. Amazing, huh?

Don't expect this to work for all distributions. This is a special property of the exponential distribution.


Question Source : http://cs.stackexchange.com/questions/47686
