
International Journal of Sciences and Techniques of Automatic control & computer engineering IJ-STA, Volume 4, N° 2, December 2010, pp. 1268-1283.

Numerical methods for stochastic simulation of biochemical systems

Mohsen Ben Hassine (1), Radhi Mhiri (2), Lamine Mili (3)
(1) ISET de Nabeul, Computer Engineering, Tunisia, mbh851@yahoo.fr
(2) Faculté des sciences de Tunis, Electrical Engineering, Tunisia, radhi.mhiri@yahoo.fr
(3) Virginia Tech, Electrical and Computer Engineering, USA, lmili@vt.edu

Abstract: Stochastic simulation of complex biochemical networks is a topic of current interest. Despite its importance, the Gillespie algorithm requires a substantial amount of computational effort to simulate a complex system, and many algorithms are now available to control the two most important parameters in simulation: speed and accuracy. It can be shown that the stochastic modelling approach provides a conceptual bridge between stochastic chemical kinetics (the CME and SSA) and conventional deterministic chemical kinetics (the RRE).

Keywords: SSA, CLE, implicit tau, explicit tau, Euler-Maruyama discretization, stiffness.

1. Introduction

In this paper we outline the foundations of stochastic chemical kinetics, beginning with the chemical master equation (CME), then the numerical realization of the stochastic process using a Monte Carlo strategy known as the stochastic simulation algorithm (SSA), followed by the approximate accelerated algorithm called tau-leaping, in its two essential versions (explicit and implicit). Next we show that, under certain conditions, tau-leaping leads to a stochastic differential equation called the chemical Langevin equation (CLE), and that when the molecular populations of the reacting species become large (in the thermodynamic limit) the CLE can in turn sometimes be approximated by an ordinary differential equation called the reaction rate equation (RRE). Finally, we provide a detailed description of the stochastic algorithms (rarely found in the literature), and compare and discuss their efficiency on some examples.

This paper was recommended for publication in revised form by the editor staff. Edition: CPU of Tunis, Tunisia, ISSN: 1737-7749.

2. The chemical master equation

We consider a well-stirred system of molecules of N chemical species {S1, ..., SN} interacting through M chemical reaction channels R1, ..., RM. The system is assumed to be confined to a constant volume and to be in thermal equilibrium at some constant temperature. With Xi(t) denoting the number of molecules of species Si in the system at time t, we want to study the evolution of the state vector X(t) = (X1(t), ..., XN(t)), given that the system was initially in some state X(t0) = x0. Each reaction channel Rj is assumed to be "elemental" in the sense that it describes a distinct physical event which happens essentially instantaneously. Elemental reactions are either unimolecular or bimolecular; more complicated chemical reactions (including trimolecular reactions) are actually coupled sequences of two or more elemental reactions.
The system of biochemical reactions can be described as follows:

    R_j : Σ_{i=1}^{N} α_{i,j} S_i → Σ_{i=1}^{N} β_{i,j} S_i,    j = 1, ..., M

where:
- R_j = reaction channel j (j = 1, 2, ..., M);
- M = number of reaction channels;
- N = number of species;
- X_i = number of molecules of species S_i in the system;
- h_μ = number of possible combinations of reactant molecules involved in reaction μ;
- c_μ = the rate constant of reaction μ;
- a_μ = h_μ c_μ is the propensity function of reaction μ, so that a_μ dt is the probability that one R_μ reaction occurs in the time interval (t, t + dt).

Consider a discrete infinitesimal time interval (t, t + dt) in which either 0 or 1 reaction occurs. There are then only M + 1 distinct configurations at time t that can lead to the state X at time t + dt, so we can write the probability of being in state X at time t + dt as a function of all possible precursor states at time t (Markov property):

    P(X, t + dt) = P(X, t) · P(no state change over dt) + Σ_{μ=1}^{M} P(X − v_μ, t) a_μ(X − v_μ) dt

where
- P(X, t + dt) is the probability of being in state X at time t + dt;
- P(X, t) is the probability of being in state X at time t;
- P(no state change over dt) = 1 − Σ_{μ=1}^{M} a_μ(X) dt;
- P(X − v_μ, t) is the probability of being in the precursor state X − v_μ at time t;
- v_μ is the stoichiometric vector defining the effect of reaction μ on the state vector X.

If we then note that lim_{dt→0} [P(X, t + dt) − P(X, t)] / dt = ∂P(X, t)/∂t, we can write the chemical master equation that describes the stochastic dynamics of the system as (McQuarrie, 1967; Gillespie, 1992a):

(1)    ∂P(X, t)/∂t = Σ_{μ=1}^{M} [ a_μ(X − v_μ) P(X − v_μ, t) − a_μ(X) P(X, t) ]

This is the chemical master equation (CME). In principle, it completely determines the function P(x, t | x0, t0). But the CME is really a set of nearly as many coupled ordinary differential equations as there are combinations of molecules that can exist in the system! So it is not surprising that the CME can be solved analytically for only a very few very simple systems, and numerical solutions are usually prohibitively difficult. To rise above this difficulty we can think of the reaction rate equation (RRE) of traditional deterministic chemical kinetics, which can be written as

    dX_i(t)/dt = Σ_{j=1}^{M} v_{ij} a_j(X(t)),    i = 1, ..., N

and proceed in a probabilistic way, as we will see later.
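To make the notation concrete, the following minimal Python sketch (ours, not part of the paper) shows one way to encode such a system — here the predator-prey model used later in Section 7 — as a propensity function and a stoichiometric matrix whose columns are the vectors v_j. The names `propensities`, `V` and `x0` are our own and are reused in the later sketches.

    import numpy as np

    # Predator-prey (Lotka-Volterra) model of Section 7:
    #   R1: Y1 -> 2*Y1,  R2: Y1 + Y2 -> 2*Y2,  R3: Y2 -> 0,  with c = (0.1, 0.002, 5e-4).
    c = np.array([0.1, 0.002, 5e-4])

    def propensities(x):
        """Propensity functions a_j(x) = h_j * c_j for the three elemental reactions."""
        y1, y2 = x
        return np.array([c[0] * y1,          # R1: unimolecular, h_1 = Y1
                         c[1] * y1 * y2,     # R2: bimolecular,  h_2 = Y1*Y2
                         c[2] * y2])         # R3: unimolecular, h_3 = Y2

    # Stoichiometric vectors v_j as columns: the state change caused by one R_j firing.
    V = np.array([[ 1, -1,  0],              # change in Y1
                  [ 0,  1, -1]])             # change in Y2

    x0 = np.array([100.0, 50.0])             # initial state X(t0) = (Y1, Y2)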
3. The Stochastic Simulation Algorithm (SSA)

Since the chemical master equation is rarely of much use in computing the probability density function P(x, t | x0, t0) of X(t), we need another computational approach. One approach that has proven fruitful is to construct numerical realizations of X(t), i.e., simulated trajectories of X(t) versus t. This is not the same as solving the CME numerically, as that would give us the probability density function of X(t) instead of samplings of that random variable. However, much the same effect can be achieved by either making a histogram or averaging the results of many realizations. The key to generating simulated trajectories of X(t) is not the CME or even the function P(x, t | x0, t0), but rather a new function, p(τ, j | x, t) (Gillespie, 1976). It is defined so that p(τ, j | x, t) dτ is the probability, given X(t) = x, that the next reaction in the system will occur in the infinitesimal time interval [t + τ, t + τ + dτ) and will be an Rj reaction. Formally, this function is the joint probability density function of the two random variables "time to the next reaction" (τ) and "index of the next reaction" (j).

To derive an analytical expression for p(τ, j | x, t), we begin by noting that if P0(τ | x, t) is the probability, given X(t) = x, that no reaction of any kind occurs in the time interval [t, t + τ), then the laws of probability imply the relation [9]

    p(τ, j | x, t) dτ = P0(τ | x, t) · a_j(x) dτ

The laws of probability also imply

    P0(τ + dτ | x, t) = P0(τ | x, t) · [ 1 − Σ_{j'=1}^{M} a_{j'}(x) dτ ]

An algebraic rearrangement of this last equation and passage to the limit dτ → 0 results in a differential equation whose solution is easily found to be P0(τ | x, t) = exp(−a0(x) τ), where a0(x) = Σ_{j=1}^{M} a_j(x). When we insert this result into the equation for p, we get

    p(τ, j | x, t) = a_j(x) exp(−a0(x) τ)

This equation is the mathematical basis for the stochastic simulation approach. It implies that the joint density function of τ and j can be written as the product of the τ-density function, a0(x) exp(−a0(x) τ), and the j-density function, a_j(x)/a0(x). We can generate random samples from these two density functions by using the inversion method of Monte Carlo theory (Gillespie, 1992b): draw two random numbers r1 and r2 from the uniform distribution in the unit interval, and select τ and j according to

    τ = (1/a0(x)) ln(1/r1)

    j = the smallest integer satisfying Σ_{j'=1}^{j} a_{j'}(x) > r2 a0(x)

Thus we arrive at the following version of the stochastic simulation algorithm (SSA) (Gillespie, 1976; Gillespie, 1977):

1. Initialize the time t = t0 and the system's state x = x0.
2. With the system in state x at time t, evaluate all the a_j(x) and their sum a0(x).
3. Generate values for τ and j as above.
4. Effect the next reaction by replacing t ← t + τ and x ← x + v_j.
5. Record (x, t) as desired. Return to step 2, or else end the simulation.

Algorithm 1: The SSA method

    Init(Tstart, Tfinal, C, X, V, a0, T)
    WHILE ((T <= Tfinal) && (a0 > 0))
        compute(a)
        a0 ← sum(a)
        IF (a0 > 0)
            r1 ← rand(1)
            r2 ← rand(1)
            tau ← (-1/a0) * log(r1)
            j ← 1
            cpt ← a(1)
            WHILE (cpt < r2*a0)
                j ← j + 1
                cpt ← cpt + a(j)
            END
            T ← T + tau
            X ← X + V(:,j)
        END
    END
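For illustration, a direct Python transcription of Algorithm 1 might look as follows. It is a sketch rather than production code: the function name `ssa` and the trajectory-recording details are our own choices, and it assumes the `propensities`, `V` and `x0` objects from the sketch in Section 2.

    import numpy as np

    def ssa(x0, propensities, V, t_final, rng=None):
        """Direct-method SSA (Algorithm 1). Returns the sampled trajectory."""
        if rng is None:
            rng = np.random.default_rng()
        t, x = 0.0, np.array(x0, dtype=float)
        t_hist, x_hist = [t], [x.copy()]
        while t <= t_final:
            a = propensities(x)
            a0 = a.sum()
            if a0 <= 0.0:                 # no reaction can fire any more
                break
            r1, r2 = rng.random(2)
            tau = np.log(1.0 / r1) / a0   # time to the next reaction
            # smallest j with cumulative propensity exceeding r2*a0
            j = np.searchsorted(np.cumsum(a), r2 * a0, side="right")
            j = min(j, len(a) - 1)        # guard against round-off at the boundary
            t += tau
            x += V[:, j]
            t_hist.append(t)
            x_hist.append(x.copy())
        return np.array(t_hist), np.array(x_hist)

    # Example, using the predator-prey encoding of Section 2:
    # t_hist, x_hist = ssa(x0, propensities, V, t_final=10.0)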
4. The tau-leaping method

The SSA and the CME are logically equivalent to each other; yet even when the CME is completely intractable, the SSA is quite straightforward to implement. The problem with the SSA is that it is often very slow. The source of this slowness can be traced to the factor 1/a0(x) in the formula for τ: a0(x) can be very large if the population of one or more reactant species is large, and that is often the case in practice [4, 8]. One approximate accelerated simulation strategy is tau-leaping (Gillespie, 2001). It advances the system by a pre-selected time τ which encompasses more than one reaction event. In its simplest form, tau-leaping requires that τ be chosen small enough that the Leap Condition is satisfied: the expected state change induced by the leap must be sufficiently small that no propensity function changes its value by a significant amount. The main idea of this method is to fire many reactions in the same leap time τ in order to accelerate the simulation. We recall that the Poisson random variable P(a, τ) is by definition the number of events that will occur in time τ, given that a dt is the probability that an event will occur in any infinitesimal time dt, where a can be any positive constant.

Therefore, if X(t) = x, and if we choose τ small enough to satisfy the Leap Condition, so that the propensity functions stay approximately constant during the leap, then reaction Rj should fire approximately P_j(a_j(x) τ) times in [t, t + τ]. Thus, to the degree that the Leap Condition is satisfied, we can leap by a time τ simply by taking (Gillespie, 2001; Gillespie & Petzold, 2003)

    X(t + τ) = x + Σ_{j=1}^{M} v_j P_j(a_j(x) τ)

This results in a faster simulation than the SSA to the degree that the total number of reactions leapt over in the time τ is large compared to M. In order to use this simulation technique efficiently, we obviously need a way to estimate the largest value of τ that is compatible with the Leap Condition. One possible way of doing that (Gillespie & Petzold, 2003) is to estimate the largest value of τ for which no propensity function is likely to change its value during τ by more than ε a0(x), where ε (0 < ε << 1) is some pre-chosen accuracy control parameter. The main problem for tau-leaping is how to choose a leap time τ that respects the two constraints: τ must be small enough that the propensity functions stay approximately constant during the leap, and τ must be large enough that many reactions fire during the leap. Gillespie and Petzold (2003) showed that the largest value of τ that satisfies these conditions can be estimated as follows [1, 5]:

(2)
    f_{jj'}(x) = Σ_{i=1}^{N} (∂a_j(x)/∂x_i) v_{ij'}
    μ_j(x)    = Σ_{j'=1}^{M} f_{jj'}(x) a_{j'}(x)
    σ_j²(x)   = Σ_{j'=1}^{M} f_{jj'}²(x) a_{j'}(x)        j, j' = 1, ..., M

The first expression is a Jacobian-type quantity that measures how the propensity function a_j changes when one R_{j'} reaction occurs, the second is the expected rate of change of a_j during the leap, and the third is its variance. The leap time is then taken according to

(3)    τ = min_{j ∈ [1, M]} { ε a0(x) / |μ_j(x)| ,  ε² a0²(x) / σ_j²(x) }

Later, Cao, Gillespie and Petzold (2006) proposed a new τ-selection procedure [5]:

(4)    τ' = min_{i} { max{ε x_i / g_i, 1} / |μ_i(x)| ,  max{ε x_i / g_i, 1}² / σ_i²(x) }

where the minimum runs over the reactant species, g_i depends on the highest order of reaction in which species S_i appears, and μ_i(x) and σ_i²(x) are the corresponding mean and variance of the change in x_i over the leap.

Algorithm 2: The tau-leaping method (with formula (3))

    Init(Tstart, Tfinal, C, X, V, a0, T, eps)
    WHILE ((T <= Tfinal) && (a0 > 0))
        compute(a)
        a0 ← sum(a)
        IF (a0 > 0)
            jac ← compute_jacobian(aj(xi))
            fjj ← jac * V
            mu ← fjj * a
            sigcarre ← (fjj.^2) * a
            errglob ← eps * a0
            tau ← min_j( errglob / |mu(j)| , errglob^2 / sigcarre(j) )
            T ← T + tau
            K ← Generate_M_Poisson(a * tau)
            X ← X + V * K
        END
    END

Although the tau-leaping method is attractive in the sense that it accelerates the SSA, it is not as foolproof as the SSA. If one takes leaps that are too large, some species populations might be driven negative; moreover, the method is not adequate for stiff systems, as we will see next. Several strategies have been proposed to get around the negative-population problem. Tian and Burrage, and independently Chatterjee et al., proposed approximating the unbounded Poisson random numbers K_j with bounded binomial random numbers [7, 17]. But it turns out that it is usually not the unboundedness of the Poisson K_j's that produces negative populations.
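To complement Algorithm 2, here is a Python sketch of the leap update itself. The τ-selection of Eqs. (2)-(4) is left to the caller, and the simple halve-τ-and-retry guard against negative populations is our own addition for illustration, not part of the paper's algorithm.

    import numpy as np

    def explicit_tau_leap(x0, propensities, V, t_final, tau, rng=None):
        """Basic explicit tau-leaping: fire each R_j a Poisson(a_j(x)*tau) number
        of times per leap. tau is assumed to satisfy the Leap Condition."""
        if rng is None:
            rng = np.random.default_rng()
        t, x = 0.0, np.array(x0, dtype=float)
        while t < t_final:
            a = propensities(x)
            if a.sum() <= 0.0:
                break
            step = min(tau, t_final - t)
            while True:
                k = rng.poisson(a * step)        # firings of each channel in [t, t+step]
                x_new = x + V @ k
                if np.all(x_new >= 0):
                    break
                step /= 2.0                      # crude guard against negative populations
            t += step
            x = x_new
        return x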
5. The implicit tau-leaping method

Stiffness can be defined roughly as the presence of widely varying time scales in a dynamical system, the fastest of which is stable. It poses special problems for the numerical solution of both deterministic ordinary differential equations (ODEs) and stochastic differential equations (SDEs), particularly in the context of chemical kinetics. A stiff system has (at least) two time scales: a long (slow) time scale for the quasi-equilibrium phase, and a short (fast) time scale for the transient phase following a perturbation. The more different these two time scales are, the "stiffer" the system is said to be. A system of ODEs is said to be stiff if its solutions show strongly damped behaviour as a function of the initial conditions (fast evolution over a short time) [1, 11].

The restriction of the explicit Euler method to time steps τ that are on the order of the short (fast) time scale makes the method very slow for stiff systems. So it is natural to ask if there are other solution methods for which the time steps are not restricted by stability, but only by the need to resolve the solution curve. It is now widely recognized that a general way of doing this is provided by "implicit" methods (Ascher & Petzold, 1998), the simplest of which is the implicit Euler method

    X_{n+1} = X_n + τ f(T_{n+1}, X_{n+1})

in contrast to the explicit Euler formula

    X_{n+1} = X_n + τ f(T_n, X_n)

If we define g(X_{n+1}) = X_{n+1} − X_n − τ f(T_{n+1}, X_{n+1}), the most efficient way to numerically solve the nonlinear algebraic equation g(X_{n+1}) = 0 is usually the Newton iteration

    X^(k) = X^(k-1) − [g'(X^(k-1))]^(-1) g(X^(k-1))

Algorithm 3: The Newton algorithm for the nonlinear algebraic equation (NLAE)

    Init(max_iterations, nlae_tolerance, h)
    x(0) ← Xn
    FOR k = 1 TO max_iterations
        g ← x(k-1) - Xn - h * f(x(k-1))
        IF |g| < nlae_tolerance
            exit
        END
        x(k) ← x(k-1) - (1 - h * ∂f/∂x)^(-1) * g
    END
    Xn+1 ← x(k)

An implicit scheme that takes large steps (on the time scale of the slow mode) will do just fine if the fluctuations about the slow manifold are negligibly small. To obtain the implicit formula for the chemical system (Rathinam et al., 2003), we can modify the explicit method as follows:

    X(t + τ) = x + Σ_{j=1}^{M} v_j P_j(a_j(X(t)) τ)        (explicit scheme)

    X(t + τ) = x + Σ_{j=1}^{M} v_j P_j(a_j(X(t + τ)) τ)    (implicit scheme)

The right-hand side of the last formula involves the Poisson variables evaluated at time t + τ; the problem is that P_j(a_j(X(t + τ)) τ) is unknown at time t, so we must give it an approximate value. The idea is to write

    P_j(a_j(X(t + τ)) τ) = a_j(X(t + τ)) τ + [ P_j(a_j(X(t + τ)) τ) − a_j(X(t + τ)) τ ]

The right-hand side is the sum of two terms: the first is the propensity function at time t + τ multiplied by τ, and the second has zero mean. Replacing the zero-mean term by its value evaluated at time t,

    P_j(a_j(X(t + τ)) τ) ≈ a_j(X(t + τ)) τ + [ P_j(a_j(X(t)) τ) − a_j(X(t)) τ ]

we can write the implicit scheme as

(5)    X(t + τ) ≈ X(t) + Σ_{j=1}^{M} v_j a_j(X(t + τ)) τ + Σ_{j=1}^{M} v_j [ P_j(a_j(X(t)) τ) − a_j(X(t)) τ ]

or, in trapezoidal form (Cao & Petzold, 2005),

(6)    X(t + τ) = X(t) + Σ_{j=1}^{M} v_j [ P_j(a_j(X(t)) τ) − (τ/2) a_j(X(t)) + (τ/2) a_j(X(t + τ)) ]
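As a complement to Algorithm 4 below, the following Python sketch shows how a single implicit step of Eq. (5) might be carried out numerically. The function name, the use of scipy.optimize.fsolve as the nonlinear solver (in place of the Newton iteration above), and the rounding of the result back to integer populations are our own choices, not prescribed by the paper.

    import numpy as np
    from scipy.optimize import fsolve

    def implicit_tau_step(x, tau, propensities, V, rng):
        """One implicit tau-leap step, Eq. (5):
        solve  y = x + V a(y) tau + V [k - a(x) tau]  for y,
        where k ~ Poisson(a(x) tau) is drawn with the current propensities."""
        a_now = propensities(x)
        k = rng.poisson(a_now * tau)
        zero_mean = V @ (k - a_now * tau)      # the bracketed zero-mean term of Eq. (5)

        def residual(y):
            return y - x - V @ propensities(y) * tau - zero_mean

        y = fsolve(residual, x + V @ (a_now * tau))   # deterministic prediction as initial guess
        return np.round(y)                            # round back to (nearly) integer populations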
Algorithm 4: The implicit tau-leaping method (formula (5))

    Init(Tstart, Tfinal, C, X, V, a0, T, eps, N)
    WHILE ((T <= Tfinal) && (a0 > 0))
        compute(a)
        a0 ← sum(a)
        IF (a0 > 0)
            jac ← compute_jacobian(aj(xi))
            fjj ← jac * V
            mu ← fjj * a
            sigcarre ← (fjj.^2) * a
            errglob ← eps * a0
            tau ← min_j( errglob / |mu(j)| , errglob^2 / sigcarre(j) )
            T ← T + tau
            K ← Generate_M_Poisson(a * tau)
            cst ← 0
            FOR j = 1 TO M
                cst ← cst + (K(j) - a(j)*tau) * V(:,j)
            END
            v0 ← X
            compute(fp, f0)
            i ← Newton(max_iterations, nlae_tolerance, tau, v0, fp, f0)
            X ← i
        END
    END

6. The chemical Langevin equation and the reaction rate equation

If the molecular populations of the reacting species are large, we can go further in the approximation of our equations, as follows. The term a_j(x) τ becomes so large that a_j(x) τ >> 1; the physical significance of this condition is that each reaction channel is expected to fire many more times than once in the next τ time units. The approximation stems from the purely mathematical fact that the Poisson random variable P(a, τ), which has mean and variance aτ, can be well approximated when aτ >> 1 by a normal random variable with the same mean and variance. Denoting the normal random variable with mean m and variance σ² by N(m, σ²), and using N(m, σ²) = m + σ N(0, 1), we can deduce that (Gillespie, 2000; Gillespie, 2002) [10]:

(7)    P_j(a_j(x) τ) ≈ N_j(a_j(x) τ, a_j(x) τ) = a_j(x) τ + (a_j(x) τ)^(1/2) N_j(0, 1)

Thus

    X(t + τ) ≈ X(t) + Σ_{j=1}^{M} v_j a_j(x) τ + Σ_{j=1}^{M} v_j (a_j(x))^(1/2) N_j(0, 1) √τ

This equation is called the Langevin leaping formula. It expresses the state increment X(t + τ) − x as the sum of two terms: a deterministic drift term proportional to τ, and a fluctuating diffusion term proportional to √τ. It is important to keep in mind that this equation is an approximation, valid only to the extent that τ is small enough that no propensity function changes its value significantly during τ, yet large enough that every reaction fires many more times than once during τ. The formula also underscores the fact that X(t) is now a continuous (real-valued) random variable instead of a discrete (integer-valued) random variable; we lost discreteness when we replaced the integer-valued Poisson random variable with a real-valued normal random variable. The Langevin leaping formula gives faster simulations than the tau-leaping formula, not only because the condition a_j(x) τ >> 1 implies that very many reactions get leapt over at each step, but also because the required normal random numbers can be generated much more easily than Poisson random numbers (Press et al., 1986). If we subtract x from both sides and then divide through by τ (regarded as a macroscopic infinitesimal dt), the result can be shown to be the following (approximate) stochastic differential equation, called the chemical Langevin equation (CLE) (Gillespie, 2000; Gillespie, 2002):

(8)    dX(t)/dt ≈ Σ_{j=1}^{M} v_j a_j(X(t)) + Σ_{j=1}^{M} v_j (a_j(X(t)))^(1/2) Γ_j(t)

The Γ_j(t) here are statistically independent "Gaussian white noise" processes.
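The normal approximation of Eq. (7) is easy to check numerically. The short snippet below (ours, for illustration only) compares the sample mean and variance of Poisson and Gaussian draws for a value of a_j(x)τ well above 1; the two distributions agree closely in this regime.

    import numpy as np

    rng = np.random.default_rng(0)
    a_tau = 500.0                                 # a_j(x)*tau >> 1, as required by Eq. (7)
    p = rng.poisson(a_tau, size=100_000)
    n = a_tau + np.sqrt(a_tau) * rng.standard_normal(100_000)
    print(p.mean(), p.var())                      # both close to 500
    print(n.mean(), n.var())                      # same mean and variance, per Eq. (7)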
Molecular systems become "macroscopic" in what physicists and chemists call the thermodynamic limit, which is formally defined as follows: the system volume and the species populations X_i all approach infinity, but in such a way that the species concentrations (populations divided by the volume) all remain constant. To discern the implications of these formulas in the thermodynamic limit, we need to know the behaviour of the propensity functions in that limit. It turns out that all propensity functions grow linearly with the system size as the thermodynamic limit is approached. It follows that the deterministic drift term Σ_j v_j a_j grows like the size of the system, while the fluctuating diffusion term grows like the square root of the size of the system, and likewise for the CLE. This establishes the well-known rule of thumb in chemical kinetics that relative fluctuation effects in chemical systems typically scale as the inverse square root of the size of the system. In the full thermodynamic limit, the second term on the right-hand side will usually be negligibly small compared to the drift term, in which case the CLE reduces to the RRE (drift term only). Thus we have derived the RRE through a series of limiting approximations to the stochastic theory that underlies the CME and the SSA:

(9)    RRE:    dX_i(t)/dt = Σ_{j=1}^{M} v_{ij} a_j(X(t)),    i = 1, ..., N

Now, returning to the Langevin leaping formula above, readers familiar with numerical methods for stochastic differential equations (SDEs) will recognize it as an Euler-Maruyama discretization of the continuous-time problem

    dX(t) = Σ_{j=1}^{M} v_j a_j(X(t)) dt + Σ_{j=1}^{M} v_j (a_j(X(t)))^(1/2) dW_j(t)

where the W_j(t) are independent scalar Brownian motions. The next code presents how we can simulate a Brownian motion.

Algorithm 5: Brownian motion

    Init(Tfinal, step, W, dW)
    N ← Tfinal / step
    FOR j = 1 TO N
        alea ← randn
        dW(j) ← sqrt(step) * alea
        W(j) ← W(j-1) + dW(j)
    END

The next code presents how we can implement the CLE with an Euler-Maruyama discretization.

Algorithm 6: The chemical Langevin equation

    Init(Tstart, Tfinal, C, X, SM, a0, T, eps, N, tau, M, som)
    WHILE ((T <= Tfinal) && (a0 > 0))
        compute(a)
        a0 ← sum(a)
        som ← 0
        FOR j = 1 TO M
            Ad(j) ← (tau*a(j) + sqrt(abs(tau*a(j))) * randn) * SM(:,j)
            som ← som + Ad(j)
        END
        X ← X + som
        T ← T + tau
    END

7. Application and results

In this section we discuss how to choose the best method for a given system. As we have seen, there are two criteria for the choice: the population sizes and the stiffness of the system. The populations can be all large, all small, or "mixed" (some species with large populations and others with small ones), and the system can be stiff or non-stiff. Let the variable Nat take the two values 'S' (stiff) and 'NS' (non-stiff), Nat = {'S', 'NS'}, and let the variable Pop take the three values 'Allbig', 'Allsmall' and 'Mixt', Pop = {'Allbig', 'Allsmall', 'Mixt'}. The next table presents a choice of method depending on these variables (not always true!):

    Nat \ Pop   Allbig              Allsmall   Mixt
    S           RRE (or Implicit)   SSA        Implicit tau
    NS          RRE (or CLE)        SSA        Explicit tau

We can add that even the explicit or implicit method can internally call SSA techniques or adaptive tau selection (e.g., interlaced implicit → downshifting) for the choice of τ, in the limit where some species population becomes so small that the tau-leaping formula cannot give a good choice. In general there is no way to predict the nature of the system in advance: if the system presents dynamical behaviour such that it is stiff in one time period but non-stiff in another, we cannot resolve the system efficiently with a single method; we must use an adaptive simulation strategy that automatically switches between the explicit and implicit tau-leaping methods, and even mixes the three methods for some stiff systems.
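As a purely illustrative reading of the table above (with the same caveat that it is not always true), such a rule of thumb could be written as a small dispatch function; the function name and string values below are our own.

    def suggest_method(stiff: bool, populations: str) -> str:
        """Rule-of-thumb method choice from the table above (not always true!).
        populations: 'all_big', 'all_small' or 'mixt'."""
        if populations == 'all_small':
            return 'SSA'
        if populations == 'all_big':
            return 'RRE (or implicit tau)' if stiff else 'RRE (or CLE)'
        if populations == 'mixt':
            return 'implicit tau' if stiff else 'explicit tau'
        raise ValueError("populations must be 'all_big', 'all_small' or 'mixt'")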
To see the effect of stiffness in simulation, we consider the decaying-dimerizing reactions. The system is as follows [11]:

    S1 → ∅
    S1 + S1 → S2
    S2 → S1 + S1
    S2 → S3

With parameter vector C = (1, 10, 1000, 0.1), initial conditions X0 = (400, 798, 0) and final time 0.2 s, the curves obtained are shown in Figures 1-3.

[Figure 1: SSA. Figure 2: Explicit tau (damped fluctuations). Figure 3: Implicit tau.]

Another example is the Lotka-Volterra (predator-prey) model:

    Y1 → 2 Y1
    Y1 + Y2 → 2 Y2
    Y2 → ∅

With parameter vector C = (0.1, 0.002, 5e-4), initial conditions X0 = (100, 50) and final time 10 s, the curves obtained are shown in Figures 4-6.

[Figure 4: SSA. Figure 5: Explicit tau. Figure 6: Implicit tau.]

The running time of each method is given in the next table (computer: Pentium 4 at 2.1 GHz, system: Windows XP, software: MathWorks SimBiology); times are in seconds.

    Method / system       SSA     Explicit tau   Implicit tau
    Decaying-dimerizing   93.49   0.654          3.061
    Predator-prey         0.16    0.013          0.04
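A comparison of this kind can be reproduced, at least qualitatively, with the earlier Python sketches. The snippet below (ours) encodes the decaying-dimerizing model and times the `ssa` sketch from Section 3 on it. The combinatorial form of the dimerization propensity, c2·x1·(x1−1)/2, follows from the definition of h_μ in Section 2 but is our reading, and absolute timings will of course differ from the table above.

    import time
    import numpy as np

    # Decaying-dimerizing model: C = (1, 10, 1000, 0.1), X0 = (400, 798, 0), t_final = 0.2 s.
    c = np.array([1.0, 10.0, 1000.0, 0.1])
    V_dd = np.array([[-1, -2,  2,  0],       # S1
                     [ 0,  1, -1, -1],       # S2
                     [ 0,  0,  0,  1]])      # S3

    def propensities_dd(x):
        s1, s2, s3 = x
        return np.array([c[0] * s1,
                         c[1] * s1 * (s1 - 1) / 2.0,   # S1 + S1 -> S2
                         c[2] * s2,
                         c[3] * s2])

    x0_dd = np.array([400.0, 798.0, 0.0])

    start = time.perf_counter()
    t_hist, x_hist = ssa(x0_dd, propensities_dd, V_dd, t_final=0.2)   # SSA sketch from Section 3
    print(f"SSA: {time.perf_counter() - start:.2f} s, {len(t_hist)} reaction events")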
Through the applications that we have carried out on several systems, we can draw the following conclusions:

- The SSA is still the most straightforward algorithm and its stability is guaranteed; nevertheless, its slowness for systems with large populations makes it impractical in that regime.
- The efficiency of the explicit and implicit tau-leaping methods depends on the efficiency of the tau-selection formula.
- An efficient generic algorithm, able to reconcile speed and accuracy, remains a topic for future research.

8. Conclusion

In this paper we have presented some numerical methods (based on the work of Gillespie and co-workers) for the stochastic simulation of biochemical systems, from the chemical master equation to the RRE. We have shown that there is a logical relation between them, and that each chemical system has its own most suitable simulation method. There is still no generic algorithm that can solve all kinds of systems (see the slow-scale algorithms and the recent weighted stochastic simulation algorithm, WSSA [16, 18]), especially stiff systems (which cover most biochemical systems); this remains the challenge for future research [1].

For further information, refer to our web site: http://automatictn.wikispaces.com (biosystèmes section).

References

[1] Cao, Y., D. T. Gillespie and L. Petzold. The adaptive explicit-implicit tau-leaping method with automatic tau selection. Draft copy.
[2] Cao, Y., H. Li and L. Petzold. Efficient formulation of the stochastic simulation algorithm for chemically reacting systems. J. Chem. Phys., 121:4059-4067, 2004.
[3] Cao, Y., L. Petzold, M. Rathinam and D. Gillespie. The numerical stability of leaping methods for stochastic simulation of chemically reacting systems. J. Chem. Phys., 121:12169-12178, 2004.
[4] Rathinam, M., L. Petzold, Y. Cao and D. Gillespie. Consistency and stability of tau-leaping schemes for chemical reaction systems. SIAM Multiscale Modeling & Simulation, 4:867-895, 2005.
[5] Cao, Y., D. Gillespie and L. Petzold. Efficient step size selection for the tau-leaping method. J. Chem. Phys., 124:044109, 2006.
[6] Cao, Y., D. Gillespie and L. Petzold. The slow-scale stochastic simulation algorithm. J. Chem. Phys., 122:014116, 2005.
[7] Tian, T. and K. Burrage. Binomial leap methods for simulating stochastic chemical kinetics. J. Chem. Phys., 121:10356-10364, 2004.
[8] Cao, Y., D. Gillespie and L. Petzold. Avoiding negative populations in explicit tau leaping. J. Chem. Phys., 123:054104, 2005.
[9] Gillespie, D. T. (1977). Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem., 81(25):2340-2361.
[10] Gillespie, D. T. (2000). The chemical Langevin equation. J. Chem. Phys., 113:297-306.
[11] Rathinam, M., Y. Cao, L. Petzold and D. Gillespie (2003). Stiffness in stochastic chemically reacting systems: the implicit tau-leaping method. J. Chem. Phys., 119:12784-12794.
[12] McQuarrie, D. A. (1967). Stochastic approach to chemical kinetics. J. Appl. Prob., 4:413-478.
[13] McAdams, H. H. and A. Arkin (1997). Stochastic mechanisms in gene expression. Proc. Natl. Acad. Sci. USA, 94:814-819.
[14] Gibson, M. A. and J. Bruck (2000). Exact stochastic simulation of chemical systems with many species and many channels. J. Phys. Chem., 105:1876-1889.
[15] Cai, X. and X. Wang. Stochastic modeling and simulation of gene networks: a review of the state-of-the-art research on stochastic simulations. IEEE Signal Processing Magazine, January 2007.
[16] El Samad, H. and M. Khammash. Stochastic modelling of gene regulatory networks. International Journal of Robust and Nonlinear Control, 15:691-711, 2005.
[17] Chatterjee, A., D. G. Vlachos and M. A. Katsoulakis. Binomial distribution based tau-leap accelerated stochastic simulation. J. Chem. Phys., 122:024112, 2005.
[18] Gillespie, D. T., M. Roh and L. R. Petzold (2008). Refining the weighted stochastic simulation algorithm. J. Chem. Phys., 129:165101.