In [ ]:
from IPython.core.display import HTML

def _set_css_style(css_file_path):
    """
    Read the custom CSS file and load it into Jupyter.
    Pass the file path to the CSS file.
    """
    with open(css_file_path, "r") as f:
        styles = f.read()
    return HTML('<style>%s</style>' % styles)

_set_css_style('rise.css')

Differential equation modeling¶

  • Differential equations in biology
  • Euler's method for solving differential equations
  • Detecting numerical errors
  • Simple chemical reaction examples

ODEs for biology¶

  • Biochemical reaction rates often depend on current state → time derivatives
  • Examples: degradation ($A \xrightarrow{k} \varnothing$), reversible binding ($A \rightleftharpoons B$), autocatalysis, basic population models
  • More complex: gene regulatory networks, stochastic differential equations describing motion of cells/animals, ...

Example: irreversible decay: $A \xrightarrow{k} \varnothing$¶

Model: $$ \frac{dA}{dt} = -k\,A, \qquad A(0)=A_0.$$

Appears, for example, as a very simple pharmacokinetic model.

Analytical solution: $$ A(t) = A_0\,e^{-kt}. $$

We'll examine both exact and numerical solutions.

Generating a range of equally spaced values in numpy¶

np.arange(start, stop, step) is analogous to range (the stop value is excluded)

In [ ]:
import numpy as np
np.arange(5)
In [ ]:
np.arange(1, 5)
In [ ]:
np.arange(1, 5, 0.5)

Similar functions: np.linspace and np.logspace¶

In [ ]:
# usage: numpy.linspace(start, stop, num=50, endpoint=True)
print(np.linspace(0, 5, 10))  # num controls **number** of points, not the step!
In [ ]:
print(np.linspace(0, 5, 10, endpoint=False))
In [ ]:
# usage: numpy.logspace(start, stop, num=50, endpoint=True, base=10.0)
print(np.logspace(0, 2, 10))  # start and stop specify the **exponent**
In [ ]:
print(np.logspace(0, 2, 10, base=2))

Modeling exponential decay¶

$$\frac{dA}{dt} = -k\,A, \qquad A(0)=A_0.$$

has the solution $A(t) = A_0\,e^{-kt}.$

In [ ]:
# Parameters (with units in names for clarity)
A0 = 1.0        # initial concentration [arbitrary units]
k_per_s = 0.8   # rate constant [1/s]
t_end_s = 6.0   # total simulation time [s]
dt_s = 0.1      # simulation time step size [s]
In [ ]:
# Analytical solution
t_s = np.arange(0, t_end_s, dt_s)
A_exact = A0 * np.exp(-k_per_s * t_s)
In [ ]:
import matplotlib.pyplot as plt

plt.plot(t_s, A_exact, linestyle='--', label='Exact')
plt.xlabel('time [s]')
plt.ylabel('A [a.u.]')
plt.legend();

Euler's method¶

For an ODE $\,\dfrac{d\mathbf{x}}{dt} = \mathbf{f}(\mathbf{x}, t)\,$ and small step $\Delta t$:

$$ \mathbf{x}(t+\Delta t) \;\approx\; \mathbf{x}(t) + \Delta t \cdot \mathbf{f}(\mathbf{x}(t), t)\,. $$

This is a first-order method: the global error scales linearly with $\Delta t$, so when you halve $\Delta t$ the error should roughly halve.
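
A minimal sketch of checking this claim with the decay model (the end time and the step sizes below are arbitrary illustrative choices): integrate to a fixed end time with several step sizes and compare against the exact value.

In [ ]:
# Sketch: first-order convergence check for dA/dt = -k*A,
# reusing A0 and k_per_s from the cells above; t_end_check is an arbitrary choice
t_end_check = 4.0
A_exact_end = A0 * np.exp(-k_per_s * t_end_check)

for dt in [0.2, 0.1, 0.05, 0.025]:
    A = A0
    for _ in range(int(round(t_end_check / dt))):
        A += dt * (-k_per_s * A)  # one Euler step
    print('dt = %.3f, error = %.5f' % (dt, abs(A - A_exact_end)))
# each halving of dt should roughly halve the error at t_end_check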

Integrating ODEs "by hand" as an exercise¶

  • Many packages in Python, but some functions you'll still need to code yourself
  • Connection between underlying model and implementation in code
  • Critically assessing results is important even when using built-in functions or packages

Implementing Euler's method¶

Given current value x(t), the derivative f(x(t), t), and time step dt:

x(t+dt) = x(t) + (dt * f(x(t), t))

Start with an initial value x(0) = x0 and step forward in time.

In [ ]:
A_euler = [A0]
for t in t_s[1:]: # the first time is 0, so skip this one
    temp_A = A_euler[-1] + (dt_s * -k_per_s * A_euler[-1])
    A_euler.append(temp_A)
In [ ]:
plt.plot(t_s, A_exact, linestyle='--', label='Exact')
plt.plot(t_s, A_euler, label='Euler')
plt.xlabel('time [s]')
plt.ylabel('A [a.u.]')
plt.legend();

An unwise choice¶

In [ ]:
dt_big = 1.95 # that's a large time step
t_big = np.arange(0, t_end_s, dt_big)

A_big = [A0]
for t in t_big[1:]: 
    A_big.append(A_big[-1] + (dt_big * -k_per_s * A_big[-1]))

What's wrong with this picture?¶

In [ ]:
plt.plot(t_s, A_exact, linestyle='--', label='Exact')
plt.plot(t_big, A_big, label='Big time step')
plt.xlabel('time [s]')
plt.ylabel('A [a.u.]')
plt.legend();
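
One way to see what goes wrong (a short sketch using the k_per_s defined above): each Euler step for this model multiplies A by the constant factor $(1 - k\,\Delta t)$, so once $\Delta t > 1/k$ the factor is negative and A flips sign at every step.

In [ ]:
# Sketch: the per-step multiplication factor for dA/dt = -k*A
# (the list of step sizes is illustrative)
for dt in [0.1, 1.0, 1.95, 3.0]:
    factor = 1 - k_per_s * dt
    print('dt = %.2f, per-step factor = %.2f' % (dt, factor))
# a negative factor makes A oscillate in sign; |factor| > 1 makes it grow without bound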

How do we assess the output?¶

  • Compare against a known solution
  • What if you don't know the solution?
    • Consider a simpler test case
    • Think about basic intuition (e.g., A should decrease smoothly and rapidly but never become negative)
    • Consider limits (what should happen after a long time? what if a parameter is large/small?)
  • Implement simple checks and tests (a minimal sketch follows below; more next time)
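
A minimal sketch of such checks for the decay trajectories computed above (the helper name and the specific checks are illustrative choices):

In [ ]:
# Sketch: basic sanity checks for a pure-decay trajectory
def check_decay_trajectory(A_vals, label):
    """Warn if a trajectory violates basic expectations for pure decay."""
    A_arr = np.asarray(A_vals)
    if np.any(A_arr < 0):
        print('%s: warning, concentration became negative' % label)
    if np.any(np.diff(A_arr) > 0):
        print('%s: warning, concentration increased at some step' % label)

check_decay_trajectory(A_euler, 'dt = 0.1')   # Euler trajectory from above
check_decay_trajectory(A_big, 'dt = 1.95')    # large-time-step trajectory from above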

Testing a range of dt values with nested loops¶

In [ ]:
dt_range = [0.08, 0.16, 0.32, 0.64, 1.28]
t_range = []
A_range = []

for dt in dt_range:  # Loop through dt values and do Euler's method
    t_vals = np.arange(0, t_end_s, dt)
    A_vals = [A0]
    for t in t_vals[1:]:  # A loop within a loop
        A_vals.append(A_vals[-1] + (dt * -k_per_s * A_vals[-1]))

    t_range.append(t_vals)  # Save time and conc. vals for plotting
    A_range.append(A_vals)
In [ ]:
plt.plot(t_s, A_exact, linestyle='--', label='Exact')
for i in range(len(dt_range)):
    plt.plot(t_range[i], A_range[i], label='dt = %.2f' % dt_range[i])
plt.xlabel('time [s]')
plt.ylabel('A [a.u.]')
plt.legend();

Exponential decay on the log scale¶

In [ ]:
plt.plot(t_s, A_exact, linestyle='--', label='Exact')
for i in range(len(dt_range)-1):  # omitting the largest dt, where A becomes negative
    plt.plot(t_range[i], A_range[i], label='dt = %.2f' % dt_range[i])
plt.xlabel('time [s]')
plt.ylabel('A [a.u.]')
plt.yscale('log')
plt.legend();

Caution: nested loops can be slow!¶

No problem for short loops, but the number of calculations scales multiplicatively with each added level of nesting

Often better to apply a different algorithm or to vectorize the code so that operations are applied to whole arrays at once (see the sketch below)
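
For this particular model the whole Euler trajectory can be vectorized, because every step multiplies A by the same constant factor; a sketch (this shortcut applies only to linear updates like this one):

In [ ]:
# Sketch: vectorized Euler trajectory for dA/dt = -k*A
# each step multiplies by (1 - k*dt), so step n is just that factor to the power n
dt = 0.16  # one of the step sizes tried above
t_vec = np.arange(0, t_end_s, dt)
A_vec = A0 * (1 - k_per_s * dt) ** np.arange(len(t_vec))
print(A_vec[:5])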

Activity¶

  • Decay half-life. For $k=0.8\,\mathrm{s^{-1}}$, verify the half-life $t_{1/2}=\ln 2/k$ using Euler with successively smaller dt. How small must dt be for <1% error?
  • Error estimation. Compute the average squared error between the true (analytical) and approximate solutions for irreversible decay. How does this error change as you change dt?
  • Reversible reaction. Consider the reversible reaction $A \;\rightleftharpoons\; B$, described mathematically by $$\frac{dA}{dt} = -k_1 A + k_2 B,\qquad \frac{dB}{dt} = \phantom{-}k_1 A - k_2 B.$$
    • Numerically solve for the concentrations of A and B using $k_1 = 0.6\,\mathrm{s^{-1}}$ and $k_2 = 0.2\,\mathrm{s^{-1}}$, with $A_0 = 1$ and $B_0 = 0$.
    • Plot the concentrations of A and B over time. How do you expect these to behave, and does your solution match with your expectations?
    • What is the exact solution for the concentrations over time? Plot the exact solution against the numerical one.
  • Error vs dt. Repeat your error estimation analysis for the reversible reaction.
In [ ]:
# half-life estimation

import numpy as np

A0 = 1
k_per_s = 0.8
t_end_s = 6
dt_vals = np.logspace(-3, 0, 10)[::-1]

t_half = np.log(2)/k_per_s

print('t_half (exact) = %.3f\n' % t_half)

print('dt\tt_half (approx)')
for dt in dt_vals:
    A = A0
    t = 0
    while A>A0/2:
        A -= dt * k_per_s * A
        t += dt
    print('%.3f\t%.3f' % (dt, t))
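
A sketch of the error-estimation activity for irreversible decay (the set of dt values is an arbitrary choice): for each dt, compute the mean squared error between the Euler trajectory and the analytical solution.

In [ ]:
# Sketch: mean squared error of Euler vs. the analytical solution, for several dt
for dt in [0.02, 0.04, 0.08, 0.16, 0.32]:
    t_vals = np.arange(0, t_end_s, dt)
    A_approx = [A0]
    for t in t_vals[1:]:
        A_approx.append(A_approx[-1] + dt * -k_per_s * A_approx[-1])
    A_true = A0 * np.exp(-k_per_s * t_vals)
    mse = np.mean((np.array(A_approx) - A_true) ** 2)
    print('dt = %.2f, MSE = %.2e' % (dt, mse))
# smaller dt should give smaller error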
In [ ]:
# reversible reaction

import numpy as np
import matplotlib.pyplot as plt

A0 = 1
B0 = 0
k1_per_s = 0.6
k2_per_s = 0.2
t_end_s = 10
dt_s = 0.01

# express derivative as matrix * vector for simplicity
k_matrix = np.array([[-k1_per_s, k2_per_s], 
                     [k1_per_s, -k2_per_s]])
def dxdt(x):
    return np.dot(k_matrix, x) 

t_vals = np.arange(0, t_end_s, dt_s)
x = np.array([A0, B0])
x_vals = [x]

for t in t_vals[1:]:
    x_vals.append(x_vals[-1] + (dt_s * dxdt(x_vals[-1])))

x_vals = np.array(x_vals)

plt.plot(t_vals, x_vals[:, 0], label='A')
plt.plot(t_vals, x_vals[:, 1], label='B')
plt.legend()
plt.xlabel('Time [s]')
plt.ylabel('Concentration [a.u.]')
plt.show();
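
For the last part of the reversible-reaction activity, an exact solution follows from conservation of the total amount: substituting $B = C - A$ with $C = A_0 + B_0$ gives $\frac{dA}{dt} = -(k_1+k_2)A + k_2 C$, which relaxes exponentially to the equilibrium value $A_\mathrm{eq} = \dfrac{k_2 C}{k_1+k_2}$: $$A(t) = A_\mathrm{eq} + (A_0 - A_\mathrm{eq})\,e^{-(k_1+k_2)t}, \qquad B(t) = C - A(t).$$ A sketch of plotting this against the Euler result:

In [ ]:
# Sketch: exact solution of the reversible reaction via conservation A + B = A0 + B0
C_total = A0 + B0
A_eq = k2_per_s * C_total / (k1_per_s + k2_per_s)  # equilibrium concentration of A
A_true = A_eq + (A0 - A_eq) * np.exp(-(k1_per_s + k2_per_s) * t_vals)
B_true = C_total - A_true

plt.plot(t_vals, x_vals[:, 0], label='A (Euler)')
plt.plot(t_vals, x_vals[:, 1], label='B (Euler)')
plt.plot(t_vals, A_true, linestyle='--', label='A (exact)')
plt.plot(t_vals, B_true, linestyle='--', label='B (exact)')
plt.xlabel('Time [s]')
plt.ylabel('Concentration [a.u.]')
plt.legend();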