The individual approach

$ \DeclareMathOperator{\argmin}{arg\,min} \DeclareMathOperator{\argmax}{arg\,max} \newcommand{\psis}{\psi{^\star}} \newcommand{\phis}{\phi{^\star}} \newcommand{\hpsi}{\hat{\psi}} \newcommand{\hphi}{\hat{\phi}} \newcommand{\teps}{\varepsilon} \newcommand{\limite}[2]{\mathop{\longrightarrow}\limits_{\mathrm{#1}}^{\mathrm{#2}}} \newcommand{\DDt}[1]{\partial^2_\theta #1} \def\aref{a^\star} \def\kref{k^\star} \def\model{M} \def\hmodel{m} \def\mmodel{\mu} \def\imodel{H} \def\Imax{\text{\it Imax}} \def\id{ {\rm Id}} \def\teta{\tilde{\eta}} \newcommand{\eqdef}{\mathop{=}\limits^{\mathrm{def}}} \newcommand{\deriv}[1]{\frac{d}{dt}#1(t)} \newcommand{\pred}[1]{\tilde{#1}} \def\phis{\phi{^\star}} \def\hphi{\tilde{\phi}} \def\hw{\tilde{w}} \def\hpsi{\tilde{\psi}} \def\hatpsi{\hat{\psi}} \def\hatphi{\hat{\phi}} \def\psis{\psi{^\star}} \def\transy{u} \def\psipop{\psi_{\rm pop}} \newcommand{\psigr}[1]{\hat{\bpsi}_{#1}} \newcommand{\Vgr}[1]{\hat{V}_{#1}} \def\psig{\psi} \def\psigprime{\psig^{\prime}} \def\psigiprime{\psig_i^{\prime}} \def\psigk{\psig^{(k)}} \def\psigki{ {\psig_i^{(k)}}} \def\psigkun{\psig^{(k+1)}} \def\psigkuni{\psig_i^{(k+1)}} \def\psigi{ {\psig_i}} \def\psigil{ {\psig_{i,\ell}}} \def\phig{ {\phi}} \def\phigi{ {\phig_i}} \def\phigil{ {\phig_{i,\ell}}} \def\etagi{ {\eta_i}} \def\IIV{ {\Omega}} \def\thetag{ {\theta}} \def\thetagk{ {\theta_k}} \def\thetagkun{ {\theta_{k+1}}} \def\thetagkunm{\theta_{k-1}} \def\sgk{s_{k}} \def\sgkun{s_{k+1}} \def\yg{y} \def\xg{x} \def\neta{ {n_\eta}} \def\ncov{M} \def\npsi{n_\psig} \def\bu{\boldsymbol{u}} \def\bt{\boldsymbol{t}} \def\bT{\boldsymbol{T}} \def\by{\boldsymbol{y}} \def\bx{\boldsymbol{x}} \def\bc{\boldsymbol{c}} \def\bw{\boldsymbol{w}} \def\bz{\boldsymbol{z}} \def\bpsi{\boldsymbol{\psi}} \def\bbeta{\beta} \def\beeta{\eta} \def\logit{\rm logit} \def\transy{u} \def\so{O} \def\one{\mathbb 1} \newcommand{\prob}[1]{ \mathbb{P}\!\left(#1\right)} \newcommand{\probs}[2]{ \mathbb{P}_{#1}\!\left(#2\right)} \newcommand{\esp}[1]{\mathbb{E}\left(#1\right)} \newcommand{\esps}[2]{\mathbb{E}_{#1}\left(#2\right)} \newcommand{\var}[1]{\mbox{Var}\left(#1\right)} \newcommand{\vars}[2]{\mbox{Var}_{#1}\left(#2\right)} \newcommand{\std}[1]{\mbox{sd}\left(#1\right)} \newcommand{\stds}[2]{\mbox{sd}_{#1}\left(#2\right)} \newcommand{\corr}[1]{\mbox{Corr}\left(#1\right)} \def\pmacro{\mathbf{p}} \def\py{\pmacro} \def\pt{\pmacro} \def\pc{\pmacro} \def\pu{\pmacro} \def\pyi{\pmacro} \def\pyj{\pmacro} \def\ppsi{\pmacro} \def\ppsii{\pmacro} \def\pcpsith{\pmacro} \def\pth{\pmacro} \def\pypsi{\pmacro} \def\pcypsi{\pmacro} \def\ppsic{\pmacro} \def\pcpsic{\pmacro} \def\pypsic{\pmacro} \def\pypsit{\pmacro} \def\pcypsit{\pmacro} \def\pypsiu{\pmacro} \def\pcypsiu{\pmacro} \def\pypsith{\pmacro} \def\pypsithcut{\pmacro} \def\pypsithc{\pmacro} \def\pcypsiut{\pmacro} \def\pcpsithc{\pmacro} \def\pcthy{\pmacro} \def\pyth{\pmacro} \def\pcpsiy{\pmacro} \def\pz{\pmacro} \def\pw{\pmacro} \def\pcwz{\pmacro} \def\pw{\pmacro} \def\pcyipsii{\pmacro} \def\pyipsii{\pmacro} \def\pypsiij{\pmacro} \def\pyipsi1{\pmacro} \def\ptypsiij{\pmacro} \def\pcyzipsii{\pmacro} \def\pczipsii{\pmacro} \def\pcyizpsii{\pmacro} \def\pcyijzpsii{\pmacro} \def\pcyi1zpsii{\pmacro} \def\pcypsiz{\pmacro} \def\pccypsiz{\pmacro} \def\pypsiz{\pmacro} \def\pcpsiz{\pmacro} \def\peps{\pmacro} $

Overview

Before we start looking at modeling a whole population at the same time, we are going to consider only one individual from that population. Much of the basic methodology for modeling one individual carries over to population modeling. We will see that when stepping up from one individual to a population, the difference is that some parameters shared by individuals are considered to be drawn from a probability distribution.

Let us begin with a simple example. An individual receives 100 mg of a drug at time $t=0$. At that time and then every hour for fifteen hours, the concentration of a marker in the bloodstream is measured and plotted against time:

[[File:individual1.png]]

We aim to find a mathematical model to describe what we see in the figure. The eventual goal is then to extend this approach to the simultaneous modeling of a whole population.



Model and methods for the individual approach


Defining a model

In our example, the concentration is a continuous variable, so we will try to use continuous functions to model it. Different types of data (e.g., count data, categorical data, time-to-event data, etc.) require different types of models. All of these data types will be considered in due time, but for now let us concentrate on a continuous data model.

A model for continuous data can be represented mathematically as follows:

\( y_{j} = f(t_j ; \psi) + e_j, \quad \quad 1\leq j \leq n, \)

where:


  • $f$ is called the structural model. It corresponds to the basic type of curve we suspect the data is following, e.g., linear, logarithmic, exponential, etc. Sometimes, a model of the associated biological processes leads to equations that define the curve's shape.
  • $(t_1,t_2,\ldots , t_n)$ is the vector of observation times. Here, $t_1 = 0$ hours and $t_n = t_{16} = 15$ hours.
  • $\psi=(\psi_1, \psi_2, \ldots, \psi_d)$ is a vector of $d$ parameters that influences the value of $f$.
  • $(e_1, e_2, \ldots, e_n)$ are called the residual errors. Usually, we suppose that they come from some centered probability distribution: $\esp{e_j} =0$. Notice in the figure that even though the concentration is generally decreasing with time, sometimes it appears to increase (e.g., from $t= 7$ to $t=8$). This may be due to measurement errors rather than a true concentration increase, and is one reason we include the possibility of residual errors.


In fact, we usually state a continuous data model in a slightly more flexible way:

\( y_{j} = f(t_j ; \psi) + g(t_j ; \psi)\teps_j , \quad \quad 1\leq j \leq n, \)
(1.1)

where now:


  • $g$ is called the residual error model. It may be a function of the time $t_j$ and parameters $\psi$.
  • $(\teps_1, \teps_2, \ldots, \teps_n)$ are the normalized residual errors. We suppose that these come from a probability distribution which is centered and has unit variance: $\esp{\teps_j} = 0$ and $\var{\teps_j} =1$.



Choosing a residual error model

The choice of a residual error model $g$ is very flexible, and allows us to account for many different hypotheses we may have on the error's distribution. Let $f_j=f(t_j;\psi)$. Here are some simple error models:


  • Constant error model: $g=a$. That is, $y_j=f_j+a\teps_j$.
  • Proportional error model: $g=b\,f$. That is, $y_j=f_j+bf_j\teps_j$. This is for when we think the magnitude of the error is proportional to the predicted value $f$.
  • Combined error model: $g=a+b f$. Here, $y_j=f_j+(a+bf_j)\teps_j$.
  • Alternative combined error model: $g^2=a^2+b^2f^2$. Here, $y_j=f_j+\sqrt{a^2+b^2f_j^2}\teps_j$.
  • Exponential error model: here, the model is instead $\log(y_j)=\log(f_j) + a\teps_j$, that is, $g=a$. It is exponential in the sense that if we exponentiate, we end up with $y_j = f_j e^{a\teps_j}$.
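
To make these error models concrete, here is a minimal R sketch (not part of the original analysis) that simulates one data set under the constant and the proportional error models, assuming a toy exponential structural model and made-up parameter values:

f=function(t,psi){ psi[1]*exp(-psi[2]*t) }    # assumed toy structural model
tj=seq(0,15,by=1)                             # observation times
fj=f(tj,c(10,0.3))                            # predictions under assumed parameters
a=0.5; b=0.1                                  # assumed error parameters
y.const=fj + a*rnorm(length(tj))              # constant error model: g=a
y.prop =fj + b*fj*rnorm(length(tj))           # proportional error model: g=b*f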



Tasks

To model a vector of observations $y = (y_j, \ 1\leq j \leq n)$ we must perform several tasks:


  • Select a structural model $f$ and a residual error model $g$.
  • Estimate the model's parameters $\psi$.
  • Assess and validate the selected model.



Selecting structural and residual error models

As we are interested in parametric modeling, we must choose parametric structural and residual error models. In the absence of biological (or other) information, we might suggest possible structural models just by looking at graphs of the time-evolution of the data. For example, if $y_j$ increases with time, we might suggest an affine, quadratic or logarithmic model, depending on the approximate trend of the data. If $y_j$ instead decreases ever more slowly towards zero, an exponential model might be appropriate.

However, often we have biological (or other) information to help us make our choice. For instance, if we have a system of differential equations describing how the drug is eliminated from the body, its solution may provide the formula (i.e., structural model) we are looking for.

As for the residual error model, if it is not immediately obvious which one to choose, several can be tested in conjunction with one or several possible structural models. After parameter estimation, each structural and residual error model pair can be assessed, compared against the others, and/or validated in various ways.

Now we can have a first look at parameter estimation, and further on, model assessment and validation.



Parameter estimation

Given the observed data and the choice of a parametric model to describe it, our goal becomes to find the "best" parameters for the model. A traditional framework to solve this kind of problem is called maximum likelihood estimation or MLE, in which the "most likely" parameters are found, given the data that was observed.

The likelihood $L$ is a function defined as:

\( L(\psi ; y_1,y_2,\ldots,y_n) \ \ \eqdef \ \ \py( y_1,y_2,\ldots,y_n; \psi) , \)

i.e., the joint density function of $(y_j)$ given the parameters $\psi$, but viewed as a function of $\psi$ with the data held fixed. The $\hat{\psi}$ which maximizes $L$ is known as the maximum likelihood estimator.

Suppose that we have chosen a structural model $f$ and residual error model $g$. If we assume for instance that $ \teps_j \sim_{i.i.d} {\cal N}(0,1)$, then the $y_j$ are independent of each other and (1.1) means that:

\( y_{j} \sim {\cal N}\left(f(t_j ; \psi) , g(t_j ; \psi)^2\right), \quad \quad 1\leq j \leq n .\)

Due to this independence, the pdf of $y = (y_1, y_2, \ldots, y_n)$ is the product of the pdfs of each $y_j$:

\(\begin{eqnarray} \py(y_1, y_2, \ldots y_n ; \psi) &=& \prod_{j=1}^n \pyj(y_j ; \psi) \\ \\ & = & \displaystyle{ \frac{1}{\prod_{j=1}^n \sqrt{2\pi} g(t_j ; \psi)} }\ e^{\displaystyle{ -\frac{1}{2} } \sum_{j=1}^n \left( \frac{y_j - f(t_j ; \psi)}{g(t_j ; \psi)} \right)^2 } . \end{eqnarray}\)

Seen as a function of $\psi$, this is exactly the likelihood function $L$. Maximizing $L$ is equivalent to minimizing the deviance, i.e., $-2$ times the $\log$-likelihood ($LL$). As the deviance's constant term $n\log(2\pi)$ does not depend on $\psi$, it can be dropped from the minimization:

\(\begin{eqnarray} \hat{\psi} &=& \argmin_{\psi} \left\{ -2 \,LL \right\}\\ &=& \argmin_{\psi} \left\{ \sum_{j=1}^n \log\left(g(t_j ; \psi)^2\right) + \sum_{j=1}^n \left(\displaystyle{ \frac{y_j - f(t_j ; \psi)}{g(t_j ; \psi)} }\right)^2 \right\} . \end{eqnarray}\)
(1.2)


This minimization problem does not usually have an analytical solution for nonlinear models, so an optimization procedure needs to be used. However, for a few specific models, analytic solutions do exist.
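
As a minimal illustration of the numerical route, here is a sketch (using an assumed toy exponential model with simulated data, not the PK models studied below) that minimizes criterion (1.2) with the general-purpose R optimizer nlm:

f=function(t,phi){ phi[1]*exp(-phi[2]*t) }    # assumed toy structural model
tj=seq(0,15,by=1)
yj=f(tj,c(10,0.3)) + 0.5*rnorm(length(tj))    # simulated data, constant error model
m2ll=function(x,y,t){                         # x=(phi,a)
  fj=f(t,x[1:2])
  g=x[3]
  sum(((y-fj)/g)^2 + log(g^2))                # -2LL, up to the constant n*log(2*pi)
}
r=nlm(m2ll, c(5,0.1,1), yj, tj)               # crude initial guess
psi.hat=r$estimate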

For instance, suppose we have a constant error model: $y_{j} = f(t_j ; \psi) + a \, \teps_j,\,\, 1\leq j \leq n,$ that is: $g(t_j;\psi) = a$. In practice, $f$ is not itself a function of $a$, so we can write $\psi = (\phi,a)$ and therefore: $y_{j} = f(t_j ; \phi) + a \, \teps_j.$ Thus, (1.2) simplifies to:

\( (\hat{\phi},\hat{a}) \ \ = \ \ \argmin_{(\phi,a)} \left\{ n \log(a^2) + \sum_{j=1}^n \left(\displaystyle{ \frac{y_j - f(t_j ; \phi)}{a} }\right)^2 \right\} . \)

The solution is then:

\(\begin{eqnarray} \hat{\phi} &=& \argmin_{\phi} \sum_{j=1}^n \left( y_j - f(t_j ; \phi)\right)^2 \\ \hat{a}^2&=& \frac{1}{n}\sum_{j=1}^n \left( y_j - f(t_j ; \hat{\phi})\right)^2 , \end{eqnarray} \)

where $\hat{a}^2$ is found by setting the partial derivative of $-2LL$ to zero.

Whether this has an analytical solution or not depends on the form of $f$. For example, if $f(t_j;\phi)$ is just a linear function of the components of the vector $\phi$, we can represent it as a matrix $F$ whose $j$th row gives the coefficients at time $t_j$. Therefore, we have the matrix equation $y = F \phi + a \teps$.

The solution for $\hat{\phi}$ is thus the least-squares one, and for $\hat{a}^2$ it is the same as before:

\(\begin{eqnarray} \hat{\phi} &=& (F^\prime F)^{-1} F^\prime y \\ \hat{a}^2&=& \frac{1}{n}\sum_{j=1}^n \left( y_j - F_j \hat{\phi}\right)^2 . \\ \end{eqnarray}\)
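
For instance, assuming a straight-line model $f(t;\phi) = \phi_1 + \phi_2\, t$, this closed-form solution takes a couple of lines of R and can be checked against the built-in least-squares routine lm:

tj=1:10
yj=2 + 0.5*tj + 0.3*rnorm(10)                 # simulated data from an assumed true line
Fmat=cbind(1,tj)                              # design matrix F; j-th row: coefficients at t_j
phi.hat=solve(t(Fmat)%*%Fmat, t(Fmat)%*%yj)   # (F'F)^{-1} F'y
a2.hat=mean((yj - Fmat%*%phi.hat)^2)          # MLE of a^2
coef(lm(yj ~ tj))                             # same estimate from lm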



Computing the Fisher information matrix

The Fisher information is a way of measuring the amount of information that an observable random variable carries about an unknown parameter upon which its probability distribution depends.

Let $\psis $ be the true unknown value of $\psi$, and let $\hatpsi$ be the maximum likelihood estimate of $\psi$. If the observed likelihood function is sufficiently smooth, asymptotic theory for maximum-likelihood estimation holds and

\( I_n(\psis)^{\frac{1}{2} }(\hatpsi-\psis) \limite{n\to \infty}{} {\mathcal N}(0,\id) , \)
(1.3)

where

\(I_n(\psis)= - \displaystyle{ \frac{\partial^2}{\partial \psi \, \partial \psi^\prime} } LL(\psis;y_1,y_2,\ldots,y_n) \)

is the observed Fisher information matrix. Here, "observed" means that it is a function of observed variables $y_1,y_2,\ldots,y_n$.

Thus, an estimate of the covariance of $\hatpsi$ is the inverse of the observed Fisher information matrix as expressed by the formula:

\(C(\hatpsi) = I_n(\hatpsi)^{-1} . \)
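
In practice, when the deviance is minimized numerically, the Hessian returned by the optimizer is that of $-2LL$, so it must be halved to recover $I_n(\hatpsi)$; this is exactly what the R code of the PK example below does with pk.nlm4$hessian/2. A minimal sketch, reusing the toy m2ll example from above:

r=nlm(m2ll, c(5,0.1,1), yj, tj, hessian=TRUE) # minimize -2LL and return its Hessian
I.n=r$hessian/2                               # observed Fisher information matrix
C.hat=solve(I.n)                              # estimated covariance of the MLE
se.hat=sqrt(diag(C.hat))                      # standard errors of the components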



Deriving confidence intervals for parameters

Let $\psi_k$ be the $k$th of $d$ components of $\psi$. Imagine that we have estimated $\psi_k$ with $\hatpsi_k$, the $k$th component of the MLE $\hatpsi$, that is, a random variable that converges to $\psi_k^{\star}$ when $n \to \infty$ under very general conditions.

An estimator of its variance is the $k$th element of the diagonal of the covariance matrix $C(\hatpsi)$:

\(\widehat{\rm Var}(\hatpsi_k) = C_{kk}(\hatpsi) .\)

We can thus derive an estimator of its standard error:

\(\widehat{\rm s.e.}(\hatpsi_k) = \sqrt{C_{kk}(\hatpsi)} ,\)

and a confidence interval of level $1-\alpha$ for $\psi_k^\star$:

\({\rm CI}(\psi_k^\star) = \left[\hatpsi_k + \widehat{\rm s.e.}(\hatpsi_k)\,q\left(\frac{\alpha}{2}\right), \ \hatpsi_k + \widehat{\rm s.e.}(\hatpsi_k)\,q\left(1-\frac{\alpha}{2}\right)\right] , \)

where $q(w)$ is the quantile of order $w$ of a ${\cal N}(0,1)$ distribution.


Remarks

Approximating the distribution of $(\hatpsi_k - \psi_k^\star)/\widehat{\rm s.e.}(\hatpsi_k)$ by a standard normal is a "good" approximation only when the number of observations $n$ is large. A better approximation should be used for small $n$. In the model $y_j = f(t_j ; \phi) + a\teps_j$, the distribution of $\hat{a}^2$ can be approximated by a chi-square distribution with $(n-d_\phi)$ degrees of freedom, where $d_\phi$ is the dimension of $\phi$. The quantiles of the normal distribution can then be replaced by those of a Student's $t$-distribution with $(n-d_\phi)$ degrees of freedom.
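
As an illustration with made-up numbers (an assumed estimate, standard error and sample size, not taken from the example below), the two constructions differ only through the quantiles used:

psi.k=1.2; se.k=0.1                           # assumed estimate and standard error
n=12; d.phi=3; alpha=0.1                      # here alpha is the error level (90% CI)
psi.k + se.k*qnorm(c(alpha/2, 1-alpha/2))     # normal quantiles (large-n approximation)
psi.k + se.k*qt(c(alpha/2, 1-alpha/2), n-d.phi) # Student quantiles (better for small n)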



Deriving confidence intervals for predictions

The structural model $f$ can be predicted for any $t$ using the estimated value $f(t; \hatphi)$. For that $t$, we can then derive a confidence interval for $f(t;\phi^\star)$ using the estimated variance of $\hatphi$. Indeed, as a first approximation we have:

\( f(t ; \hatphi) \simeq f(t ; \phis) + \nabla f (t,\phis) (\hatphi - \phis) ,\)

where $\nabla f(t,\phis)$ is the gradient of $f$ at $\phis$, i.e., the vector of the first-order partial derivatives of $f$ with respect to the components of $\phi$, evaluated at $\phis$. Of course, we do not actually know $\phis$, but we can estimate $\nabla f(t,\phis)$ with $\nabla f(t,\hatphi)$. The variance of $f(t ; \hatphi)$ can then be estimated by

\( \widehat{\rm Var}\left(f(t ; \hatphi)\right) \simeq \nabla f (t,\hatphi)\widehat{\rm Var}(\hatphi) \left(\nabla f (t,\hatphi) \right)^\prime . \)

We can then derive an estimate of the standard error of $f (t,\hatphi)$ for any $t$:

\(\widehat{\rm s.e.}(f(t ; \hatphi)) = \sqrt{\widehat{\rm Var}\left(f(t ; \hatphi)\right)} , \)

and a confidence interval of level $1-\alpha$ for $f(t ; \phi^\star)$:

\({\rm CI}(f(t ; \phi^\star)) = \left[f(t ; \hatphi) + \widehat{\rm s.e.}(f(t ; \hatphi))\,q\left(\frac{\alpha}{2}\right), \ f(t ; \hatphi) + \widehat{\rm s.e.}(f(t ; \hatphi))\,q\left(1-\frac{\alpha}{2}\right)\right].\)



Estimating confidence intervals using Monte Carlo simulation

The use of Monte Carlo methods to estimate a distribution does not require any approximation of the model.

We proceed in the following way. Suppose we have found a MLE $\hatpsi$ of $\psi$. We then simulate a data vector $y^{(1)}$ by first randomly generating the vector $\teps^{(1)}$ and then calculating for $1 \leq j \leq n$,

\( y^{(1)}_j = f(t_j ;\hatpsi) + g(t_j ;\hatpsi)\teps^{(1)}_j . \)

In a sense, this gives us an example of "new" data from the "same" model. We can then compute a new MLE $\hat{\psi}^{(1)}$ of $\psi$ using $y^{(1)}$.

Repeating this process $M$ times gives $M$ estimates of $\psi$ from which we can obtain an empirical estimation of the distribution of $\hatpsi$, or any quantile we like.

Any confidence interval for $\psi_k$ (resp. $f(t,\psi_k)$) can then be approximated by a prediction interval for $\hatpsi_k$ (resp. $f(t,\hatpsi_k)$). For instance, a two-sided confidence interval of level $1-\alpha$ for $\psi_k^\star$ can be estimated by the prediction interval

\( [\hat{\psi}_{k,([\frac{\alpha}{2} M])} \ , \ \hat{\psi}_{k,([ (1-\frac{\alpha}{2})M])} ], \)

where $[\cdot]$ denotes the integer part and $(\hatpsi_{k,(m)},\ 1 \leq m \leq M)$ the order statistics, i.e., the estimates $(\hatpsi_k^{(m)},\ 1 \leq m \leq M)$ reordered so that $\hatpsi_{k,(1)} \leq \hatpsi_{k,(2)} \leq \ldots \leq \hatpsi_{k,(M)}$.
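
Once the $M$ re-estimates are stored in a vector, the percentile interval is a single call to quantile in R; here is a sketch with a stand-in vector of re-estimates (the full Monte Carlo loop appears in the PK example below):

M=1000
psi.k.sim=rnorm(M, mean=1.2, sd=0.1)          # stand-in for the M re-estimates of psi_k
alpha=0.1
quantile(psi.k.sim, c(alpha/2, 1-alpha/2))    # empirical 90% interval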




A PK example

In the real world, it is often not enough to look at the data, choose one possible model and estimate the parameters. The chosen structural model may or may not be "good" at representing the data. It may be good but the chosen residual error model bad, meaning that the overall model is poor, and so on. That is why in practice we may want to try out several structural and residual error models. After performing parameter estimation for each model, various assessment tasks can then be performed in order to conclude which model is best.



The data

This modeling process is illustrated in detail in the following PK example. Let us consider a dose $D=50$ mg of a drug administered orally to a patient at time $t=0$. The concentration of the drug in the bloodstream is then measured at times $(t_j) = (0.5,\,1,\,1.5,\,2,\,3,\,4,\,8,\,10,\,12,\,16,\,20,\,24).$

Here is the file individualFitting_data.txt with the data:


time concentration
0.5 0.94
1.0 1.30
1.5 1.64
2.0 3.38
3.0 3.72
4.0 3.29
8.0 1.31
10.0 0.80
12.0 0.39
16.0 0.31
20.0 0.10
24.0 0.09


We are going to perform the analyses for this example with the free statistical software R. First, we import the data and plot it to have a look:

[[File:individual1.png]]



pk1=read.table("individualFitting_data.txt",header=T)
t=pk1$time
y=pk1$concentration
n=length(y)                                   # number of observations, used later for BIC and CIs
plot(t,y,xlab="time (hour)",ylab="concentration (mg/l)",col="blue")



Fitting two PK models

We are going to consider two possible structural models that may describe the observed time-course of the concentration:


  • A one compartment model with first-order absorption and linear elimination:

\(\begin{eqnarray} \phi_1 &=& (k_a, V, k_e) \\ f_1(t ; \phi_1) &=& \frac{D\, k_a}{V(k_a-k_e)} \left( e^{-k_e \, t} - e^{-k_a \, t} \right). \end{eqnarray}\)


  • A one compartment model with zero-order absorption and linear elimination:

\(\begin{eqnarray} \phi_2 &=& (T_{k0}, V, k_e) \\ f_2(t ; \phi_2) &=& \left\{ \begin{array}{ll} \displaystyle{ \frac{D}{V \,T_{k0} \, k_e} }\left( 1- e^{-k_e \, t} \right) & {\rm if }\ t\leq T_{k0} \\ \displaystyle{ \frac{D}{V \,T_{k0} \, k_e} } \left( 1- e^{-k_e \, T_{k0} } \right)e^{-k_e \, (t- T_{k0})} & {\rm otherwise} . \end{array} \right. \end{eqnarray}\)


We define each of these functions in R:




predc1=function(t,x){                         # f1: x=(ka,V,ke), dose D=50 hard-coded
f=50*x[1]/x[2]/(x[1]-x[3])*(exp(-x[3]*t)-exp(-x[1]*t)) }

predc2=function(t,x){                         # f2: x=(Tk0,V,ke)
ff=50/x[1]/x[2]/x[3]*(1-exp(-x[3]*t))
ff[t>x[1]]=50/x[1]/x[2]/x[3]*(1-exp(-x[3]*x[1]))*exp(-x[3]*(t[t>x[1]]-x[1]))
f=ff}


We then define two models ${\cal M}_1$ and ${\cal M}_2$ that assume (for now) constant residual error models:

\(\begin{eqnarray} {\cal M}_1 : \quad y_j & = & f_1(t_j ; \phi_1) + a_1\teps_j \\ {\cal M}_2 : \quad y_j & = & f_2(t_j ; \phi_2) + a_2\teps_j . \end{eqnarray}\)

We can fit these two models to our data by computing the MLE $\hatpsi_1=(\hatphi_1,\hat{a}_1)$ and $\hatpsi_2=(\hatphi_2,\hat{a}_2)$ of $\psi$ under each model:




fmin1=function(x,y,t)                         # -2LL for model M1, up to the constant n*log(2*pi)
{f=predc1(t,x)
g=x[4]
e=sum( ((y-f)/g)^2 + log(g^2))
}

fmin2=function(x,y,t)                         # -2LL for model M2
{f=predc2(t,x)
g=x[4]
e=sum( ((y-f)/g)^2 + log(g^2))
}

#--------- MLE --------------------------------

pk.nlm1=nlm(fmin1, c(0.3,6,0.2,1), y, t, hessian=TRUE)
psi1=pk.nlm1$estimate
phi1=psi1[1:3]                                # structural parameters (ka,V,ke) of model 1

pk.nlm2=nlm(fmin2, c(3,10,0.2,4), y, t, hessian=TRUE)
psi2=pk.nlm2$estimate
phi2=psi2[1:3]                                # structural parameters (Tk0,V,ke) of model 2



Here are the parameter estimation results:

> cat(" psi1 =",psi1,"\n\n")
 psi1 = 0.3240916 6.001204 0.3239337 0.4366948

> cat(" psi2 =",psi2,"\n\n")
 psi2 = 3.203111 8.999746 0.229977 0.2555242




Assessing and selecting the PK model

The estimated parameters $\hatphi_1$ and $\hatphi_2$ can then be used for computing the predicted concentrations $\hat{f}_1(t)$ and $\hat{f}_2(t)$ under both models at any time $t$. These curves can then be plotted over the original data and compared:


[[File:individual2.png]]


tc=seq(from=0,to=25,by=0.1)
fc1=predc1(tc,phi1)
fc2=predc2(tc,phi2)

plot(t,y,ylim=c(0,4.1),xlab="time (hour)",ylab="concentration (mg/l)",col = "blue")
lines(tc,fc1, type = "l", col = "green", lwd=2)
lines(tc,fc2, type = "l", col = "red", lwd=2)
abline(a=0,b=0,lty=2)
legend(13,4,c("observations","first order absorption","zero order absorption"),
lty=c(-1,1,1), pch=c(1,-1,-1), lwd=2, col=c("blue","green","red"))


We clearly see that a much better fit is obtained with model ${\cal M}_2$, i.e., the one assuming a zero-order absorption process.


Another useful goodness-of-fit plot is obtained by displaying the observations $(y_j)$ versus the predictions $\hat{y}_j=f(t_j ; \hatpsi)$ given by the models:


[[File:individual3.png]]


f1=predc1(t,phi1)
f2=predc2(t,phi2)

par(mfrow= c(1,2))
plot(f1,y,xlim=c(0,4),ylim=c(0,4), main="model 1")
abline(a=0,b=1,lty=1)
plot(f2,y,xlim=c(0,4),ylim=c(0,4), main="model 2")
abline(a=0,b=1,lty=1)



Model selection

Again, ${\cal M}_2$ would seem to have a slight edge. This can be tested more analytically using the Bayesian Information Criterion (BIC), i.e., the deviance $-2LL$ penalized by $\log(n)$ times the number of estimated parameters:




deviance1=pk.nlm1$minimum + n*log(2*pi)
bic1=deviance1+log(n)*length(psi1)
deviance2=pk.nlm2$minimum + n*log(2*pi)
bic2=deviance2+log(n)*length(psi2)
> cat(" bic1 =",bic1,"\n\n")
 bic1 = 24.10972

> cat(" bic2 =",bic2,"\n\n")
 bic2 = 11.24769

A smaller BIC is better. Therefore, this also suggests that model ${\cal M}_2$ should be selected.



Fitting different error models

For the moment, we have only considered constant error models. However, the "observations vs. predictions" figure seen earlier hints that the amplitude of the residual errors may increase with the size of the predicted value. Let us therefore take a closer look at four different residual error models, each of which we will associate with the "best" structural model $f_2$:

${\cal M}_2$ Constant error model: $y_j=f_2(t_j;\phi_2)+a_2\teps_j$
${\cal M}_3$ Proportional error model: $y_j=f_2(t_j;\phi_3)+b_3f_2(t_j;\phi_3)\teps_j$
${\cal M}_4$ Combined error model: $y_j=f_2(t_j;\phi_4)+(a_4+b_4f_2(t_j;\phi_4))\teps_j$
${\cal M}_5$ Exponential error model: $\log(y_j)=\log(f_2(t_j;\phi_5)) + a_5\teps_j$.


The three new ones need to be entered into R:



fmin3=function(x,y,t)                         # proportional error model: g = b*f
{f=predc2(t,x)
g=x[4]*f
e=sum( ((y-f)/g)^2 + log(g^2))
}

fmin4=function(x,y,t)                         # combined error model: g = a + b*f
{f=predc2(t,x)
g=abs(x[4])+abs(x[5])*f                       # abs() keeps g>0 during the optimization
e=sum( ((y-f)/g)^2 + log(g^2))
}

fmin5=function(x,y,t)                         # exponential error model: fit on the log scale
{f=predc2(t,x)
g=x[4]
e=sum( ((log(y)-log(f))/g)^2 + log(g^2))
}


We can now compute the MLE $\hatpsi_3=(\hatphi_3,\hat{b}_3)$, $\hatpsi_4=(\hatphi_4,\hat{a}_4,\hat{b}_4)$ and $\hatpsi_5=(\hatphi_5,\hat{a}_5)$ of $\psi$ under models ${\cal M}_3$, ${\cal M}_4$ and ${\cal M}_5$:




#----------------  MLE  -------------------

pk.nlm3=nlm(fmin3, c(phi2,0.1), y, t, hessian=TRUE)
psi3=pk.nlm3$estimate

pk.nlm4=nlm(fmin4, c(phi2,1,0.1), y, t, hessian=TRUE)
psi4=pk.nlm4$estimate
psi4[c(4,5)]=abs(psi4[c(4,5)])                # map back to positive error parameters
phi4=psi4[1:3]                                # structural parameters of model 4 (used below)

pk.nlm5=nlm(fmin5, c(phi2,0.1), y, t, hessian=TRUE)
psi5=pk.nlm5$estimate
> cat(" psi3 =",psi3,"\n\n")
 psi3 = 2.642409 11.44113 0.1838779 0.2189221

> cat(" psi4 =",psi4,"\n\n")
 psi4 = 2.890066 10.16836 0.2068221 0.02741416 0.1456332

> cat(" psi5 =",psi5,"\n\n")
 psi5 = 2.710984 11.2744 0.188901 0.2310001



Selecting the error model

As before, these curves can be plotted over the original data and compared:


[[File:individual4.png]]


tc=seq(from=0,to=25,by=0.1)
fc3=predc2(tc,psi3[1:3])
fc4=predc2(tc,phi4)                           # also used below for the confidence intervals
fc5=predc2(tc,psi5[1:3])

plot(t,y,ylim=c(0,4.1),xlab="time (hour)",ylab="concentration (mg/l)",col = "blue")
lines(tc,fc3, type = "l", col = "green", lwd=2)
lines(tc,fc4, type = "l", col = "red", lwd=2)
lines(tc,fc5, type = "l", col = "magenta", lwd=2)
abline(a=0,b=0,lty=2)
legend(13,4,c("observations","proportional error","combined error","exponential error"),
lty=c(-1,1,1,1), pch=c(1,-1,-1,-1), lwd=2, col=c("blue","green","red","magenta"))

As you can see, the three predicted concentrations obtained with models ${\cal M}_3$, ${\cal M}_4$ and ${\cal M}_5$ are quite similar. We now calculate the BIC for each:



deviance3=pk.nlm3$minimum + n*log(2*pi)
bic3=deviance3 + log(n)*length(psi3)
deviance4=pk.nlm4$minimum + n*log(2*pi)
bic4=deviance4 + log(n)*length(psi4)
deviance5=pk.nlm5$minimum + 2*sum(log(y)) + n*log(2*pi)
bic5=deviance5 + log(n)*length(psi5)
> cat(" bic3 =",bic3,"\n\n")
 bic3 = 3.443607

> cat(" bic4 =",bic4,"\n\n")
 bic4 = 3.475841

> cat(" bic5 =",bic5,"\n\n")
 bic5 = 4.108521

All three BIC values are lower than that of the constant error model ${\cal M}_2$, so BIC favors a residual error model with a proportional component.

There is not a large difference between these three error models, though the proportional and combined error models give the smallest and essentially identical BIC. We decide to use the combined error model ${\cal M}_4$ in what follows (the same type of analysis could be done with the proportional error model).

A 90% confidence interval for $\psi_4$ can be derived from the Hessian (i.e., the square matrix of second-order partial derivatives) of the objective function (i.e., $-2\,LL$):




ialpha=0.9                                    # confidence level
df=n-length(phi4)                             # degrees of freedom
I4=pk.nlm4$hessian/2                          # Hessian of -2LL halved: observed Fisher information
H4=solve(I4)                                  # estimated covariance matrix of psi4
s4=sqrt(diag(H4)*n/df)                        # standard errors, with a small-sample correction
delta4=s4*qt(0.5+ialpha/2,df)                 # Student quantile (cf. the remark above)
ci4=matrix(c(psi4-delta4,psi4+delta4),ncol=2)
> ci4
            [,1]        [,2]
[1,]  2.22576690  3.55436561
[2,]  7.93442421 12.40228967
[3,]  0.16628224  0.24736196
[4,] -0.02444571  0.07927403
[5,]  0.04119983  0.25006660

We can also calculate a 90% confidence interval for $f_4(t)$ using the Central Limit Theorem (see (1.3)):



nlpredci=function(phi,f,H)                    # relies on tc, n, df and ialpha from the workspace
{
dphi=length(phi)
nf=length(f)
H=H*n/(n-dphi)                                # small-sample correction
S=H[seq(1,dphi),seq(1,dphi)]                  # covariance block of the structural parameters
G=matrix(nrow=nf,ncol=dphi)
for (k in seq(1,dphi)) {                      # finite-difference gradient of f w.r.t. phi
   dk=phi[k]*(1e-5)
   phid=phi
   phid[k]=phi[k] + dk
   fd=predc2(tc,phid)
   G[,k]=(fd-f)/dk
}
M=rowSums((G%*%S)*G)                          # delta-method variance of f(t;phi)
deltaf=sqrt(M)*qt(0.5+ialpha/2,df)
}

deltafc4=nlpredci(phi4,fc4,H4)


This can then be plotted:

[[File:individual6.png]]


plot(t,y,ylim=c(0,4.5),xlab="time (hour)",ylab="concentration (mg/l)",col = "blue")
lines(tc,fc4, type = "l", col = "red", lwd=2)
lines(tc,fc4-deltafc4, type = "l", col = "red", lwd=1, lty=3)
lines(tc,fc4+deltafc4, type = "l", col = "red", lwd=1, lty=3)
abline(a=0,b=0,lty=2)
legend(10.5,4.5,c("observed concentrations","predicted concentration",
       "CI for predicted concentration"),
lty=c(-1,1,3), pch=c(1,-1,-1), lwd=c(2,2,1), col=c("blue","red","red"))


Alternatively, prediction intervals for $\hatpsi_4$, $\hat{f}_4(t;\hatpsi_4)$ and new observations for any time $t$ can be estimated by Monte Carlo simulation:




f=predc2(t,phi4)
a4=psi4[4]
b4=psi4[5]
g=a4+b4*f
dpsi=length(psi4)
nc=length(tc)
N=1000
qalpha=c(0.5 - ialpha/2,0.5 + ialpha/2)       # quantile orders for a 90% interval
PSI=matrix(nrow=N,ncol=dpsi)
FC=matrix(nrow=N,ncol=nc)
Y=matrix(nrow=N,ncol=nc)
for (k in seq(1,N)) {
   eps=rnorm(n)
   ys=f+g*eps
   pk.nlm=nlm(fmin4, psi4, ys, t)
   psie=pk.nlm$estimate
   psie[c(4,5)]=abs(psie[c(4,5)])
   PSI[k,]=psie
   fce=predc2(tc,psie[c(1,2,3)])
   FC[k,]=fce
   gce=a4+b4*fce
   Y[k,]=fce + gce*rnorm(1)
}

ci4s=matrix(nrow=dpsi,ncol=2)
for (k in seq(1,dpsi)){
   ci4s[k,]=quantile(PSI[,k],qalpha,names=FALSE)
}
m4s=colMeans(PSI)
sd4s=apply(PSI,2,sd)

cifc4s=matrix(nrow=nc,ncol=2)
for (k in seq(1,nc)){
   cifc4s[k,]=quantile(FC[,k],qalpha,names=FALSE)
}

ciy4s=matrix(nrow=nc,ncol=2)
for (k in seq(1,nc)){
   ciy4s[k,]=quantile(Y[,k],qalpha,names=FALSE)
}

par(mfrow= c(1,1))
plot(t,y,ylim=c(0,4.5),xlab="time (hour)",ylab="concentration (mg/l)",col = "blue")
lines(tc,fc4, type = "l", col = "red", lwd=2)
lines(tc,cifc4s[,1], type = "l", col = "red", lwd=1, lty=3)
lines(tc,cifc4s[,2], type = "l", col = "red", lwd=1, lty=3)
lines(tc,ciy4s[,1], type = "l", col = "green", lwd=1, lty=3)
lines(tc,ciy4s[,2], type = "l", col = "green", lwd=1, lty=3)
abline(a=0,b=0,lty=2)
legend(10.5,4.5,c("observed concentrations","predicted concentration",
"CI for predicted concentration", "CI for observed concentrations"), lty=c(-1,1,3,3),
pch=c(1,-1,-1,-1), lwd=c(2,2,1,1), col=c("blue","red","red","green"))


[[File:individual7.png]]
> ci4s
             [,1]        [,2]
[1,] 2.350653e+00  3.53526320
[2,] 8.350764e+00 12.04910579
[3,] 1.818431e-01  0.24156832
[4,] 5.445459e-09  0.08819339
[5,] 1.563625e-02  0.19638889

The R code and the input data used in this section to illustrate a basic PK example can be downloaded using the following link: https://wiki.inria.fr/wikis/popix/images/a/a1/R_IndividualFitting.rar.