The individual approach

From Popix
<!-- some LaTeX macros we want to use: -->
 
$
 
\DeclareMathOperator{\argmin}{arg\,min}
 
\newcommand{\psis}{\psi{^\star}}
 
\newcommand{\phis}{\phi{^\star}}
 
\newcommand{\hpsi}{\hat{\psi}}
 
\newcommand{\hphi}{\hat{\phi}}
 
\newcommand{\teps}{\tilde{\varepsilon}}
 
\newcommand{\limite}[2]{\mathop{\longrightarrow}\limits_{\mathrm{#1}}^{\mathrm{#2}}}
 
\newcommand{\DDt}[1]{\partial^2_\theta #1}
 
$
 
  
== Overview ==
Before we start looking at modeling a whole population at the same time, we are going to consider only one individual from that population. Much of the basic methodology for modeling one individual carries over to population modeling. We will see that when stepping up from one individual to a population, the main difference is that some parameters shared by individuals are considered to be drawn from a [http://en.wikipedia.org/wiki/Probability_distribution probability distribution].
  
Let us begin with a simple example.

An individual receives 100 mg of a drug at time $t=0$. At that time and then every hour for fifteen hours, the concentration of a marker in the bloodstream is measured and plotted against time:
::[[File:New_Individual1.png|link=]]
  
We aim to find a mathematical model to describe what we see in the figure. The eventual goal is then to extend this approach to the ''simultaneous modeling'' of a whole population.
 
  
  
<br>
== Model and methods for the individual approach ==
  
<br>
===Defining a model===
In our example, the concentration is a ''continuous'' variable, so we will try to use continuous functions to model it.

Different types of data (e.g., [http://en.wikipedia.org/wiki/Count_data count data], [http://en.wikipedia.org/wiki/Categorical_data categorical data], [http://en.wikipedia.org/wiki/Survival_analysis time-to-event data], etc.) require different types of models. All of these data types will be considered in due time, but for now let us concentrate on a continuous data model.
  
A model for continuous data can be represented mathematically as follows:
  
{{Equation1
|equation=<math>
y_{j} = f(t_j ; \psi) + e_j, \quad \quad  1\leq j \leq n, </math> }}
 
  
where:
* $f$ is called the ''structural model''. It corresponds to the basic type of curve we suspect the data is following, e.g., linear, logarithmic, exponential, etc. Sometimes, a model of the associated biological processes leads to equations that define the curve's shape.
  
* $(t_1,t_2,\ldots , t_n)$  is the vector of observation times. Here, $t_1 = 0$ hours and $t_n = t_{16} = 15$ hours.
* $\psi=(\psi_1, \psi_2, \ldots, \psi_d)$   is a vector of $d$ parameters that influences the value of $f$.
* $(e_1, e_2, \ldots, e_n)$  are called the ''residual errors''. Usually, we suppose that they come from some centered probability distribution: $\esp{e_j} =0$.
  
In fact, we usually state a continuous data model in a slightly more flexible way:
  
 
{{EquationWithRef
|equation=<div id="cont"><math>
y_{j} = f(t_j ; \psi) + g(t_j ; \psi)\teps_j , \quad \quad  1\leq j \leq n,
</math></div>
|reference=(1) }}
  
where now:

<ul>
* $g$  is called the ''residual error model''. It may be a function of the time $t_j$ and parameters $\psi$.

* $(\teps_1, \teps_2, \ldots, \teps_n)$  are the ''normalized'' residual errors. We suppose that these come from a probability distribution which is centered and has unit variance: $\esp{\teps_j} = 0$ and $\var{\teps_j} =1$.
</ul>
 
  
<br>
===Choosing a residual error model===
The choice of a residual error model $g$ is very flexible, and allows us to account for many different hypotheses we may have on the error's distribution. Let $f_j=f(t_j;\psi)$. Here are some simple error models.

<ul>
* ''Constant error model'': $g=a$. That is,  $y_j=f_j+a\teps_j$.
  
* ''Proportional error model'': $g=b\,f$. That is, $y_j=f_j+bf_j\teps_j$. This is for when we think the magnitude of the error is proportional to the predicted value $f$.
* ''Combined error model'': $g=a+b f$. Here, $y_j=f_j+(a+bf_j)\teps_j$.
  
  
* ''Alternative combined error model'': $g^2=a^2+b^2f^2$. Here, $y_j=f_j+\sqrt{a^2+b^2f_j^2}\teps_j$.
* ''Exponential error model'': here, the model is instead $\log(y_j)=\log(f_j) + a\teps_j$, that is, $g=a$. It is exponential in the sense that if we exponentiate, we end up with $y_j = f_j e^{a\teps_j}$.
</ul>
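To make these error models concrete, here is a minimal simulation sketch (in Python with NumPy; the structural model $f(t) = 10\,e^{-0.3 t}$ and all numerical values are invented for illustration, not part of the example above):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 16.0)            # observation times (hours), as in the example
f = 10.0 * np.exp(-0.3 * t)         # a hypothetical structural model f(t; psi)

eps = rng.standard_normal(t.size)   # normalized residual errors, N(0, 1)

y_const = f + 0.5 * eps             # constant error model:     g = a   (a = 0.5)
y_prop  = f + 0.2 * f * eps         # proportional error model: g = b*f (b = 0.2)
```

With the proportional model the scatter of the observations shrinks together with the predicted concentration, whereas with the constant model it has the same magnitude at all times.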
  
  
<br>
===Tasks===

To model a vector of observations $y = (y_j,\, 1\leq j \leq n)$ we must perform several tasks:
<ul>
* Select a structural model $f$ and a residual error model $g$.
  
* Estimate the model's parameters $\psi$.
  
  
* ''Assess and validate'' the selected model.
</ul>
  
<br>
=== Selecting structural and residual error models ===
  
As we are interested in [http://en.wikipedia.org/wiki/Parametric_model parametric modeling], we must choose parametric structural and residual error models. In the absence of biological (or other) information, we might suggest possible structural models just by looking at the graphs of time-evolution of the data. For example, if $y_j$ is increasing with time, we might suggest an affine, quadratic or logarithmic model, depending on the approximate trend of the data. If $y_j$ is instead decreasing ever more slowly to zero, an exponential model might be appropriate.
However, often  we have biological (or other) information to help us make our choice. For instance, if we have a system of [http://en.wikipedia.org/wiki/Differential_equation differential equations] describing how the drug is eliminated from the body, its solution may provide the formula (i.e., structural model) we are looking for.
  
As for the residual error model, if it is not immediately obvious which one to choose, several can be tested in conjunction with one or several possible structural models. After parameter estimation, each structural and residual error model pair can be assessed, compared against the others, and/or validated in various ways.
  
Now we can have a first look at parameter estimation, and further on, model assessment and validation.
<br>
===Parameter estimation===
  
  
Given the observed data and the choice of a parametric model to describe it, our goal becomes to find the "best" parameters for the model. A traditional framework to solve this kind of problem is called [http://en.wikipedia.org/wiki/Maximum_likelihood maximum likelihood estimation] or MLE, in which the "most likely" parameters are found, given the data that was observed.
  
The likelihood $L$ is a function defined as:
  
{{Equation1
|equation=<math> L(\psi ; y_1,y_2,\ldots,y_n) \ \ \eqdef \ \ \py( y_1,y_2,\ldots,y_n; \psi) , </math> }}
  
i.e., the conditional [http://en.wikipedia.org/wiki/Joint_probability_distribution joint density function] of $(y_j)$ given the parameters $\psi$, but looked at as if the data are known and the parameters not. The $\hat{\psi}$ which maximizes $L$ is known as the ''maximum likelihood estimator''.
Suppose that we have chosen a structural model $f$ and residual error model $g$. If we assume for instance that $ \teps_j \sim_{i.i.d} {\cal N}(0,1)$, then the $y_j$ are independent of each other and [[#cont|(1)]] means that:
  
{{Equation1
|equation=<math> y_{j} \sim {\cal N}\left(f(t_j ; \psi) , g(t_j ; \psi)^2\right), \quad \quad  1\leq j \leq n .</math> }}
  
Due to this independence, the pdf of $y = (y_1, y_2, \ldots, y_n)$ is the product of the pdfs of each $y_j$:
{{Equation1
|equation=<math>\begin{eqnarray}
\py(y_1, y_2, \ldots y_n ; \psi) &=& \prod_{j=1}^n \pyj(y_j ; \psi) \\ \\
& = &  \frac{1}{\prod_{j=1}^n \sqrt{2\pi} g(t_j ; \psi)} \  {\rm exp}\left\{-\frac{1}{2} \sum_{j=1}^n \left( \displaystyle{ \frac{y_j - f(t_j ; \psi)}{g(t_j ; \psi)} }\right)^2\right\} .
\end{eqnarray}</math> }}
  
This is the same thing as the likelihood function $L$ when seen as a function of $\psi$. Maximizing $L$ is equivalent to minimizing the deviance, i.e., -2 $\times$ the $\log$-likelihood ($LL$):
  
{{EquationWithRef
|equation=<div id="LLL"><math>\begin{eqnarray}
\hat{\psi} &=&  \argmin_{\psi} \left\{ -2 \,LL \right\}\\
&=& \argmin_{\psi} \left\{
\sum_{j=1}^n \log\left(g(t_j ; \psi)^2\right) + \sum_{j=1}^n \left(\displaystyle{ \frac{y_j - f(t_j ; \psi)}{g(t_j ; \psi)} }\right)^2 \right\} .
\end{eqnarray}</math></div>
|reference=(2) }}
  
  
This minimization problem does not usually have an [http://en.wikipedia.org/wiki/Analytical_expression analytical solution] for nonlinear models, so an [http://en.wikipedia.org/wiki/Mathematical_optimization optimization] procedure needs to be used. However, for a few specific models, analytical solutions do exist.
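When no closed form is available, a generic optimizer can be applied directly to the objective function in [[#LLL|(2)]]. As an illustration (a sketch only: the exponential structural model, the simulated data and the use of Python with NumPy/SciPy are all assumptions, not part of the example above):

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data from a hypothetical model f(t; phi) = A*exp(-k*t)
# with a constant error model g = a (all values invented).
rng = np.random.default_rng(1)
t = np.arange(0.0, 16.0)
A_true, k_true, a_true = 10.0, 0.3, 0.3
y = A_true * np.exp(-k_true * t) + a_true * rng.standard_normal(t.size)

def minus2LL(psi):
    """Objective of (2): the deviance, up to the constant n*log(2*pi)."""
    A, k, a = psi
    f = A * np.exp(-k * t)
    return np.sum(np.log(a**2) + ((y - f) / a) ** 2)

res = minimize(minus2LL, x0=[5.0, 0.1, 1.0], method="Nelder-Mead")
A_hat, k_hat, a_hat = res.x   # maximum likelihood estimates
```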
  
For instance, suppose we have a constant error model: $y_{j} = f(t_j ; \psi)  + a \, \teps_j,\,\,  1\leq j \leq n,$ that is: $g(t_j;\psi) = a$. In practice, $f$ is not itself a function of $a$, so we can write $\psi = (\phi,a)$ and therefore $y_{j} = f(t_j ; \phi)  + a \, \teps_j$. Thus, [[#LLL|(2)]] simplifies to:
{{Equation1
|equation=<math> (\hat{\phi},\hat{a}) \ \ = \ \ \argmin_{(\phi,a)} \left\{
n \log(a^2)  + \sum_{j=1}^n \left(\displaystyle{ \frac{y_j - f(t_j ; \phi)}{a} }\right)^2 \right\} .
</math> }}
  
The solution is then:
{{Equation1
|equation=<math>\begin{eqnarray}
\hat{\phi} &=& \argmin_{\phi}  \sum_{j=1}^n \left( y_j - f(t_j ; \phi)\right)^2 \\
\hat{a}^2 &=&  \frac{1}{n}\sum_{j=1}^n \left( y_j - f(t_j ; \hat{\phi})\right)^2 ,
\end{eqnarray} </math> }}
  
where $\hat{a}^2$ is found by setting the [http://en.wikipedia.org/wiki/Partial_derivative partial derivative] of $-2LL$ with respect to $a^2$ to zero.
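In detail: writing $S(\hat{\phi}) = \sum_{j=1}^n ( y_j - f(t_j ; \hat{\phi}))^2$, the derivative of $-2LL = n \log(a^2) + S(\hat{\phi})/a^2$ with respect to $a^2$ is

{{Equation1
|equation=<math> \frac{\partial}{\partial a^2}\left\{ n \log(a^2) + \frac{S(\hat{\phi})}{a^2} \right\} = \frac{n}{a^2} - \frac{S(\hat{\phi})}{a^4} , </math> }}

which vanishes exactly at $\hat{a}^2 = S(\hat{\phi})/n$, the formula given above.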
  
Whether this has an analytical solution or not depends on the form of $f$. For example, if $f(t_j;\phi)$ is just a linear function of the components of the vector $\phi$, we can represent it as a matrix $F$ whose $j$th row gives the coefficients at time $t_j$. Therefore, we have the matrix equation $y = F \phi + a \teps$.
  
The solution for $\hat{\phi}$ is thus the least-squares one, and for $\hat{a}^2$ it is the same as before:
 
  
 
+
{{Equation1
|equation=<math>\begin{eqnarray}
\hat{\phi} &=& (F^\prime F)^{-1} F^\prime y \\
\hat{a}^2 &=& \frac{1}{n}\sum_{j=1}^n \left( y_j - F_j \hat{\phi}\right)^2 .
\end{eqnarray}</math> }}
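These two formulas are easy to check numerically. Here is a small sketch (Python with NumPy; the linear model $f(t;\phi) = \phi_1 + \phi_2 t$ and all numerical values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 16.0)
F = np.column_stack([np.ones_like(t), t])    # design matrix: f(t; phi) = phi_1 + phi_2*t
phi_true = np.array([1.0, 0.5])
y = F @ phi_true + 0.3 * rng.standard_normal(t.size)

phi_hat = np.linalg.solve(F.T @ F, F.T @ y)  # (F'F)^{-1} F'y
a2_hat = np.mean((y - F @ phi_hat) ** 2)     # residual variance estimate
```

The estimate $\hat{\phi}$ coincides with the ordinary least-squares solution, and the residuals $y - F\hat{\phi}$ are orthogonal to the columns of $F$.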
 
 
 
 
  
  
  
<br>
===Computing the Fisher information matrix===
The [http://en.wikipedia.org/wiki/Fisher_information Fisher information] is a way of measuring the amount of information that an observable random variable carries about an unknown parameter upon which its probability distribution depends.
  
Let $\psis $ be the true unknown value of $\psi$, and let $\hatpsi$ be the maximum likelihood estimate of $\psi$. If the observed likelihood function is sufficiently smooth, asymptotic theory for maximum-likelihood estimation holds and
  
{{EquationWithRef
|equation=<div id="intro_individualCLT"><math>
I_n(\psis)^{\frac{1}{2} }(\hatpsi-\psis) \limite{n\to \infty}{} {\mathcal N}(0,\id) ,
</math></div>
|reference=(3) }}
  
where $I_n(\psis)$ is (minus) the Hessian (i.e., the matrix of the second derivatives) of the log-likelihood:
{{Equation1
|equation=<math>I_n(\psis)=-  \displaystyle{ \frac{\partial^2}{\partial \psi \partial \psi^\prime} } LL(\psis;y_1,y_2,\ldots,y_n)
</math> }}
  
is the ''observed Fisher information matrix''. Here, "observed" means that it is a function of observed variables $y_1,y_2,\ldots,y_n$.
  
Thus, an estimate of the covariance of $\hatpsi$ is the inverse of the observed Fisher information matrix as expressed by the formula:
{{Equation1
|equation=<math>C(\hatpsi) = I_n(\hatpsi)^{-1} . </math> }}
  
<br>
===Deriving confidence intervals for parameters===
Let $\psi_k$ be the $k$th of $d$ components of $\psi$. Imagine that we have estimated $\psi_k$ with $\hatpsi_k$, the $k$th component of the MLE $\hatpsi$, that is, a random variable that converges to $\psi_k^{\star}$ when $n \to \infty$ under very general conditions.
  
An estimator of its variance is the $k$th element of the diagonal of the covariance matrix $C(\hatpsi)$:
  
{{Equation1
|equation=<math>\widehat{\rm Var}(\hatpsi_k) = C_{kk}(\hatpsi) .</math> }}
  
We can thus derive an estimator of its [http://en.wikipedia.org/wiki/Standard_error standard error]:

{{Equation1
|equation=<math>\widehat{\rm s.e.}(\hatpsi_k) = \sqrt{C_{kk}(\hatpsi)} ,</math> }}
  
and a [http://en.wikipedia.org/wiki/Confidence_interval confidence interval] of level $1-\alpha$ for $\psi_k^\star$:
{{Equation1
|equation=<math>{\rm CI}(\psi_k^\star) = \left[\hatpsi_k + \widehat{\rm s.e.}(\hatpsi_k)\,q\left(\frac{\alpha}{2}\right), \ \hatpsi_k + \widehat{\rm s.e.}(\hatpsi_k)\,q\left(1-\frac{\alpha}{2}\right)\right] , </math> }}
  
where $q(w)$ is the [http://en.wikipedia.org/wiki/Quantile quantile] of order $w$ of a ${\cal N}(0,1)$ distribution.
  
  
{{Remarks
|title=Remarks
|text= Approximating the distribution of $\hatpsi_k/\widehat{\rm s.e.}(\hatpsi_k)$ by a normal distribution is a "good" approximation only when the number of observations $n$ is large. A better approximation should be used for small $n$. In the model $y_j = f(t_j ; \phi) + a\teps_j$, the distribution of $\hat{a}^2$ can be approximated by a [http://en.wikipedia.org/wiki/Chi-squared_distribution chi-squared distribution] with $(n-d_\phi)$ [http://en.wikipedia.org/wiki/Degrees_of_freedom_%28statistics%29 degrees of freedom], where $d_\phi$ is the dimension of $\phi$. The quantiles of the normal distribution can then be replaced by those of a [http://en.wikipedia.org/wiki/Student%27s_t-distribution Student's $t$-distribution] with $(n-d_\phi)$ degrees of freedom.
}}
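Putting the last two sections together, here is a minimal end-to-end sketch (Python with NumPy/SciPy; the exponential model and all numerical values are invented for illustration). Since $I_n = -\DDt{LL}$, the Hessian of $-2LL$ equals $2 I_n$; below it is computed by central finite differences:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data from a hypothetical model f(t; phi) = A*exp(-k*t), with g = a.
rng = np.random.default_rng(3)
t = np.arange(0.0, 16.0)
y = 10.0 * np.exp(-0.3 * t) + 0.3 * rng.standard_normal(t.size)

def minus2LL(psi):
    A, k, a = psi
    return np.sum(np.log(a**2) + ((y - A * np.exp(-k * t)) / a) ** 2)

psi_hat = minimize(minus2LL, x0=[5.0, 0.1, 1.0], method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8}).x

def hessian(fun, x, h=1e-4):
    """Central finite-difference Hessian of fun at x."""
    d = len(x)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * h, np.eye(d)[j] * h
            H[i, j] = (fun(x + ei + ej) - fun(x + ei - ej)
                       - fun(x - ei + ej) + fun(x - ei - ej)) / (4 * h * h)
    return H

I_n = hessian(minus2LL, psi_hat) / 2          # observed Fisher information
C = np.linalg.inv(I_n)                        # estimated covariance of psi_hat
se = np.sqrt(np.diag(C))                      # standard errors
ci = np.column_stack([psi_hat - 1.96 * se,    # 95% confidence intervals
                      psi_hat + 1.96 * se])
```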
  
<br>
===Deriving confidence intervals for predictions===
 
  
The structural model $f$ can be predicted for any $t$ using the estimated value $f(t; \hatphi)$. For that $t$, we can then derive a confidence interval for $f(t,\phi)$ using the estimated variance of $\hatphi$. Indeed, as a first approximation we have:
  
{{Equation1
|equation=<math> f(t ; \hatphi) \simeq f(t ; \phis) + \nabla f (t,\phis) (\hatphi - \phis) ,</math> }}
  
where $\nabla f(t,\phis)$ is the gradient of $f$ at $\phis$, i.e., the vector of the first-order partial derivatives of $f$ with respect to the components of $\phi$, evaluated at $\phis$. Of course, we do not actually know $\phis$, but we can estimate $\nabla f(t,\phis)$ with $\nabla f(t,\hatphi)$. The variance of $f(t ; \hatphi)$ can then be estimated by
  
{{Equation1
|equation=<math>
\widehat{\rm Var}\left(f(t ; \hatphi)\right) \simeq \nabla f (t,\hatphi)\widehat{\rm Var}(\hatphi) \left(\nabla f (t,\hatphi) \right)^\prime . </math> }}
  
We can then derive an estimate of the standard error of $f (t,\hatphi)$ for any $t$:

{{Equation1
|equation=<math>\widehat{\rm s.e.}(f(t ; \hatphi)) = \sqrt{\widehat{\rm Var}\left(f(t ; \hatphi)\right)} , </math> }}
  
and a confidence interval of level $1-\alpha$ for $f(t ; \phi^\star)$:

{{Equation1
|equation=<math>{\rm CI}(f(t ; \phi^\star)) = \left[f(t ; \hatphi) + \widehat{\rm s.e.}(f(t ; \hatphi))\,q\left(\frac{\alpha}{2}\right), \ f(t ; \hatphi) + \widehat{\rm s.e.}(f(t ; \hatphi))\,q\left(1-\frac{\alpha}{2}\right)\right].</math> }}
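As a sketch of how this delta-method computation looks in practice (Python with NumPy; the model, the estimates $\hat{\phi}$ and their covariance are invented placeholders, since in a real analysis they would come from the previous estimation steps):

```python
import numpy as np

phi_hat = np.array([10.0, 0.3])                        # invented estimates (A, k)
C_phi = np.array([[0.04, -0.001],                      # invented covariance of phi_hat
                  [-0.001, 0.0004]])

def f(t, phi):                                         # hypothetical model A*exp(-k*t)
    return phi[0] * np.exp(-phi[1] * t)

def grad_f(t, phi):                                    # gradient of f w.r.t. (A, k)
    A, k = phi
    return np.array([np.exp(-k * t), -A * t * np.exp(-k * t)])

t0 = 2.0
g = grad_f(t0, phi_hat)
se_f = np.sqrt(g @ C_phi @ g)                          # s.e. of f(t0; phi_hat)
ci_f = (f(t0, phi_hat) - 1.96 * se_f,                  # 95% CI for f(t0; phi*)
        f(t0, phi_hat) + 1.96 * se_f)
```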
 
  
  
  
<br>
===Estimating confidence intervals using Monte Carlo simulation===
  
The use of [http://en.wikipedia.org/wiki/Monte_Carlo_method Monte Carlo methods] to estimate a distribution does not require any approximation of the model.
  
We proceed in the following way. Suppose we have found a MLE $\hatpsi$ of $\psi$. We then simulate a data vector $y^{(1)}$ by first randomly generating the vector $\teps^{(1)}$ and then calculating for $1 \leq j \leq n$,
  
{{Equation1
|equation=<math> y^{(1)}_j = f(t_j ;\hatpsi) + g(t_j ;\hatpsi)\teps^{(1)}_j . </math> }}
  
In a sense, this gives us an example of "new" data from the "same" model. We can then compute a new MLE $\hat{\psi}^{(1)}$ of $\psi$ using $y^{(1)}$.
  
Repeating this process $M$ times gives $M$ estimates of $\psi$ from which we can obtain an empirical estimation of the distribution of $\hatpsi$, or any quantile we like.
  
Any confidence interval for $\psi_k$ (resp. $f(t,\psi_k)$) can then be approximated by a prediction interval for $\hatpsi_k$ (resp. $f(t,\hatpsi_k)$). For instance, a two-sided confidence interval of level  $1-\alpha$ for $\psi_k^\star$ can be estimated by the prediction interval
{{Equation1
|equation=<math> [\hat{\psi}_{k,([\frac{\alpha}{2} M])} \ , \ \hat{\psi}_{k,([ (1-\frac{\alpha}{2})M])} ], </math> }}
  
where $[\cdot]$ denotes the [http://en.wikipedia.org/wiki/Floor_and_ceiling_functions integer part] and $(\hatpsi_{k,(m)},\ 1 \leq m \leq M)$ the order statistics, i.e., the parameters $(\hatpsi_k^{(m)},\ 1 \leq m \leq M)$ reordered so that $\hatpsi_{k,(1)} \leq \hatpsi_{k,(2)} \leq \ldots \leq \hatpsi_{k,(M)}$.
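This procedure is straightforward to implement. Here is a minimal sketch (Python with NumPy/SciPy; the exponential structural model, the constant error model and all numerical values are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
t = np.arange(0.0, 16.0)

def model_f(t, A, k):
    return A * np.exp(-k * t)

def fit(y):
    """MLE of psi = (A, k, a) under a constant error model g = a."""
    def minus2LL(psi):
        A, k, a = psi
        return np.sum(np.log(a**2) + ((y - model_f(t, A, k)) / a) ** 2)
    return minimize(minus2LL, x0=[5.0, 0.1, 1.0], method="Nelder-Mead").x

y = model_f(t, 10.0, 0.3) + 0.3 * rng.standard_normal(t.size)
psi_hat = fit(y)

# Monte Carlo: simulate M replicate data sets from the fitted model,
# refit each one, and take empirical quantiles of the replicated estimates.
M = 200
psi_rep = np.array([fit(model_f(t, psi_hat[0], psi_hat[1])
                        + psi_hat[2] * rng.standard_normal(t.size))
                    for _ in range(M)])
ci_k = np.quantile(psi_rep[:, 1], [0.025, 0.975])   # 95% interval for k
```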
  
 
<br>

==A PK example==
 
  
In the real world, it is often not enough to look at the data, choose one possible model and estimate the parameters. The chosen structural model may or may not be "good" at representing the data. It may be good but the chosen residual error model bad, meaning that the overall model is poor, and so on. That is why in practice we may want to try out several structural and residual error models. After performing parameter estimation for each model, various assessment tasks can then be performed in order to conclude which model is best.
 
  
<br>
===The data===
  
This modeling process is illustrated in detail in the following [http://en.wikipedia.org/wiki/Pharmacokinetics PK] example. Let us consider a dose $D=50$ mg of a drug administered orally to a patient at time $t=0$. The concentration of the drug in the bloodstream is then measured at times $(t_j) = (0.5,\,1,\,1.5,\,2,\,3,\,4,\,8,\,10,\,12,\,16,\,20,\,24).$ Here is the file {{Verbatim|individualFitting_data.txt}} with the data:
  
  
{| class="wikitable" align="center" style="width: 30%;margin-left:15em"
! Time !! Concentration
|-
| 0.5 || 0.94
|-
| 1.0 || 1.30
|-
| 1.5 || 1.64
|}
| style="width: 50%" | <code><div style ="background-color:#EFEFEF; font-family:'courier new';font-size:12pt;">deviance1=pk.nlm1$minimum + n*log(2*pi) </div></code>  bic1=deviance1+log(n)*length(psi1) <br><br> <code><div style ="background-color:#EFEFEF; font-family:'courier new';font-size:12pt;"> deviance2=pk.nlm2$minimum + n*log(2*pi) </div></code>  bic2=deviance2+log(n)*length(psi2) ||
 
| style="width: 50%" | <div style="color:red"> > cat(" bic1 =",bic1,"\n\n") </div> <div style="color:blue"> bic1 = 24.10972 </div> <br><br> <div style="color:red">  > cat(" bic2 =",bic2,"\n\n") </div> <div style="color:blue"> bic2 = 11.24769</div>
 
|}
 
 
 
 
 
 
 
 
 
 
 
 
 
We have only considered for the moment constant error models. Nevertheless, the graphic "observations vs predictions" seems to indicate that the amplitude of the residual errors increase with the prediction. We will then consider four different residual error models associated with the same structural model $f_2$.
 
 
 
 
 
{| align=left; cellpadding="3" style="width: 800px;
 
| ${\cal M}_2$, || constant error model:    ||  $y_j=f_2(t_j;\phi_2)+a_2\teps_j$
 
 
|-
 
|-
| ${\cal M}_3$, ||   proportional error model: || $y_j=f_2(t_j;\phi_3)+b_3f_2(t_j;\phi_3)\teps_j$
+
|   2.0     ||       3.38
 
|-
 
|-
| ${\cal M}_4$, |combined error model:     || $y_j=f_2(t_j;\phi_4)+(a_4+b_4f_2(t_j;\phi_4))\teps_j$
+
3.0     ||       3.72
 
|-
 
|-
|${\cal M}_5$, ||  exponential error model:   || $\log(y_j)=\log(f_2(t_j;\phi_5)) + a_5\teps_j$
+
4.0     ||       3.29
|}
 
 
 
 
 
 
 
{| align=left; style="width: 700px; background-color:#EFEFEF; font-family:'courier new';font-size:12pt;"
 
|fmin3=function(x,y,t) <br> { 
 
: f=predc2(t,x)
 
: g=x[4]*f <br> e=sum( ((y-f)/g)^2 + log(g^2)) <br>
 
} <br>
 
 
|-
 
|-
|fmin4=function(x,y,t) <br> {    
+
8.0     ||      1.31
: f=predc2(t,x)
 
: g=abs(x[4])+abs(x[5])*f <br> e=sum( ((y-f)/g)^2 + log(g^2)) <br>
 
} <br>
 
 
|-
 
|-
|fmin5=function(x,y,t) <br> { 
+
10.0     ||       0.80
: f=predc2(t,x)
 
: g=x[4] <br> e=sum( ((log(y)-log(f))/g)^2 + log(g^2)) <br>
 
}
 
|}
 
 
 
 
 
 
 
 
 
We can now compute $\hpsi_3=(\hphi_3,\hat{b}_3)$, $\hpsi_4=(\hphi_4,\hat{a}_4,,\hat{b}_4)$ and $\hpsi_5=(\hphi_5,\hat{a}_5)$, the MLEs of $\psi$  under models ${\cal M}_3$, ${\cal M}_4$ and ${\cal M}_5$.
 
 
 
 
 
 
 
{| align=left; cellpadding="2" style="width: 1100px; background-color:#EFEFEF; font-family:'courier new';font-size:12pt;"
 
| colspan = "2" style="align:center;" | fmin3=function(x,y,t) <br> {
 
: f=predc2(t,x) <br> g=x[4]*f
 
: e=sum( ((y-f)/g)^2 + log(g^2))
 
} <br>
 
 
|-
 
|-
| colspan = "2" style="align:center;" | fmin4=function(x,y,t) <br> {
+
| 12.0     ||      0.39
: f=predc2(t,x)
 
: g=abs(x[4])+abs(x[5])*f <br>e=sum( ((y-f)/g)^2 + log(g^2))
 
} <br>
 
 
|-
 
|-
| colspan = "2" style="align:center;" | fmin5=function(x,y,t) <br> { 
+
| 16.0     ||      0.31
: f=predc2(t,x)
 
: g=x[4] <br> e=sum( ((log(y)-log(f))/g)^2 + log(g^2)) 
 
} <br>
 
 
|-
 
|-
| colspan = "2" style="align:center;" | #-------- MLE --------#
+
| 20.0     ||      0.10
 
|-
 
|-
|<br>
+
24.0     ||       0.09
|-
 
| style="width: 450px; text-align:left; | pk.nlm3=nlm(fmin3,c(phi2,0.1),y,t,hessian="true") <br> <code><div style="font-family:'courier new';font-size:12pt;">psi3=pk.nlm3$estimate</div></code> <br> pk.nlm4=nlm(fmin4,c(phi2,1,0.1),y,t,hessian="true") <br> <code><div style="font-family:'courier new';font-size:12pt;">psi4=pk.nlm4$estimate</div></code> psi4[c(4,5)]=abs(psi4[c(4,5)]) <br><br> pk.nlm5=nlm(fmin5, c(phi2,0.1),y,t,hessian="true") <br> <code><div style="font-family:'courier new';font-size:12pt;"> psi5=pk.nlm5$estimate </div></code>
 
||  
 
<div style="color:red"> > cat(" psi3 =",psi3,"\n\n") </div> <div style="color:blue">  psi3 = 2.642409 11.44113 0.1838779 0.2189221 </div><br> <div style="color:red"> > cat(" psi4 =",psi4,"\n\n")</div>  <div style="color:blue"> psi4 = 2.890066 10.16836 0.2068221 0.02741416 0.1456332
 
</div><br> <div style="color:red"> > cat(" psi5 =",psi5,"\n\n")</div> <div style="color:blue"> psi5 = 2.710984 11.2744 0.188901 0.2310001 </div>
 
 
|}
 
|}
  
  
 
We are going to perform the analyses for this example with the free statistical software [http://www.r-project.org/  {{Verbatim|R}}]. First, we import the data and plot it to have a look:

{| cellpadding="5" cellspacing="0"
| style="width: 50%" |
[[File:NewIndividual1.png|link=]]
| style="width: 50%" | {{RcodeForTable
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
pk1=read.table("individualFitting_data.txt", header=T)
t=pk1$time
y=pk1$concentration
n=length(y)
plot(t, y, xlab="time (hour)",
    ylab="concentration (mg/l)", col="blue")
</pre> }}
|}


<br>

===Fitting two PK models===
We are going to consider two possible structural models that may describe the observed time-course of the concentration:

<ul>
* A [http://en.wikipedia.org/wiki/Multi-compartment_model#Single-compartment_model one compartment model] with first-order [http://en.wikipedia.org/wiki/Absorption_%28pharmacokinetics%29 absorption] and linear elimination:

{{Equation1
|equation=<math>\begin{eqnarray}
\phi_1 &=& (k_a, V, k_e) \\
f_1(t ; \phi_1) &=& \frac{D\, k_a}{V(k_a-k_e)} \left( e^{-k_e \, t} - e^{-k_a \, t} \right).
\end{eqnarray}</math> }}

* A one compartment model with zero-order absorption and linear elimination:

{{Equation1
|equation=<math>\begin{eqnarray}
\phi_2 &=& (T_{k0}, V, k_e) \\
f_2(t ; \phi_2) &=& \left\{  \begin{array}{ll}
\displaystyle{ \frac{D}{V \,T_{k0} \, k_e} }\left( 1- e^{-k_e \, t} \right) & {\rm if }\ t\leq T_{k0} \\
\displaystyle{ \frac{D}{V \,T_{k0} \, k_e} } \left( 1- e^{-k_e \, T_{k0} } \right)e^{-k_e \, (t- T_{k0})} & {\rm otherwise} .
\end{array}
\right.
\end{eqnarray}</math> }}
</ul>

We define each of these functions in {{Verbatim|R}}:
{{Rcode
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
predc1=function(t,x){
  f=50*x[1]/x[2]/(x[1]-x[3])*(exp(-x[3]*t)-exp(-x[1]*t))
return(f)}

predc2=function(t,x){
  f=50/x[1]/x[2]/x[3]*(1-exp(-x[3]*t))
  f[t>x[1]]=50/x[1]/x[2]/x[3]*(1-exp(-x[3]*x[1]))*exp(-x[3]*(t[t>x[1]]-x[1]))
return(f)}
</pre>
}}

We then define two models ${\cal M}_1$ and ${\cal M}_2$ that assume (for now) constant residual error models:

{{Equation1
|equation=<math>\begin{eqnarray}
{\cal M}_1  : \quad y_j & = & f_1(t_j ; \phi_1) + a_1\teps_j \\
{\cal M}_2  : \quad y_j & = & f_2(t_j ; \phi_2) + a_2\teps_j .
\end{eqnarray}</math> }}

We can fit these two models to our data by computing $\hpsi_1=(\hphi_1,\hat{a}_1)$ and $\hpsi_2=(\hphi_2,\hat{a}_2)$, the MLEs of $\psi$ under models ${\cal M}_1$ and ${\cal M}_2$:
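Concretely, the objective function minimized by these fitting routines is, up to the additive constant $n\log(2\pi)$, the deviance $-2LL$ of a model with Gaussian residual errors of standard deviation $g(t_j;\psi)$ (here $g \equiv a$ is constant):

{{Equation1
|equation=<math>
-2 LL(\psi ; y_1,y_2,\ldots,y_n) = \sum_{j=1}^n \log\left(g(t_j ; \psi)^2\right) + \sum_{j=1}^n \left(\frac{y_j - f(t_j ; \psi)}{g(t_j ; \psi)}\right)^2 + n\log(2\pi) .
</math> }}

Minimizing this quantity with {{Verbatim|nlm}} therefore yields the maximum likelihood estimate.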
  
{| cellpadding="10" cellspacing="10"
| style="width:50%" |
{{RcodeForTable
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
fmin1=function(x,y,t){
  f=predc1(t,x)
  g=x[4]
  e=sum( ((y-f)/g)^2 + log(g^2))
return(e)}

fmin2=function(x,y,t){
  f=predc2(t,x)
  g=x[4]
  e=sum( ((y-f)/g)^2 + log(g^2))
return(e)}

#--------- MLE --------------------------------

pk.nlm1=nlm(fmin1, c(0.3,6,0.2,1), y, t, hessian="true")
psi1=pk.nlm1$estimate

pk.nlm2=nlm(fmin2, c(3,10,0.2,4), y, t, hessian="true")
psi2=pk.nlm2$estimate
</pre>
}}
| style="width:50%" |
:Here are the parameter estimation results:

{{JustCodeForTable
|code=
<pre style="background-color: #EFEFEF; border:none; color:blue">
> cat(" psi1 =",psi1,"\n\n")
psi1 = 0.3240916 6.001204 0.3239337 0.4366948

> cat(" psi2 =",psi2,"\n\n")
psi2 = 3.203111 8.999746 0.229977 0.2555242
</pre> }}
|}
<br>

===Assessing and selecting the PK model===

The estimated parameters $\hphi_1$ and $\hphi_2$ can then be used for computing the predicted concentrations $\hat{f}_1(t)$ and $\hat{f}_2(t)$ under both models at any time $t$. These curves can then be plotted over the original data and compared:
{| cellpadding="5" cellspacing="0"
| style="width:50%" |
[[File:New_Individual2.png|link=]]
| style="width:50%" |
{{RcodeForTable
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
tc=seq(from=0,to=25,by=0.1)
phi1=psi1[c(1,2,3)]
fc1=predc1(tc,phi1)
phi2=psi2[c(1,2,3)]
fc2=predc2(tc,phi2)

plot(t,y,ylim=c(0,4.1),xlab="time (hour)",
          ylab="concentration (mg/l)",col = "blue")
lines(tc,fc1, type = "l", col = "green", lwd=2)
lines(tc,fc2, type = "l", col = "red", lwd=2)
abline(a=0,b=0,lty=2)
legend(13,4,c("observations","first order absorption",
          "zero order absorption"),
lty=c(-1,1,1), pch=c(1,-1,-1), lwd=2, col=c("blue","green","red"))
</pre> }}
|}

We clearly see that a much better fit is obtained with model ${\cal M}_2$, i.e., the one assuming a zero-order absorption process.

Another useful goodness-of-fit plot is obtained by displaying the observations $(y_j)$ versus the predictions $\hat{y}_j=f(t_j ; \hpsi)$ given by the models:
{| cellpadding="5" cellspacing="0"
| style="width:50%" |
[[File:individual3.png|link=]]
| style="width:50%" |
{{RcodeForTable
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
f1=predc1(t,phi1)
f2=predc2(t,phi2)

par(mfrow= c(1,2))
plot(f1,y,xlim=c(0,4),ylim=c(0,4),main="model 1")
abline(a=0,b=1,lty=1)
plot(f2,y,xlim=c(0,4),ylim=c(0,4),main="model 2")
abline(a=0,b=1,lty=1)
</pre> }}
|}


<br>

===Model selection===

Again, ${\cal M}_2$ would seem to have a slight edge. This can be tested more analytically using the [http://en.wikipedia.org/wiki/Bayesian_information_criterion Bayesian Information Criterion] (BIC):
{| cellpadding="10" cellspacing="10"
| style="width:50%" |
{{RcodeForTable
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
deviance1=pk.nlm1$minimum + n*log(2*pi)
bic1=deviance1+log(n)*length(psi1)
deviance2=pk.nlm2$minimum + n*log(2*pi)
bic2=deviance2+log(n)*length(psi2)
</pre> }}
| style="width:50%" |
{{JustCodeForTable
|code=
<pre style="background-color: #EFEFEF; border:none; color:blue">
> cat(" bic1 =",bic1,"\n\n")
bic1 = 24.10972

> cat(" bic2 =",bic2,"\n\n")
bic2 = 11.24769
</pre> }}
|}

A smaller BIC is better. Therefore, this also suggests that model ${\cal M}_2$ should be selected.
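For reference, the criterion computed here for a model with $d$ estimated parameters fitted to $n$ observations is

{{Equation1
|equation=<math>
BIC = -2 LL(\hpsi ; y_1,\ldots,y_n) + d \, \log(n) ,
</math> }}

which is what the code evaluates as the deviance plus {{Verbatim|log(n)*length(psi)}}.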
  
<br>

===Fitting different error models===
For the moment, we have only considered constant error models. However, the "observations vs predictions" figure hints that the amplitude of the residual errors may increase with the size of the predicted value. Let us therefore take a closer look at four different residual error models, each of which we will associate with the "best" structural model $f_2$:

{| cellpadding="2" cellspacing="8" style="text-align:left; margin-left:4%"
|${\cal M}_2$ || Constant error model: || $y_j=f_2(t_j;\phi_2)+a_2\teps_j$
|-
|${\cal M}_3$ || Proportional error model: || $y_j=f_2(t_j;\phi_3)+b_3f_2(t_j;\phi_3)\teps_j$
|-
|${\cal M}_4$ || Combined error model: || $y_j=f_2(t_j;\phi_4)+(a_4+b_4f_2(t_j;\phi_4))\teps_j$
|-
|${\cal M}_5$ || Exponential error model: || $\log(y_j)=\log(f_2(t_j;\phi_5)) + a_5\teps_j$.
|}

The three new ones need to be entered into {{Verbatim|R}}:
{{Rcode
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
fmin3=function(x,y,t){
  f=predc2(t,x)
  g=x[4]*f
  e=sum( ((y-f)/g)^2 + log(g^2))
return(e)}

fmin4=function(x,y,t){
  f=predc2(t,x)
  g=abs(x[4])+abs(x[5])*f
  e=sum( ((y-f)/g)^2 + log(g^2))
return(e)}

fmin5=function(x,y,t){
  f=predc2(t,x)
  g=x[4]
  e=sum( ((log(y)-log(f))/g)^2 + log(g^2))
return(e)}
</pre> }}

We can now compute $\hpsi_3=(\hphi_3,\hat{b}_3)$, $\hpsi_4=(\hphi_4,\hat{a}_4,\hat{b}_4)$ and $\hpsi_5=(\hphi_5,\hat{a}_5)$, the MLEs of $\psi$ under models ${\cal M}_3$, ${\cal M}_4$ and ${\cal M}_5$:
{| cellpadding="10" cellspacing="10"
| style="width:50%" |
{{RcodeForTable
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
#----------------  MLE  -------------------

pk.nlm3=nlm(fmin3, c(phi2,0.1), y, t,
      hessian="true")
psi3=pk.nlm3$estimate

pk.nlm4=nlm(fmin4, c(phi2,1,0.1), y, t,
      hessian="true")
psi4=pk.nlm4$estimate
psi4[c(4,5)]=abs(psi4[c(4,5)])

pk.nlm5=nlm(fmin5, c(phi2,0.1), y, t,
      hessian="true")
psi5=pk.nlm5$estimate
</pre> }}
| style="width:50%" |
{{JustCodeForTable
|code=<pre style="background-color: #EFEFEF; border:none; color:blue">
> cat(" psi3 =",psi3,"\n\n")
psi3 = 2.642409 11.44113 0.1838779 0.2189221

> cat(" psi4 =",psi4,"\n\n")
psi4 = 2.890066 10.16836 0.2068221 0.02741416 0.1456332

> cat(" psi5 =",psi5,"\n\n")
psi5 = 2.710984 11.2744 0.188901 0.2310001
</pre> }}
|}


<br>

===Selecting the error model===
As before, these curves can be plotted over the original data and compared:

{| cellpadding="5" cellspacing="0"
| style="width:50%" |
[[File:New_Individual4.png|link=]]
| style="width:50%" |
{{RcodeForTable
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
phi3=psi3[c(1,2,3)]
fc3=predc2(tc,phi3)
phi4=psi4[c(1,2,3)]
fc4=predc2(tc,phi4)
phi5=psi5[c(1,2,3)]
fc5=predc2(tc,phi5)

par(mfrow= c(1,1))
plot(t,y,ylim=c(0,4.1),xlab="time (hour)",ylab="concentration (mg/l)",
        col = "blue")
lines(tc,fc2, type = "l", col = "red", lwd=2)
lines(tc,fc3, type = "l", col = "green", lwd=2)
lines(tc,fc4, type = "l", col = "cyan", lwd=2)
lines(tc,fc5, type = "l", col = "magenta", lwd=2)
abline(a=0,b=0,lty=2)
legend(13,4,c("observations","constant error model",
        "proportional error model","combined error model",
        "exponential error model"),
lty=c(-1,1,1,1,1), pch=c(1,-1,-1,-1,-1), lwd=2,
        col=c("blue","red","green","cyan","magenta"))
</pre> }}
|}
As you can see, the three predicted concentrations obtained with models ${\cal M}_3$, ${\cal M}_4$ and ${\cal M}_5$ are quite similar. We now calculate the BIC for each:

{| cellpadding="10" cellspacing="10"
| style="width:50%" |
{{RcodeForTable
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
deviance3=pk.nlm3$minimum + n*log(2*pi)
bic3=deviance3 + log(n)*length(psi3)
deviance4=pk.nlm4$minimum + n*log(2*pi)
bic4=deviance4 + log(n)*length(psi4)
deviance5=pk.nlm5$minimum + 2*sum(log(y)) + n*log(2*pi)
bic5=deviance5 + log(n)*length(psi5)
</pre> }}
| style="width:50%" |
{{JustCodeForTable
|code=
<pre style="background-color: #EFEFEF; border:none; color:blue">
> cat(" bic3 =",bic3,"\n\n")
bic3 = 3.443607

> cat(" bic4 =",bic4,"\n\n")
bic4 = 3.475841

> cat(" bic5 =",bic5,"\n\n")
bic5 = 4.108521
</pre> }}
|}

All of these BIC values are lower than that of the constant error model; BIC selects ${\cal M}_3$, the residual error model with a proportional component.
There is not a large difference between these three error models, though the proportional and combined error models give the smallest and essentially identical BIC.  We decide to use the combined error model ${\cal M}_4$ in the following (the same types of analysis could be done with the proportional error model).
  
A 90% confidence interval for $\psi_4$ can be derived from the Hessian (i.e., the square matrix of second-order partial derivatives) of the objective function (i.e., $-2 \times LL$):
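In terms of formulas, the interval computed for each component $\psi_{4,k}$ is the classical Wald-type interval

{{Equation1
|equation=<math>
\hpsi_{4,k} \pm t_{df,\,0.95} \ \widehat{\rm se}_k , \qquad \widehat{\rm se}_k = \sqrt{ \left(I(\hpsi_4)^{-1}\right)_{kk} \, \frac{n}{df} } , \qquad df = n-3 ,
</math> }}

where $I(\hpsi_4)$ is half the Hessian of the deviance returned by {{Verbatim|nlm}}, and the factor $n/df$ is a small-sample correction.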
{| cellpadding="10" cellspacing="10"
| style="width:50%" |
{{RcodeForTable
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
ialpha=0.9
df=n-length(phi4)
I4=pk.nlm4$hessian/2
H4=solve(I4)
s4=sqrt(diag(H4)*n/df)
delta4=s4*qt(0.5+ialpha/2, df)
ci4=matrix(c(psi4-delta4,psi4+delta4),ncol=2)
</pre> }}
| style="width:50%" |
{{JustCodeForTable
|code=
<pre style="background-color: #EFEFEF; border:none; color:blue">
> ci4
            [,1]        [,2]
[1,]  2.22576690  3.55436561
[2,]  7.93442421 12.40228967
[3,]  0.16628224  0.24736196
[4,] -0.02444571  0.07927403
[5,]  0.04119983  0.25006660
</pre>}}
|}
  
We can also calculate a 90% confidence interval for $f_4(t)$ using the [http://en.wikipedia.org/wiki/Central_limit_theorem Central Limit Theorem] (see [[#intro_individualCLT|(3)]]):

{{Rcode
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
nlpredci=function(phi,f,H){
  dphi=length(phi)
  nf=length(f)
  H=H*n/(n-dphi)
  S=H[seq(1,dphi),seq(1,dphi)]
  G=matrix(nrow=nf,ncol=dphi)
  for (k in seq(1,dphi)) {
    dk=phi[k]*(1e-5)
    phid=phi
    phid[k]=phi[k] + dk
    fd=predc2(tc,phid)
    G[,k]=(f-fd)/dk
  }
  M=rowSums((G%*%S)*G)
  deltaf=sqrt(M)*qt(0.5+ialpha/2,df)
return(deltaf)}

deltafc4=nlpredci(phi4,fc4,H4)
</pre>}}
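The function {{Verbatim|nlpredci}} is a finite-difference implementation of the delta method: writing $\nabla f(t;\hphi)$ for the gradient of the structural model with respect to its parameters, the variance of the predicted concentration is approximated by

{{Equation1
|equation=<math>
{\rm Var}\left(f(t ; \hphi)\right) \approx \nabla f(t ; \hphi)^\prime \, {\rm Var}(\hphi) \, \nabla f(t ; \hphi) ,
</math> }}

and the half-width {{Verbatim|deltaf}} is the resulting standard error multiplied by the appropriate Student quantile.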
  
This can then be plotted:
{| cellpadding="5" cellspacing="0"
| style="width:50%" |
[[File:NewIndividual6.png|link=]]
| style="width:50%" |
{{RcodeForTable
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
plot(t,y,ylim=c(0,4.5), xlab="time (hour)",
      ylab="concentration (mg/l)", col="blue")
lines(tc,fc4, type = "l", col = "red", lwd=2)
lines(tc, fc4-deltafc4, type = "l",
      col = "red", lwd=1, lty=3)
lines(tc,fc4+deltafc4, type = "l",
      col = "red", lwd=1, lty=3)
abline(a=0,b=0,lty=2)
legend(10.5,4.5,c("observed concentrations",
      "predicted concentration",
      "CI for predicted concentration"),
      lty=c(-1,1,3),pch=c(1,-1,-1),lwd=c(2,2,1),
      col=c("blue","red","red"))
</pre> }}
|}
Alternatively, prediction intervals for $\hpsi_4$, $\hat{f}_4(t;\hpsi_4)$ and new observations at any time $t$ can be estimated by Monte Carlo simulation:

{{Rcode
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
f=predc2(t,phi4)
a4=psi4[4]
b4=psi4[5]
g=a4+b4*f
dpsi=length(psi4)
nc=length(tc)
N=1000
qalpha=c(0.5 - ialpha/2,0.5 + ialpha/2)
PSI=matrix(nrow=N,ncol=dpsi)
FC=matrix(nrow=N,ncol=nc)
Y=matrix(nrow=N,ncol=nc)
for (k in seq(1,N)) {
  eps=rnorm(n)
  ys=f+g*eps
  pk.nlm=nlm(fmin4, psi4, ys, t)
  psie=pk.nlm$estimate
  psie[c(4,5)]=abs(psie[c(4,5)])
  PSI[k,]=psie
  fce=predc2(tc,psie[c(1,2,3)])
  FC[k,]=fce
  gce=a4+b4*fce
  Y[k,]=fce + gce*rnorm(1)
}

ci4s=matrix(nrow=dpsi,ncol=2)
for (k in seq(1,dpsi)){
  ci4s[k,]=quantile(PSI[,k],qalpha,names=FALSE)
}
m4s=colMeans(PSI)
sd4s=apply(PSI,2,sd)

cifc4s=matrix(nrow=nc,ncol=2)
for (k in seq(1,nc)){
  cifc4s[k,]=quantile(FC[,k],qalpha,names=FALSE)
}

ciy4s=matrix(nrow=nc,ncol=2)
for (k in seq(1,nc)){
  ciy4s[k,]=quantile(Y[,k],qalpha,names=FALSE)
}

par(mfrow= c(1,1))
plot(t,y,ylim=c(0,4.5),xlab="time (hour)",
      ylab="concentration (mg/l)",col = "blue")
lines(tc,fc4, type = "l", col = "red", lwd=2)
lines(tc,cifc4s[,1], type = "l", col = "red", lwd=1, lty=3)
lines(tc,cifc4s[,2], type = "l", col = "red", lwd=1, lty=3)
lines(tc,ciy4s[,1], type = "l", col = "green", lwd=1, lty=3)
lines(tc,ciy4s[,2], type = "l", col = "green", lwd=1, lty=3)
abline(a=0,b=0,lty=2)
legend(10.5,4.5,c("observed concentrations", "predicted concentration",
      "CI for predicted concentration", "CI for observed concentrations"),
      lty=c(-1,1,3,3), pch=c(1,-1,-1,-1), lwd=c(2,2,1,1), col=c("blue","red","red","green"))
</pre> }}
  
  
[[Image:graf1.png|center|500px]]
+
{| cellpadding="5" cellspacing="0"
 +
|style="width=50%"|
 +
[[File:NewIndividual7.png|link=]]
 +
|style="width=50%"|
 +
{{JustCodeForTable
 +
|code=
 +
<pre style="background-color: #EFEFEF; border:none; color:blue">
 +
> ci4s
 +
            [,1]        [,2]
 +
[1,] 2.350653e+00  3.53526320
 +
[2,] 8.350764e+00 12.04910579
 +
[3,] 1.818431e-01  0.24156832
 +
[4,] 5.445459e-09  0.08819339
 +
[5,] 1.563625e-02  0.19638889
 +
</pre> }}
 +
|}
  
  
We consider the three following structural models:
+
The R code and input data used in this section can be downloaded here: {{filepath:R_IndividualFitting.rar}}.
 +
<br>
  
;1. One compartment model
+
==Bibliography==
  
::<div style="text-align: left;font-size: 12pt"><math>
 
f_1(t ; V,k_e) = \frac{D}{V} e^{-k_e \, t}
 
</math></div>
 
  
 +
<bibtex>
 +
@book{buonaccorsi2010measurement,
 +
  title={Measurement Error: Models, Methods, and Applications},
 +
  author={Buonaccorsi, J.P.},
 +
  isbn={9781420066586},
 +
  lccn={2009048849},
 +
  series={Chapman & Hall/CRC Interdisciplinary Statistics},
 +
  url={http://books.google.fr/books?id=QVtVmaCqLHMC},
 +
  year={2010},
 +
  publisher={Taylor & Francis}
 +
}
 +
</bibtex><bibtex>
 +
@book{carroll2010measurement,
 +
  title={Measurement Error in Nonlinear Models: A Modern Perspective, Second Edition},
 +
  author={Carroll, R.J. and Ruppert, D. and Stefanski, L.A. and Crainiceanu, C.M.},
 +
  isbn={9781420010138},
 +
  lccn={2006045485},
 +
  series={Chapman & Hall/CRC Monographs on Statistics & Applied Probability},
 +
  url={http://books.google.fr/books?id=9kBx5CPZCqkC},
 +
  year={2010},
 +
  publisher={Taylor & Francis}
 +
}
 +
</bibtex>
  
 
+
<bibtex>
;2. Two compartments model
+
@book{fitzmaurice2004applied,
 
+
  title={Applied Longitudinal Analysis},
::<div style="text-align: left;font-size: 12pt"><math>
+
  author={Fitzmaurice, G.M. and Laird, N.M. and Ware, J.H.},
f_2(t ; V_1,V_2,k_1,k_2) = \frac{D}{V_1} e^{-k_1 \, t} + \frac{D}{V_2} e^{-k_2 \, t}
+
  isbn={9780471214878},
</math></div>
+
  lccn={04040891},
 
+
  series={Wiley Series in Probability and Statistics},
 
+
  url={http://books.google.fr/books?id=gCoTIFejMgYC},
 
+
  year={2004},
;3. Polynomial model
+
  publisher={Wiley}
 
+
}
::<div style="text-align: left;font-size: 12pt"><math>
+
</bibtex>
f_3(t ; V,\alpha,\beta,\gamma) = \frac{1}{V}(D-\alpha t - \beta t^2 - \gamma t^3)
+
<bibtex>
</math></div>
+
@book{gallant2009nonlinear,
 
+
  title={Nonlinear Statistical Models},
 
+
  author={Gallant, A.R.},
 
+
  isbn={9780470317372},
and the four following residual error models:
+
  series={Wiley Series in Probability and Statistics},
{| align=left; style="width: 400px" cellpadding="8" cellspacing="0"
+
  url={http://books.google.fr/books?id=imv-NMozseEC},
   | - constant error model  || $g=a$,
+
  year={2009},
   |-
+
  publisher={Wiley}
   | - proportional error model || $g=b\, f$,
+
}
   |-
+
</bibtex>
   | - combined error model   || $g=a+b f$,
+
<bibtex>
|}
+
@book{huet2003statistical,
 
+
  title={Statistical tools for nonlinear regression: a practical guide with S-PLUS and R examples},
 +
  author={Huet, S. and Bouvier, A. and Poursat, M.A. and Jolivet, E.},
 +
  year={2003},
 +
  publisher={Springer}
 +
}
 +
</bibtex>
 +
<bibtex>
 +
@book{ritz2008nonlinear,
 +
  title={Nonlinear regression with R},
 +
  author={Ritz, C. and Streibig, J.C.},
 +
  volume={33},
 +
  year={2008},
 +
  publisher={Springer New York}
 +
}
 +
</bibtex>
 +
<bibtex>
 +
@book{ross1990nonlinear,
 +
  title={Nonlinear estimation},
 +
  author={Ross, G.J.S.},
 +
  isbn={9780387972787},
 +
  lccn={90032797},
 +
  series={Springer series in statistics},
 +
  url={http://books.google.fr/books?id=7LkyzdLMghIC},
 +
  year={1990},
 +
  publisher={Springer-Verlag}
 +
}
 +
</bibtex>
 +
<bibtex>
 +
@book{seber2003nonlinear,
 +
  title={Nonlinear Regression},
 +
  author={Seber, G.A.F. and Wild, C.J.},
 +
  isbn={9780471471356},
 +
  lccn={88017194},
 +
  series={Wiley Series in Probability and Statistics},
 +
  url={http://books.google.fr/books?id=YBYlCpBNo\_cC},
 +
  year={2003},
 +
  publisher={Wiley}
 +
}
 +
</bibtex><bibtex>
 +
@article{serroyen2009nonlinear,
 +
  title={Nonlinear models for longitudinal data},
 +
  author={Serroyen, J. and Molenberghs, G. and Verbeke, G. and Davidian, M. },
 +
  journal={The American Statistician},
 +
  volume={63},
 +
  number={4},
 +
  pages={378-388},
 +
  year={2009},
 +
   publisher={Taylor & Francis}
 +
}
 +
</bibtex>
 +
<bibtex>
 +
@book{wolberg2006data,
 +
   title={Data analysis using the method of least squares: extracting the most information from experiments},
 +
   author={Wolberg, J.R.},
 +
   volume={1},
 +
   year={2006},
 +
   publisher={Springer Berlin, Germany}
 +
}
 +
</bibtex>
  
  
<u>Extension:</u> '''$u(y_j)$''' normally distributed instead of $y_j$
+
{{Back&Next
 
+
|linkBack=Overview
::<div style="text-align: left;font-size: 12pt"><math>
+
|linkNext=What is a model? A joint probability distribution! }}
u(y_{j}) = u(f(t_j ; \psi)) +  g(t_j ; \psi)\bar{\varepsilon_j} \quad ; \quad  1\leq j \leq n
 
</math></div>
 
 
 
{| align=left; style="width: 400px" cellpadding="8" cellspacing="0"
 
  |  - exponential error model || $\log(y)=\log(f) + a\, \bar{\varepsilon}$
 
|}
 

Latest revision as of 14:58, 28 August 2013

Overview

Before we start looking at modeling a whole population at the same time, we are going to consider only one individual from that population. Much of the basic methodology for modeling one individual follows through to population modeling. We will see that when stepping up from one individual to a population, the difference is that some parameters shared by individuals are considered to be drawn from a probability distribution.

Let us begin with a simple example. An individual receives 100mg of a drug at time $t=0$. At that time and then every hour for fifteen hours, the concentration of a marker in the bloodstream is measured and plotted against time:

New Individual1.png

We aim to find a mathematical model to describe what we see in the figure. The eventual goal is then to extend this approach to the simultaneous modeling of a whole population.



Model and methods for the individual approach


Defining a model

In our example, the concentration is a continuous variable, so we will try to use continuous functions to model it. Different types of data (e.g., count data, categorical data, time-to-event data, etc.) require different types of models. All of these data types will be considered in due time, but for now let us concentrate on a continuous data model.

A model for continuous data can be represented mathematically as follows:

\( y_{j} = f(t_j ; \psi) + e_j, \quad \quad 1\leq j \leq n, \)

where:


  • $f$ is called the structural model. It corresponds to the basic type of curve we suspect the data is following, e.g., linear, logarithmic, exponential, etc. Sometimes, a model of the associated biological processes leads to equations that define the curve's shape.
  • $(t_1,t_2,\ldots , t_n)$ is the vector of observation times. Here, $t_1 = 0$ hours and $t_n = t_{16} = 15$ hours.
  • $\psi=(\psi_1, \psi_2, \ldots, \psi_d)$ is a vector of $d$ parameters that influences the value of $f$.
  • $(e_1, e_2, \ldots, e_n)$ are called the residual errors. Usually, we suppose that they come from some centered probability distribution: $\esp{e_j} =0$.


In fact, we usually state a continuous data model in a slightly more flexible way:

\( y_{j} = f(t_j ; \psi) + g(t_j ; \psi)\teps_j , \quad \quad 1\leq j \leq n, \)
(1)

where now:


    • $g$ is called the residual error model. It may be a function of the time $t_j$ and parameters $\psi$.
    • $(\teps_1, \teps_2, \ldots, \teps_n)$ are the normalized residual errors. We suppose that these come from a probability distribution which is centered and has unit variance: $\esp{\teps_j} = 0$ and $\var{\teps_j} =1$.


Choosing a residual error model

The choice of a residual error model $g$ is very flexible, and allows us to account for many different hypotheses we may have on the error's distribution. Let $f_j=f(t_j;\psi)$. Here are some simple error models.


    • Constant error model: $g=a$. That is, $y_j=f_j+a\teps_j$.
    • Proportional error model: $g=b\,f$. That is, $y_j=f_j+bf_j\teps_j$. This is for when we think the magnitude of the error is proportional to the predicted value $f$.
    • Combined error model: $g=a+b f$. Here, $y_j=f_j+(a+bf_j)\teps_j$.
    • Alternative combined error model: $g^2=a^2+b^2f^2$. Here, $y_j=f_j+\sqrt{a^2+b^2f_j^2}\teps_j$.
    • Exponential error model: here, the model is instead $\log(y_j)=\log(f_j) + a\teps_j$, that is, $g=a$. It is exponential in the sense that if we exponentiate, we end up with $y_j = f_j e^{a\teps_j}$.
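
To make these error models concrete, here is a minimal numerical sketch (in Python for illustration; the analyses below use R) of the residual standard deviation $g$ that each model assigns to a set of predicted values. The predictions $f_j$ and the parameters $a$ and $b$ are made-up numbers, not values from the example below:

```python
import math

# Illustrative values only: made-up predictions f_j and made-up
# error-model parameters a and b.
f = [4.0, 2.0, 0.5]
a, b = 0.1, 0.2

g_constant     = [a for fj in f]                                # g = a
g_proportional = [b * fj for fj in f]                           # g = b*f
g_combined     = [a + b * fj for fj in f]                       # g = a + b*f
g_alt_combined = [math.sqrt(a**2 + b**2 * fj**2) for fj in f]   # g^2 = a^2 + b^2*f^2
```

Note how, under the proportional model, the standard deviation of $y_j$ shrinks with $f_j$, while the combined models keep a floor of (roughly) $a$ even when $f_j$ is near zero.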



Tasks

To model a vector of observations $y = (y_j,\, 1\leq j \leq n)$ we must perform several tasks:

    • Select a structural model $f$ and a residual error model $g$.
    • Estimate the model's parameters $\psi$.
    • Assess and validate the selected model.



Selecting structural and residual error models

As we are interested in parametric modeling, we must choose parametric structural and residual error models. In the absence of biological (or other) information, we might suggest possible structural models just by looking at graphs of the data over time. For example, if $y_j$ increases with time, we might suggest an affine, quadratic or logarithmic model, depending on the approximate trend of the data. If $y_j$ instead decreases ever more slowly toward zero, an exponential model might be appropriate.

However, often we have biological (or other) information to help us make our choice. For instance, if we have a system of differential equations describing how the drug is eliminated from the body, its solution may provide the formula (i.e., structural model) we are looking for.
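
As a simple illustration of this idea (an intravenous bolus case, simpler than the oral-absorption models fitted below): if a dose $D$ enters a single compartment of volume $V$ at $t=0$ and is eliminated at a linear rate $k_e$, the amount $A(t)$ of drug satisfies

\( \frac{dA}{dt}(t) = -k_e \, A(t) , \quad \quad A(0) = D , \)

so $A(t) = D e^{-k_e \, t}$, and the concentration $f = A/V$ gives the structural model

\( f(t ; V, k_e) = \frac{D}{V} \, e^{-k_e \, t} . \)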

As for the residual error model, if it is not immediately obvious which one to choose, several can be tested in conjunction with one or several possible structural models. After parameter estimation, each structural and residual error model pair can be assessed, compared against the others, and/or validated in various ways.

Now we can have a first look at parameter estimation, and further on, model assessment and validation.



Parameter estimation

Given the observed data and the choice of a parametric model to describe it, our goal becomes to find the "best" parameters for the model. A traditional framework to solve this kind of problem is called maximum likelihood estimation or MLE, in which the "most likely" parameters are found, given the data that was observed.

The likelihood $L$ is a function defined as:

\( L(\psi ; y_1,y_2,\ldots,y_n) \ \ \eqdef \ \ \py( y_1,y_2,\ldots,y_n; \psi) , \)

i.e., the conditional joint density function of $(y_j)$ given the parameters $\psi$, but looked at as if the data are known and the parameters not. The $\hat{\psi}$ which maximizes $L$ is known as the maximum likelihood estimator.

Suppose that we have chosen a structural model $f$ and residual error model $g$. If we assume for instance that $ \teps_j \sim_{i.i.d} {\cal N}(0,1)$, then the $y_j$ are independent of each other and (1) means that:

\( y_{j} \sim {\cal N}\left(f(t_j ; \psi) , g(t_j ; \psi)^2\right), \quad \quad 1\leq j \leq n .\)

Due to this independence, the pdf of $y = (y_1, y_2, \ldots, y_n)$ is the product of the pdfs of each $y_j$:

\(\begin{eqnarray} \py(y_1, y_2, \ldots y_n ; \psi) &=& \prod_{j=1}^n \pyj(y_j ; \psi) \\ \\ & = & \frac{1}{\prod_{j=1}^n \sqrt{2\pi} g(t_j ; \psi)} \ {\rm exp}\left\{-\frac{1}{2} \sum_{j=1}^n \left( \displaystyle{ \frac{y_j - f(t_j ; \psi)}{g(t_j ; \psi)} }\right)^2\right\} . \end{eqnarray}\)

This is the same thing as the likelihood function $L$ when seen as a function of $\psi$. Maximizing $L$ is equivalent to minimizing the deviance, i.e., -2 $\times$ the $\log$-likelihood ($LL$):

\(\begin{eqnarray} \hat{\psi} &=& \argmin{\psi} \left\{ -2 \,LL \right\}\\ &=& \argmin{\psi} \left\{ \sum_{j=1}^n \log\left(g(t_j ; \psi)^2\right) + \sum_{j=1}^n \left(\displaystyle{ \frac{y_j - f(t_j ; \psi)}{g(t_j ; \psi)} }\right)^2 \right\} . \end{eqnarray}\)
(2)


This minimization problem does not usually have an analytical solution for nonlinear models, so an optimization procedure needs to be used. However, for a few specific models, analytical solutions do exist.

For instance, suppose we have a constant error model: $y_{j} = f(t_j ; \psi) + a \, \teps_j,\,\, 1\leq j \leq n,$ that is: $g(t_j;\psi) = a$. In practice, $f$ is not itself a function of $a$, so we can write $\psi = (\phi,a)$ and therefore: $y_{j} = f(t_j ; \phi) + a \, \teps_j.$ Thus, (2) simplifies to:

\( (\hat{\phi},\hat{a}) \ \ = \ \ \argmin{(\phi,a)} \left\{ n \log(a^2) + \sum_{j=1}^n \left(\displaystyle{ \frac{y_j - f(t_j ; \phi)}{a} }\right)^2 \right\} . \)

The solution is then:

\(\begin{eqnarray} \hat{\phi} &=& \argmin{\phi} \sum_{j=1}^n \left( y_j - f(t_j ; \phi)\right)^2 \\ \hat{a}^2&=& \frac{1}{n}\sum_{j=1}^n \left( y_j - f(t_j ; \hat{\phi})\right)^2 , \end{eqnarray} \)

where $\hat{a}^2$ is found by setting the partial derivative of $-2LL$ to zero.

Whether this has an analytical solution or not depends on the form of $f$. For example, if $f(t_j;\phi)$ is just a linear function of the components of the vector $\phi$, we can represent it as a matrix $F$ whose $j$th row gives the coefficients at time $t_j$. Therefore, we have the matrix equation $y = F \phi + a \teps$.

The solution for $\hat{\phi}$ is thus the least-squares one, and for $\hat{a}^2$ it is the same as before:

\(\begin{eqnarray} \hat{\phi} &=& (F^\prime F)^{-1} F^\prime y \\ \hat{a}^2&=& \frac{1}{n}\sum_{j=1}^n \left( y_j - F_j \hat{\phi}\right)^2 . \\ \end{eqnarray}\)
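
As a quick numerical check of these closed-form formulas, here is a sketch (in Python rather than the R used below; the tiny design matrix and noiseless data are made up for illustration) that solves the normal equations $(F^\prime F)\hat{\phi} = F^\prime y$ by hand and recovers $\hat{\phi}$ and $\hat{a}^2$:

```python
# Verify phi_hat = (F'F)^{-1} F'y and a_hat^2 = (1/n) * sum of squared
# residuals on a made-up linear model with noiseless data.
n, d = 3, 2
F = [[1.0, 0.0],          # row j holds the coefficients of phi at time t_j
     [1.0, 1.0],
     [1.0, 2.0]]
y = [1.0, 3.0, 5.0]       # exactly F @ (1, 2): no residual error

# Solve the 2x2 normal equations (F'F) phi = F'y by Cramer's rule.
FtF = [[sum(F[j][r] * F[j][c] for j in range(n)) for c in range(d)] for r in range(d)]
Fty = [sum(F[j][r] * y[j] for j in range(n)) for r in range(d)]
det = FtF[0][0] * FtF[1][1] - FtF[0][1] * FtF[1][0]
phi_hat = [(FtF[1][1] * Fty[0] - FtF[0][1] * Fty[1]) / det,
           (FtF[0][0] * Fty[1] - FtF[1][0] * Fty[0]) / det]

residuals = [y[j] - (F[j][0] * phi_hat[0] + F[j][1] * phi_hat[1]) for j in range(n)]
a2_hat = sum(e * e for e in residuals) / n
```

With noiseless data the fit is exact, so $\hat{\phi} = (1, 2)$ and $\hat{a}^2 = 0$; with noisy data $\hat{a}^2$ would equal the mean squared residual, as in the formula above.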



Computing the Fisher information matrix

The Fisher information is a way of measuring the amount of information that an observable random variable carries about an unknown parameter upon which its probability distribution depends.

Let $\psis $ be the true unknown value of $\psi$, and let $\hatpsi$ be the maximum likelihood estimate of $\psi$. If the observed likelihood function is sufficiently smooth, asymptotic theory for maximum-likelihood estimation holds and

\( I_n(\psis)^{\frac{1}{2} }(\hatpsi-\psis) \limite{n\to \infty}{} {\mathcal N}(0,\id) , \)
(3)

where $I_n(\psis)$ is (minus) the Hessian (i.e., the matrix of the second derivatives) of the log-likelihood:

\(I_n(\psis)=- \displaystyle{ \frac{\partial^2}{\partial \psi \partial \psi^\prime} } LL(\psis;y_1,y_2,\ldots,y_n) \)

is the observed Fisher information matrix. Here, "observed" means that it is a function of observed variables $y_1,y_2,\ldots,y_n$.

Thus, an estimate of the covariance of $\hatpsi$ is the inverse of the observed Fisher information matrix as expressed by the formula:

\(C(\hatpsi) = I_n(\hatpsi)^{-1} . \)



Deriving confidence intervals for parameters

Let $\psi_k$ be the $k$th of $d$ components of $\psi$. Imagine that we have estimated $\psi_k$ with $\hatpsi_k$, the $k$th component of the MLE $\hatpsi$, that is, a random variable that converges to $\psi_k^{\star}$ when $n \to \infty$ under very general conditions.

An estimator of its variance is the $k$th element of the diagonal of the covariance matrix $C(\hatpsi)$:

\(\widehat{\rm Var}(\hatpsi_k) = C_{kk}(\hatpsi) .\)

We can thus derive an estimator of its standard error:

\(\widehat{\rm s.e.}(\hatpsi_k) = \sqrt{C_{kk}(\hatpsi)} ,\)

and a confidence interval of level $1-\alpha$ for $\psi_k^\star$:

\({\rm CI}(\psi_k^\star) = \left[\hatpsi_k + \widehat{\rm s.e.}(\hatpsi_k)\,q\left(\frac{\alpha}{2}\right), \ \hatpsi_k + \widehat{\rm s.e.}(\hatpsi_k)\,q\left(1-\frac{\alpha}{2}\right)\right] , \)

where $q(w)$ is the quantile of order $w$ of a ${\cal N}(0,1)$ distribution.
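
A minimal sketch of this Wald-type interval (in Python for illustration; the point estimate and standard error are made-up numbers). Since $q(\alpha/2) = -q(1-\alpha/2)$ for the standard normal, the interval is symmetric about $\hatpsi_k$:

```python
from statistics import NormalDist

# Made-up numbers for illustration: a point estimate and its standard error.
psi_hat, se = 2.89, 0.30
alpha = 0.10

# q(1 - alpha/2); because q(alpha/2) = -q(1 - alpha/2), the interval below
# equals [psi_hat + se*q(alpha/2), psi_hat + se*q(1 - alpha/2)].
q = NormalDist().inv_cdf(1 - alpha / 2)
ci = (psi_hat - q * se, psi_hat + q * se)
```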


Remarks

Approximating the distribution of $(\hatpsi_k - \psi_k^\star)/\widehat{\rm s.e.}(\hatpsi_k)$ by a normal distribution is a "good" approximation only when the number of observations $n$ is large. A better approximation should be used for small $n$. In the model $y_j = f(t_j ; \phi) + a\teps_j$, the distribution of $n\hat{a}^2/a^2$ can be approximated by a chi-squared distribution with $(n-d_\phi)$ degrees of freedom, where $d_\phi$ is the dimension of $\phi$. The quantiles of the normal distribution can then be replaced by those of a Student's $t$-distribution with $(n-d_\phi)$ degrees of freedom.



Deriving confidence intervals for predictions

The structural model $f$ can be predicted for any $t$ using the estimated value $f(t ; \hatphi)$. For that $t$, we can then derive a confidence interval for $f(t ; \phi^\star)$ using the estimated variance of $\hatphi$. Indeed, as a first approximation we have:


\( f(t ; \hatphi) \simeq f(t ; \phis) + \nabla f (t,\phis) (\hatphi - \phis) ,\)

where $\nabla f(t,\phis)$ is the gradient of $f$ at $\phis$, i.e., the vector of the first-order partial derivatives of $f$ with respect to the components of $\phi$, evaluated at $\phis$. Of course, we do not actually know $\phis$, but we can estimate $\nabla f(t,\phis)$ with $\nabla f(t,\hatphi)$. The variance of $f(t ; \hatphi)$ can then be estimated by

\( \widehat{\rm Var}\left(f(t ; \hatphi)\right) \simeq \nabla f (t,\hatphi)\widehat{\rm Var}(\hatphi) \left(\nabla f (t,\hatphi) \right)^\prime . \)

We can then derive an estimate of the standard error of $f (t,\hatphi)$ for any $t$:

\(\widehat{\rm s.e.}(f(t ; \hatphi)) = \sqrt{\widehat{\rm Var}\left(f(t ; \hatphi)\right)} , \)

and a confidence interval of level $1-\alpha$ for $f(t ; \phi^\star)$:

\({\rm CI}(f(t ; \phi^\star)) = \left[f(t ; \hatphi) + \widehat{\rm s.e.}(f(t ; \hatphi))\,q\left(\frac{\alpha}{2}\right), \ f(t ; \hatphi) + \widehat{\rm s.e.}(f(t ; \hatphi))\,q\left(1-\frac{\alpha}{2}\right)\right].\)
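
A minimal sketch of this delta-method computation (in Python for illustration; the structural model $f(t;\phi)=\phi_1 e^{-\phi_2 t}$, the estimates and the covariance matrix are all made up):

```python
import math

# Delta method: variance of f(t; phi_hat) from the gradient of f and the
# covariance of phi_hat. Made-up model f(t; phi) = phi1 * exp(-phi2 * t),
# made-up estimates and 2x2 covariance matrix C.
phi_hat = (4.0, 0.5)
C = [[0.04, 0.0],
     [0.0, 0.0025]]
t = 2.0

f_hat = phi_hat[0] * math.exp(-phi_hat[1] * t)

# Gradient of f with respect to (phi1, phi2), evaluated at phi_hat.
grad = (math.exp(-phi_hat[1] * t),
        -phi_hat[0] * t * math.exp(-phi_hat[1] * t))

# var(f) ~ grad * C * grad'
var_f = sum(grad[r] * C[r][c] * grad[c] for r in range(2) for c in range(2))
se_f = math.sqrt(var_f)
```

The same computation with a finite-difference gradient appears later in the R function `nlpredci`.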



Estimating confidence intervals using Monte Carlo simulation

The use of Monte Carlo methods to estimate a distribution does not require any approximation of the model.

We proceed in the following way. Suppose we have found an MLE $\hatpsi$ of $\psi$. We then simulate a data vector $y^{(1)}$ by first randomly generating the vector $\teps^{(1)}$ and then calculating, for $1 \leq j \leq n$,

\( y^{(1)}_j = f(t_j ;\hatpsi) + g(t_j ;\hatpsi)\teps^{(1)}_j . \)

In a sense, this gives us an example of "new" data from the "same" model. We can then compute a new MLE $\hat{\psi}^{(1)}$ of $\psi$ using $y^{(1)}$.

Repeating this process $M$ times gives $M$ estimates of $\psi$ from which we can obtain an empirical estimation of the distribution of $\hatpsi$, or any quantile we like.

Any confidence interval for $\psi_k^\star$ (resp. $f(t ; \psi^\star)$) can then be approximated by a prediction interval for $\hatpsi_k$ (resp. $f(t ; \hatpsi)$). For instance, a two-sided confidence interval of level $1-\alpha$ for $\psi_k^\star$ can be estimated by the prediction interval

\( [\hat{\psi}_{k,([\frac{\alpha}{2} M])} \ , \ \hat{\psi}_{k,([ (1-\frac{\alpha}{2})M])} ], \)

where $[\cdot]$ denotes the integer part and $(\hatpsi_{k,(m)},\ 1 \leq m \leq M)$ the order statistic, i.e., the estimates $(\hatpsi_k^{(m)},\ 1 \leq m \leq M)$ reordered so that $\hatpsi_{k,(1)} \leq \hatpsi_{k,(2)} \leq \ldots \leq \hatpsi_{k,(M)}$.
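
A sketch of this percentile computation (in Python for illustration; a deterministic made-up list stands in for the $M$ simulated estimates so the bounds can be read off):

```python
# Made-up, already-"simulated" estimates stand in for the M values
# hatpsi_k^(m) so the percentile bounds are deterministic.
M = 100
alpha = 0.10
psi_k_hats = [m / 10 for m in range(1, M + 1)]   # 0.1, 0.2, ..., 10.0

ordered = sorted(psi_k_hats)                     # the order statistics
# round() guards against floating-point error in the products.
m_lo = round(alpha / 2 * M)                      # [alpha/2 * M] = 5
m_hi = round((1 - alpha / 2) * M)                # [(1-alpha/2) * M] = 95
lo, hi = ordered[m_lo - 1], ordered[m_hi - 1]    # 5th and 95th order statistics
```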




A PK example

In the real world, it is often not enough to look at the data, choose one possible model and estimate the parameters. The chosen structural model may or may not be "good" at representing the data. It may be good while the chosen residual error model is bad, making the overall model poor, and so on. That is why in practice we may want to try out several structural and residual error models. After performing parameter estimation for each model, various assessment tasks can then be performed in order to conclude which model is best.



The data

This modeling process is illustrated in detail in the following PK example. Let us consider a dose D=50mg of a drug administered orally to a patient at time $t=0$. The concentration of the drug in the bloodstream is then measured at times $(t_j) = (0.5, 1,\,1.5,\,2,\,3,\,4,\,8,\,10,\,12,\,16,\,20,\,24).$ Here is the file individualFitting_data.txt with the data:


Time Concentration
0.5 0.94
1.0 1.30
1.5 1.64
2.0 3.38
3.0 3.72
4.0 3.29
8.0 1.31
10.0 0.80
12.0 0.39
16.0 0.31
20.0 0.10
24.0 0.09


We are going to perform the analyses for this example with the free statistical software R. First, we import the data and plot it to have a look:

NewIndividual1.png

Rstudio.png
R

pk1=read.table("individualFitting_data.txt", header=T)
t=pk1$Time
y=pk1$Concentration
n=length(y)   # number of observations, used later for the BIC
plot(t, y, xlab="time (hour)",
     ylab="concentration (mg/l)", col="blue")



Fitting two PK models

We are going to consider two possible structural models that may describe the observed time-course of the concentration:


    • A one compartment model with first-order absorption and linear elimination:

    \(\begin{eqnarray} \phi_1 &=& (k_a, V, k_e) \\ f_1(t ; \phi_1) &=& \frac{D\, k_a}{V(k_a-k_e)} \left( e^{-k_e \, t} - e^{-k_a \, t} \right). \end{eqnarray}\)


    • A one compartment model with zero-order absorption and linear elimination:

    \(\begin{eqnarray} \phi_2 &=& (T_{k0}, V, k_e) \\ f_2(t ; \phi_2) &=& \left\{ \begin{array}{ll} \displaystyle{ \frac{D}{V \,T_{k0} \, k_e} }\left( 1- e^{-k_e \, t} \right) & {\rm if }\ t\leq T_{k0} \\ \displaystyle{ \frac{D}{V \,T_{k0} \, k_e} } \left( 1- e^{-k_e \, T_{k0} } \right)e^{-k_e \, (t- T_{k0})} & {\rm otherwise} . \end{array} \right. \end{eqnarray}\)


We define each of these functions in R:




predc1=function(t,x){
  f=50*x[1]/x[2]/(x[1]-x[3])*(exp(-x[3]*t)-exp(-x[1]*t))
return(f)}

predc2=function(t,x){
  f=50/x[1]/x[2]/x[3]*(1-exp(-x[3]*t))
  f[t>x[1]]=50/x[1]/x[2]/x[3]*(1-exp(-x[3]*x[1]))*exp(-x[3]*(t[t>x[1]]-x[1]))
return(f)} 


We then define two models ${\cal M}_1$ and ${\cal M}_2$ that assume (for now) constant residual error models:

\(\begin{eqnarray} {\cal M}_1 : \quad y_j & = & f_1(t_j ; \phi_1) + a_1\teps_j \\ {\cal M}_2 : \quad y_j & = & f_2(t_j ; \phi_2) + a_2\teps_j . \end{eqnarray}\)

We can fit these two models to our data by computing the MLE $\hatpsi_1=(\hatphi_1,\hat{a}_1)$ and $\hatpsi_2=(\hatphi_2,\hat{a}_2)$ of $\psi$ under each model:


fmin1=function(x,y,t){
  f=predc1(t,x)
  g=x[4]
  e=sum( ((y-f)/g)^2 + log(g^2))
return(e)}

fmin2=function(x,y,t){
  f=predc2(t,x)
  g=x[4]
  e=sum( ((y-f)/g)^2 + log(g^2))
return(e)}

#--------- MLE --------------------------------

pk.nlm1=nlm(fmin1, c(0.3,6,0.2,1), y, t, hessian=TRUE)
psi1=pk.nlm1$estimate

pk.nlm2=nlm(fmin2, c(3,10,0.2,4), y, t, hessian=TRUE)
psi2=pk.nlm2$estimate
Here are the parameter estimation results:


> cat(" psi1 =",psi1,"\n\n")
 psi1 = 0.3240916 6.001204 0.3239337 0.4366948

> cat(" psi2 =",psi2,"\n\n")
 psi2 = 3.203111 8.999746 0.229977 0.2555242



Assessing and selecting the PK model

The estimated parameters $\hatphi_1$ and $\hatphi_2$ can then be used for computing the predicted concentrations $\hat{f}_1(t)$ and $\hat{f}_2(t)$ under both models at any time $t$. These curves can then be plotted over the original data and compared:

New Individual2.png


tc=seq(from=0,to=25,by=0.1)
phi1=psi1[c(1,2,3)]
fc1=predc1(tc,phi1)
phi2=psi2[c(1,2,3)]
fc2=predc2(tc,phi2)

plot(t,y,ylim=c(0,4.1),xlab="time (hour)", 
          ylab="concentration (mg/l)",col = "blue")
lines(tc,fc1, type = "l", col = "green", lwd=2)
lines(tc,fc2, type = "l", col = "red", lwd=2)
abline(a=0,b=0,lty=2)
legend(13,4,c("observations","first order absorption", 
          "zero order absorption"),
lty=c(-1,1,1), pch=c(1,-1,-1), lwd=2, col=c("blue","green","red"))

We clearly see that a much better fit is obtained with model ${\cal M}_2$, i.e., the one assuming a zero-order absorption process.

Another useful goodness-of-fit plot is obtained by displaying the observations $(y_j)$ versus the predictions $\hat{y}_j=f(t_j ; \hatpsi)$ given by the models:

Individual3.png


f1=predc1(t,phi1)
f2=predc2(t,phi2)

par(mfrow= c(1,2))
plot(f1,y,xlim=c(0,4),ylim=c(0,4),main="model 1")
abline(a=0,b=1,lty=1)
plot(f2,y,xlim=c(0,4),ylim=c(0,4),main="model 2")
abline(a=0,b=1,lty=1)



Model selection

Again, ${\cal M}_2$ would seem to have a slight edge. This can be tested more analytically using the Bayesian Information Criterion (BIC):


deviance1=pk.nlm1$minimum + n*log(2*pi)
bic1=deviance1+log(n)*length(psi1)
deviance2=pk.nlm2$minimum + n*log(2*pi)
bic2=deviance2+log(n)*length(psi2)
> cat(" bic1 =",bic1,"\n\n")
 bic1 = 24.10972

> cat(" bic2 =",bic2,"\n\n")
 bic2 = 11.24769

A smaller BIC is better. Therefore, this also suggests that model ${\cal M}_2$ should be selected.



Fitting different error models

For the moment, we have only considered constant error models. However, the "observations vs predictions" figure hints that the amplitude of the residual errors may increase with the size of the predicted value. Let us therefore take a closer look at four different residual error models, each of which we will associate with the "best" structural model $f_2$:

${\cal M}_2$ Constant error model: $y_j=f_2(t_j;\phi_2)+a_2\teps_j$
${\cal M}_3$ Proportional error model: $y_j=f_2(t_j;\phi_3)+b_3f_2(t_j;\phi_3)\teps_j$
${\cal M}_4$ Combined error model: $y_j=f_2(t_j;\phi_4)+(a_4+b_4f_2(t_j;\phi_4))\teps_j$
${\cal M}_5$ Exponential error model: $\log(y_j)=\log(f_2(t_j;\phi_5)) + a_5\teps_j$.

The three new ones need to be entered into R:




fmin3=function(x,y,t){
  f=predc2(t,x)
  g=x[4]*f
  e=sum( ((y-f)/g)^2 + log(g^2))
return(e)}

fmin4=function(x,y,t){
  f=predc2(t,x)
  g=abs(x[4])+abs(x[5])*f
  e=sum( ((y-f)/g)^2 + log(g^2))
return(e)}

fmin5=function(x,y,t){
  f=predc2(t,x)
  g=x[4]
  e=sum( ((log(y)-log(f))/g)^2 + log(g^2))
return(e)}


We can now compute the MLE $\hatpsi_3=(\hatphi_3,\hat{b}_3)$, $\hatpsi_4=(\hatphi_4,\hat{a}_4,\hat{b}_4)$ and $\hatpsi_5=(\hatphi_5,\hat{a}_5)$ of $\psi$ under models ${\cal M}_3$, ${\cal M}_4$ and ${\cal M}_5$:


#----------------  MLE  -------------------

pk.nlm3=nlm(fmin3, c(phi2,0.1), y, t,
       hessian=TRUE)
psi3=pk.nlm3$estimate

pk.nlm4=nlm(fmin4, c(phi2,1,0.1), y, t,
       hessian=TRUE)
psi4=pk.nlm4$estimate
psi4[c(4,5)]=abs(psi4[c(4,5)])

pk.nlm5=nlm(fmin5, c(phi2,0.1), y, t,
       hessian=TRUE)
psi5=pk.nlm5$estimate
> cat(" psi3 =",psi3,"\n\n")
 psi3 = 2.642409 11.44113 0.1838779 0.2189221

> cat(" psi4 =",psi4,"\n\n")
 psi4 = 2.890066 10.16836 0.2068221 0.02741416 0.1456332

> cat(" psi5 =",psi5,"\n\n")
 psi5 = 2.710984 11.2744 0.188901 0.2310001



Selecting the error model

As before, these curves can be plotted over the original data and compared:


New Individual4.png


phi3=psi3[c(1,2,3)]
fc3=predc2(tc,phi3)
phi4=psi4[c(1,2,3)]
fc4=predc2(tc,phi4)
phi5=psi5[c(1,2,3)]
fc5=predc2(tc,phi5)

par(mfrow= c(1,1))
plot(t,y,ylim=c(0,4.1),xlab="time (hour)",ylab="concentration (mg/l)",
        col = "blue")
lines(tc,fc2, type = "l", col = "red", lwd=2)
lines(tc,fc3, type = "l", col = "green", lwd=2)
lines(tc,fc4, type = "l", col = "cyan", lwd=2)
lines(tc,fc5, type = "l", col = "magenta", lwd=2)
abline(a=0,b=0,lty=2)
legend(13,4,c("observations","constant error model",
        "proportional error model","combined error model","exponential error model"),
 lty=c(-1,1,1,1,1), pch=c(1,-1,-1,-1,-1), lwd=2, 
        col=c("blue","red","green","cyan","magenta"))


As you can see, the three predicted concentrations obtained with models ${\cal M}_3$, ${\cal M}_4$ and ${\cal M}_5$ are quite similar. We now calculate the BIC for each:



deviance3=pk.nlm3$minimum + n*log(2*pi)
bic3=deviance3 + log(n)*length(psi3)
deviance4=pk.nlm4$minimum + n*log(2*pi)
bic4=deviance4 + log(n)*length(psi4)
deviance5=pk.nlm5$minimum + 2*sum(log(y)) + n*log(2*pi)
bic5=deviance5 + log(n)*length(psi5)
> cat(" bic3 =",bic3,"\n\n")
 bic3 = 3.443607

> cat(" bic4 =",bic4,"\n\n")
 bic4 = 3.475841

> cat(" bic5 =",bic5,"\n\n")
 bic5 = 4.108521

All of these BIC are lower than the constant residual error one. BIC selects the residual error model ${\cal M}_3$ with a proportional component.

There is not a large difference between these three error models, though the proportional and combined error models give the smallest and essentially identical BIC. We decide to use the combined error model ${\cal M}_4$ in the following (the same types of analysis could be done with the proportional error model).

A 90% confidence interval for $\psi_4$ can be derived from the Hessian (i.e., the square matrix of second-order partial derivatives) of the objective function (i.e., $-2 \times LL$):



ialpha=0.9
df=n-length(phi4)
I4=pk.nlm4$hessian/2
H4=solve(I4)
s4=sqrt(diag(H4)*n/df)
delta4=s4*qt(0.5+ialpha/2, df)
ci4=matrix(c(psi4-delta4,psi4+delta4),ncol=2)
> ci4
            [,1]        [,2]
[1,]  2.22576690  3.55436561
[2,]  7.93442421 12.40228967
[3,]  0.16628224  0.24736196
[4,] -0.02444571  0.07927403
[5,]  0.04119983  0.25006660


We can also calculate a 90% confidence interval for $f_4(t)$ using the Central Limit Theorem (see (3)):




nlpredci=function(phi,f,H){
  dphi=length(phi)
  nf=length(f)
  H=H*n/(n-dphi)
  S=H[seq(1,dphi),seq(1,dphi)]
  G=matrix(nrow=nf,ncol=dphi)
  for (k in seq(1,dphi)) {
    dk=phi[k]*(1e-5)
    phid=phi
    phid[k]=phi[k] + dk
    fd=predc2(tc,phid)
    G[,k]=(f-fd)/dk
  }
  M=rowSums((G%*%S)*G)
  deltaf=sqrt(M)*qt(0.5+ialpha/2,df)
return(deltaf)}

deltafc4=nlpredci(phi4,fc4,H4)


This can then be plotted:


NewIndividual6.png


plot(t,y,ylim=c(0,4.5), xlab="time (hour)", 
       ylab="concentration (mg/l)", col="blue")
lines(tc,fc4, type = "l",col = "red",lwd=2)
lines(tc, fc4-deltafc4, type = "l",
       col = "red" ,lwd=1, lty=3)
lines(tc,fc4+deltafc4,type = "l",
       col = "red", lwd=1, lty=3)
abline(a=0,b=0,lty=2)
legend(10.5,4.5,c("observed concentrations",
       "predicted concentration", 
       "CI for predicted concentration"),
       lty=c(-1,1,3),pch=c(1,-1,-1),lwd=c(2,2,1),
       col=c("blue","red","red"))

Alternatively, prediction intervals for $\hatpsi_4$, $\hat{f}_4(t;\hatpsi_4)$ and new observations for any time $t$ can be estimated by Monte Carlo simulation:




f=predc2(t,phi4)
a4=psi4[4]
b4=psi4[5]
g=a4+b4*f
dpsi=length(psi4)
nc=length(tc)
N=1000
qalpha=c(0.5 - ialpha/2, 0.5 + ialpha/2)
PSI=matrix(nrow=N,ncol=dpsi)
FC=matrix(nrow=N,ncol=nc)
Y=matrix(nrow=N,ncol=nc)
for (k in seq(1,N)) {
   eps=rnorm(n)
   ys=f+g*eps
   pk.nlm=nlm(fmin4, psi4, ys, t)
   psie=pk.nlm$estimate
   psie[c(4,5)]=abs(psie[c(4,5)])
   PSI[k,]=psie
   fce=predc2(tc,psie[c(1,2,3)])
   FC[k,]=fce
   gce=a4+b4*fce
   Y[k,]=fce + gce*rnorm(1)
}

ci4s=matrix(nrow=dpsi,ncol=2)
for (k in seq(1,dpsi)){
   ci4s[k,]=quantile(PSI[,k],qalpha,names=FALSE)
}
m4s=colMeans(PSI)
sd4s=apply(PSI,2,sd)

cifc4s=matrix(nrow=nc,ncol=2)
for (k in seq(1,nc)){
   cifc4s[k,]=quantile(FC[,k],qalpha,names=FALSE)
}

ciy4s=matrix(nrow=nc,ncol=2)
for (k in seq(1,nc)){
   ciy4s[k,]=quantile(Y[,k],qalpha,names=FALSE)
}

par(mfrow= c(1,1))
plot(t,y,ylim=c(0,4.5),xlab="time (hour)",
       ylab="concentration (mg/l)",col = "blue")
lines(tc,fc4, type = "l", col = "red", lwd=2)
lines(tc,cifc4s[,1], type = "l", col = "red", lwd=1, lty=3)
lines(tc,cifc4s[,2], type = "l", col = "red", lwd=1, lty=3)
lines(tc,ciy4s[,1], type = "l", col = "green", lwd=1, lty=3)
lines(tc,ciy4s[,2], type = "l", col = "green", lwd=1, lty=3)
abline(a=0,b=0,lty=2)
legend(10.5,4.5,c("observed concentrations", "predicted concentration", 
       "CI for predicted concentration", "CI for observed concentrations"), 
       lty=c(-1,1,3,3), pch=c(1,-1,-1,-1), lwd=c(2,2,1,1), col=c("blue","red","red","green"))
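The loop above relies on <code>predc2</code> and <code>fmin4</code> defined earlier. The same parametric bootstrap idea can be sketched in a self-contained way with a simple mono-exponential model and <code>nls</code> (all names and numerical values here are illustrative, not taken from the data):

```r
# Parametric bootstrap sketch: simulate data from the fitted model,
# re-estimate, and take empirical quantiles of the estimates.
# Illustrative model f(t) = a*exp(-b*t); values are not from the data.
set.seed(1)
t    <- 0:15
y    <- 4*exp(-0.3*t) + 0.2*rnorm(length(t))
fit  <- nls(y ~ a*exp(-b*t), start=list(a=3, b=0.2))
fhat <- fitted(fit)
shat <- summary(fit)$sigma                # residual standard deviation
N    <- 200
EST  <- matrix(nrow=N, ncol=2)
for (m in 1:N) {
   ys <- fhat + shat*rnorm(length(t))     # simulate from the fitted model
   fm <- nls(ys ~ a*exp(-b*t), start=as.list(coef(fit)))
   EST[m,] <- coef(fm)                    # re-estimate on the simulated data
}
# 90% percentile intervals, one column per parameter (a then b)
ci <- apply(EST, 2, quantile, probs=c(0.05, 0.95), names=FALSE)
```

The column-wise quantiles of the <code>PSI</code>, <code>FC</code> and <code>Y</code> matrices above play exactly the same role as the quantiles of <code>EST</code> here.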


::[[File:NewIndividual7.png|link=]]

> ci4s
             [,1]        [,2]
[1,] 2.350653e+00  3.53526320
[2,] 8.350764e+00 12.04910579
[3,] 1.818431e-01  0.24156832
[4,] 5.445459e-09  0.08819339
[5,] 1.563625e-02  0.19638889


The R code and input data used in this section can be downloaded here: https://wiki.inria.fr/wikis/popix/images/a/a1/R_IndividualFitting.rar.

=='''Bibliography'''==

Buonaccorsi, J.P. - ''Measurement Error: Models, Methods, and Applications'', Taylor & Francis, 2010. http://books.google.fr/books?id=QVtVmaCqLHMC

Carroll, R.J., Ruppert, D., Stefanski, L.A., Crainiceanu, C.M. - ''Measurement Error in Nonlinear Models: A Modern Perspective, Second Edition'', Taylor & Francis, 2010. http://books.google.fr/books?id=9kBx5CPZCqkC

Fitzmaurice, G.M., Laird, N.M., Ware, J.H. - ''Applied Longitudinal Analysis'', Wiley, 2004. http://books.google.fr/books?id=gCoTIFejMgYC

Gallant, A.R. - ''Nonlinear Statistical Models'', Wiley, 2009. http://books.google.fr/books?id=imv-NMozseEC

Huet, S., Bouvier, A., Poursat, M.A., Jolivet, E. - ''Statistical Tools for Nonlinear Regression: A Practical Guide with S-PLUS and R Examples'', Springer, 2003.

Ritz, C., Streibig, J.C. - ''Nonlinear Regression with R'', Springer New York, 2008.

Ross, G.J.S. - ''Nonlinear Estimation'', Springer-Verlag, 1990. http://books.google.fr/books?id=7LkyzdLMghIC

Seber, G.A.F., Wild, C.J. - ''Nonlinear Regression'', Wiley, 2003. http://books.google.fr/books?id=YBYlCpBNo_cC

Serroyen, J., Molenberghs, G., Verbeke, G., Davidian, M. - Nonlinear models for longitudinal data. ''The American Statistician'' 63(4):378-388, 2009.

Wolberg, J.R. - ''Data Analysis Using the Method of Least Squares: Extracting the Most Information from Experiments'', Springer, 2006.

