Model evaluation
Introduction

Defining the expression "model evaluation" is a harder task than it may appear at first glance. Intuitively, it would seem to suggest evaluating the performance of a model based on the observed data, the same data that was used to build the model. Fair enough, but what then do we mean by the "performance" of a model?

Do we mean the ability of the model to characterize and explain the phenomena being studied, in which case the goal is to use the model to understand the phenomena? Or do we mean the model's predictive performance when the model is used to predict the phenomena's behavior, either in the future or under new experimental conditions?

What it comes down to is this: do we want to use the model to understand or to predict? This is the key question to ask before even starting to think about which tools to use and which tasks to execute.

Here, we will be for the most part focused on the ability of a model to explain the phenomena and data. Therefore, the first goal will be to check whether the data are in agreement with the model, and vice versa. In this process, model diagnostics can be used to eliminate model candidates that do not seem capable of reproducing the observed data. As is usual in statistics, the fact that a model has not been rejected does not mean that it is the "true" one. All we can say is that the experimental data do not allow us to reject it. It is merely one of perhaps many models that cannot be rejected. Indeed, we can usually find several models that get past this first diagnostic step and are therefore not rejected.

What to do, then, when several possible models are retained? Well, we can try to select the "best" one (or best ones if no leader distinguishes itself from the rest). This means developing a model selection process which allows us to compare the models to each other. Compare models? But with what criteria?

In a purely explanatory context, Occam's razor is a useful parsimony principle which states that among competing hypotheses, the one with the fewest assumptions should be selected. In a modeling context, this means that among valid competing models, the simplest one should be selected.

Model diagnostic tools are for the most part graphical or visual: we "see" when something is not right between a chosen model and the data it is hypothesized to describe. Model selection tools, on the other hand, are analytical: we calculate the value of some criterion that allows us to compare the models with each other. However, it is absolutely critical to keep in mind the limits of these tools. These are not decision-making tools. It is not a $p$-value or some information criterion that can automatically decide which model to choose. It is always the modeler who must have the last word! The modeler uses the model diagnostic and selection tools to guide the decision, but in the end it is the modeler who must make the final call. There is nothing more dangerous than rules applied without thinking and arbitrary cut-off values used blindly without reflection.

Here, we will not look at model selection techniques based on the various models' predictive performances. One such approach consists of splitting the data into three sets: a learning set used for fitting the model, a validation set for choosing between models, and a test set to assess the quality of the predictions made by the chosen model.

Very few model diagnostic and selection tools exist for mixed-effects models. One of the most complete is Xpose, an R-based model-building aid for population analysis using NONMEM that facilitates model diagnostics, candidate covariate identification and model comparison. Here we will use $\monolix$ and illustrate these techniques with the example used previously for model exploration and parameter estimation.


Model diagnostics


Model diagnostics and statistical tests

Suppose first that a model has been entirely defined by the modeler, and that its parameters have either been chosen or estimated. What we call the "model" is therefore a joint probability distribution along with some parameter values.


Let ${\cal M}_0$ denote the model we wish to evaluate. We place ourselves in the framework of statistical testing and would like to perform the following hypothesis test:

\( H_0: \quad {\cal M}={\cal M}_0 \quad vs \quad H_1: \quad {\cal M}\neq{\cal M}_0. \)

"Passing" the test does not mean that we accept $H_0$ but rather that we do not reject it. We will use the same point of view for model diagnostics whereby we eliminate model candidates that do not seem capable of reproducing the observed data, i.e., models for which we conclude that ${\cal M}\neq{\cal M}_0$.


Remark

Running a statistical test only makes sense when there is genuine doubt about whether to retain a model. If it is clear from the beginning that, for example, the structural model is totally misspecified (e.g., we use a linear function of time even though a curvature is clearly visible in the data), any basic goodness-of-fit plot (individual fits, observations vs predictions, residuals, etc.) will detect this misspecification without any doubt and without the need to evaluate the probability of making a mistake.


To put such a statistical test into practice, we usually construct a test statistic $T(\by)$, i.e., a function of the observations whose distribution under the null hypothesis $H_0$ we are able to calculate.

For a given significance level $\alpha$, we then define a rejection region $R_\alpha$ such that:

\(\probs{H_0}{T(\by) \in R_\alpha} = \alpha . \)

Thus, $\alpha$ is the probability of incorrectly rejecting the null hypothesis $H_0$.

The difficulty in creating and using such tests comes from two main things:


    • We need to be capable of calculating the distribution of the test statistic $T(\by)$ under $H_0$ in order to carefully track the significance level, i.e., ensure that the probability of incorrectly rejecting $H_0$ is indeed $\alpha$.

    • Being able to control the type I error $\alpha$ is of no interest if the test has low power, i.e., if the probability of correctly rejecting $H_0$ is low.


In the present context, the first point is clearly a problem. Due to the complexity of the models we are interested in, it is impossible to analytically calculate the distribution of a function of the observations, even for something as simple as the empirical mean $\overline{\by}=\sum_{i,j}y_{ij}/\sum_i{n_i}$. Using limit theorems to approximate such distributions is also more or less hopeless. Perhaps the most powerful, general and precise solution we have available to us is Monte Carlo simulation:


    • Generate independent values $\by^{(1)}, \by^{(2)}, \ldots , \by^{(L)}$ under the model ${\cal M}_0$ using the same design and covariates as in the original data.

    • Calculate the $L$ statistics $T(\by^{(1)}), T(\by^{(2)}), \ldots , T(\by^{(L)})$.

    • Estimate the distribution of $T(\by)$ under ${\cal M}_0$ with the empirical distribution of the $T(\by^{(\ell)})$.


The estimation error essentially depends on the number $L$ of simulated data sets. We must therefore choose $L$ large enough so that this error is negligible.
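To make the procedure concrete, here is a minimal sketch of this Monte Carlo construction on a deliberately simple toy model standing in for ${\cal M}_0$; the model, its parameter values and the choice of statistic are all illustrative assumptions, not those of the example treated below.

<pre>
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for M0: y_ij = theta + eta_i + eps_ij, with
# eta_i ~ N(0, omega^2) (random effect) and eps_ij ~ N(0, sigma^2).
theta, omega, sigma = 10.0, 2.0, 1.0
N, n_i = 100, 8                              # subjects, observations each

def simulate_dataset(rng):
    eta = rng.normal(0.0, omega, size=(N, 1))
    return theta + eta + rng.normal(0.0, sigma, size=(N, n_i))

def T(y):
    return y.mean()                          # the empirical mean, as in the text

L = 1000                                     # number of simulated data sets
T_sim = np.array([T(simulate_dataset(rng)) for _ in range(L)])

# Empirical distribution of T(y) under M0, e.g., a 90% prediction interval:
lo, hi = np.quantile(T_sim, [0.05, 0.95])
y_obs = simulate_dataset(rng)                # stands in for the observed data
print(f"T(y_obs) = {T(y_obs):.2f}, 90% PI under M0: [{lo:.2f}, {hi:.2f}]")
</pre>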

This solves part of the problem. But it remains hard to define a rejection region $R_\alpha$ if the test statistic is multidimensional. We can of course calculate relevant prediction intervals for each component of $T(\by)$ individually, but this does not really help us to define a real multidimensional rejection region.

Consequently, diagnostic methods are essentially visual: we compare the observed statistic $T(\by)$ with its expected distribution under ${\cal M}_0$ by graphically displaying (for example) 90% or 95% prediction intervals for each component of $T(\by)$.

The second point is also delicate because we need to decide what ${\cal M}\neq{\cal M}_0$ means, i.e., what $H_0$ being false means. Does it mean that the structural model is misspecified? Or the distribution of the random effects, the residual error model, the covariate model? There are so many ways in which a model can be misspecified that we cannot realistically expect one unique statistic to be sufficiently powerful to detect all of them at once. We therefore prefer to construct several different test statistics, i.e., several graphical diagnostic tools, each good at dealing with one particular type of misspecification. It is then the combination of all these tools that makes up our test; we can reasonably hope that a misspecified model will not succeed in passing through this filter.



Diagnostic plots using individual parameters

Diagnostic plots constructed using only the observations are useful for looking at the distribution $\qy$ of the observations, but do not help with testing hypotheses made on the non-observed individual parameters (about the distribution, covariate model, etc.).

One possible solution is to estimate the individual parameters (using for example the conditional mode) and then use these estimates to create new diagnostic tools. This strategy is only useful when the individual parameters have been estimated well.

If instead the data does not contain enough information to estimate certain individual parameters well, the individual estimates are all shrunk towards the same (population) value; this is the mode (resp. mean) of the population distribution of the parameter if we use the conditional mode (resp. conditional mean). For a parameter $\psi_i$ which is a function of a random effect $\eta_i$, we can quantify this phenomenon by defining the so-called $\eta$-shrinkage as

\( \eta\text{-shrinkage} = 1 - \displaystyle{\frac{\var{\hat{\eta}_i} }{\var{\eta_i} } }, \)

where $\hat{\eta}_i$ is an estimate of $\eta_i$ (the conditional mode, conditional mean, etc.).
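As an illustration, the sketch below computes empirical $\eta$-shrinkage from a vector of random-effect estimates; the two synthetic estimators and all numerical values are invented for the demonstration.

<pre>
import numpy as np

def eta_shrinkage(eta_hat, omega2):
    """Empirical eta-shrinkage: 1 - var(eta_hat)/omega2, where eta_hat holds
    one estimated random effect per subject (conditional mode or mean) and
    omega2 is the population variance of that random effect."""
    return 1.0 - np.var(eta_hat, ddof=1) / omega2

rng = np.random.default_rng(1)
omega2 = 1.0
eta = rng.normal(0.0, np.sqrt(omega2), size=200)             # true random effects

eta_hat_rich = 0.95 * eta + rng.normal(0.0, 0.05, size=200)  # informative y_i
eta_hat_poor = 0.2 * eta                                     # uninformative y_i

print(eta_shrinkage(eta_hat_rich, omega2))   # close to 0: little shrinkage
print(eta_shrinkage(eta_hat_poor, omega2))   # close to 1: strong shrinkage
</pre>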

This shrinkage phenomenon is simple to understand because the conditional distribution $\qcetaiyi$ of $\eta_i$ is defined by the product $\pmacro(y_i|\eta_i)\pmacro(\eta_i)$. Saying that the observations $y_i$ provide little information about $\eta_i$ means that the conditional distribution of $y_i$ has a reduced importance in the construction of $\qcetaiyi$. The mode (resp. mean) of $\qcetaiyi$ will therefore be close to 0, which is both the mode and mean of $\qetai$. The result is a high level of shrinkage (close to 1) whenever $\var{\hat{\eta}_i}\ll\var{\eta_i}$.

Estimates of the $\psi_i$ are therefore biased because they do not correctly reflect the marginal distribution $\qpsii$ (in particular, their variance is much smaller). A particularly effective solution is to simulate the individual parameters $\psi_i$ from the conditional distribution $\qcpsiiyi$ rather than taking the mode. The resulting estimator is unbiased in the following sense:

\(\begin{eqnarray} \pmacro(\psi_i) &=& \displaystyle{ \int \pmacro(\psi_i | y_i )\pmacro( y_i ) d\, y_i }\\ &=& \esps{y_i}{\pmacro(\psi_i | y_i )} . \end{eqnarray}\)

This relationship is a fundamental one when considering inverse problems, incomplete data models, mixed-effects models, etc. So what does it imply exactly? Well, if we randomly draw a vector $y_i$ of observations for an individual in a population and then generate a vector $\psi_i$ using the conditional distribution $\qcpsiiyi$, then the distribution of $\psi_i$ is the population distribution $\qpsii$. In other words, even if each $\psi_i$ is simulated using its own conditional distribution, the fact of pooling them allows us to look at them as if they were a sample from $\qpsii$, i.e., the marginal distribution $\qpsii$ is a mixture of conditional distributions $\qcpsiiyi$.

The procedure is therefore as follows: we generate several values from each conditional distribution $\qcpsiiyi$ using the Metropolis-Hastings algorithm, and use them in addition to the observations in order to build various diagnostic plots.
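The following toy sketch illustrates this mixture property in a conjugate Gaussian case where the conditional distribution $\qcpsiiyi$ is available in closed form; in a real nonlinear mixed-effects model the conditional draws would come from the Metropolis-Hastings algorithm instead, and all numerical values here are illustrative.

<pre>
import numpy as np

rng = np.random.default_rng(2)
mu, omega2, sigma2, N, n = 10.0, 4.0, 9.0, 5000, 2

# psi_i ~ N(mu, omega2);  y_ij | psi_i ~ N(psi_i, sigma2), j = 1..n
psi = rng.normal(mu, np.sqrt(omega2), size=N)
y = psi[:, None] + rng.normal(0.0, np.sqrt(sigma2), size=(N, n))

# Closed-form Gaussian conditional p(psi_i | y_i): variance v, mean m_i
v = 1.0 / (n / sigma2 + 1.0 / omega2)
m = v * (n * y.mean(axis=1) / sigma2 + mu / omega2)
psi_cond = rng.normal(m, np.sqrt(v))     # one conditional draw per individual

# Pooled conditional draws behave like a sample from the marginal p(psi):
print(psi_cond.mean(), psi_cond.var())   # close to mu = 10 and omega2 = 4

# whereas the conditional means are shrunk (their variance is too small):
print(m.var())                           # clearly smaller than omega2
</pre>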



An example

We are going to use the same model used for model exploration in the Visualization section and parameter estimation in the Maximum likelihood estimation of the population parameters chapter.


The structural model that defines the concentration in the central compartment and the hazard function for the events (hemorrhaging) is

\(\begin{eqnarray} Cc(t) &=& \displaystyle{ \frac{D \, ka}{V(ka-Cl/V)} }\left(e^{-(Cl/V)\,t} - e^{-ka\,t} \right) \\ h(t) &=& h_0 \, \exp(\gamma\, Cc(t)) . \end{eqnarray}\)
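These two formulas translate directly into code; here is a minimal sketch, with illustrative parameter values (not the estimates of this example).

<pre>
import numpy as np

def concentration(t, D, ka, V, Cl):
    """One-compartment model, first-order absorption, elimination rate Cl/V."""
    ke = Cl / V
    return D * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def hazard(t, D, ka, V, Cl, h0, gamma):
    """Hazard proportional to the exponential of the predicted concentration."""
    return h0 * np.exp(gamma * concentration(t, D, ka, V, Cl))

t = np.linspace(0.0, 24.0, 5)
print(concentration(t, D=100.0, ka=1.0, V=10.0, Cl=2.0))
print(hazard(t, D=100.0, ka=1.0, V=10.0, Cl=2.0, h0=0.01, gamma=0.5))
</pre>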

The statistical model assumes that $ka_i$ and $V_i$ are log-normally distributed, $Cl_i$ normal, $h0_i$ probit-normal and $\gamma$ logit-normal. No covariates are used in this model. Lastly, we suppose a constant residual error model. Now we are going to review several different diagnostic plots and look at the conclusions that can be made using them.


1) Individual fits.

In the continuous data model $y_{ij}=f(t_{ij};\psi_i) + g(t_{ij};\psi_i)\teps_{ij}$, estimation of the population parameters $\psi_{\rm pop}$ and individual parameters $\psi_{i}$ allows us to compute for each individual:

    • $f(t ; \hat{\psi}_{\rm pop})$, the predicted profile given by the estimated population model

    • $f(t ; \hat{\psi}_{i})$, the predicted profile given by the estimated individual model.


The figure plotting for each individual the two curves for the predicted concentration shows evidence of inter-individual variability in the kinetics, and furthermore does not allow us to reject the proposed PK model since the fits seem acceptable.
Indfits1.png


2) Observations vs predictions.

The population and individual models also allow us to calculate for each individual predictions at the observation times $f(t_{ij} ; \hat{\psi}_{\rm pop})$ and $f(t_{ij} ; \hat{\psi}_{i})$.


The figure showing observations vs predictions reveals no obvious misspecification. In particular, it does not allow us to reject the constant residual error model.
Obspred1.png


3) Residuals.

Several types of residuals can be defined: population weighted residuals $({\rm PWRES}_{ij})$, individual weighted residuals $({\rm IWRES}_{ij})$, normalised prediction distribution errors $({\rm NPDE}_{ij})$, etc. The first two are defined by:

\(\begin{eqnarray} {\rm PWRES}_{ij} &=& \frac{y_{ij} - \hat{\mathbb{E} }(y_{ij})}{\hat{\rm std}(y_{ij})} \\ {\rm IWRES}_{ij} &=& \frac{y_{ij} - f(t_{ij} ; \hat{\psi}_{i})}{g(t_{ij} ; \hat{\psi}_{i})}, \end{eqnarray}\)

where $\hat{\mathbb{E}}(y_{ij})$ and $\hat{\rm std}(y_{ij})$ are the mean and standard deviation of $y_{ij}$ estimated by Monte Carlo. ${\rm NPDE}_{ij}$ is a nonparametric version of ${\rm PWRES}_{ij}$ based on a rank statistic. See npde for more details.
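As a sketch, these two residuals can be computed as follows, assuming the Monte Carlo replicates of the observations and the individual predictions are already available (all names and inputs are illustrative).

<pre>
import numpy as np

def pwres(y_obs, y_sim):
    """Population weighted residuals. y_obs: vector of n observations;
    y_sim: (L, n) array of Monte Carlo replicates of those observations
    under the model, used to estimate E(y_ij) and std(y_ij)."""
    return (y_obs - y_sim.mean(axis=0)) / y_sim.std(axis=0)

def iwres(y_obs, f_ind, g_ind):
    """Individual weighted residuals. f_ind and g_ind: structural prediction
    and residual-error standard deviation at the observation times, computed
    with the estimated individual parameters psi_i."""
    return (y_obs - f_ind) / g_ind

# Example with synthetic inputs:
rng = np.random.default_rng(4)
y_obs = rng.normal(10.0, 1.0, size=50)
y_sim = rng.normal(10.0, 1.0, size=(1000, 50))
print(pwres(y_obs, y_sim)[:5])
</pre>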

Statistics useful for summarizing the residuals are the 10%, 50% (median) and 90% quantiles. The procedure described earlier for estimating prediction intervals of these quantiles using Monte Carlo can again be used.


Both the individual residuals and the NPDEs suggest that the model is misspecified. Indeed, under ${\cal M}_0$ residuals are expected to behave as i.i.d. ${\cal N}(0,1)$ random variables, which is clearly not the case here. It is nevertheless difficult to identify the reasons for this misspecification using only these figures.
Residual1.png



These plots show, at each observation time, the 10%, 50% and 90% quantiles of the IWRES and NPDE.

The 90% prediction intervals are also displayed. These plots are more informative than the original residual plots.

We can now reasonably conclude that the behavior of the three quantiles is not the one expected under ${\cal M}_0$. In particular, a proportional component in the residual error model appears not to have been taken into account.
Residual2.png


4) The distributions of the individual parameters.

The hypotheses we have made about the distributions of the individual parameters can be tested by visually comparing the pdf of the pre-selected distribution of each parameter with the empirical distribution of that parameter. We are going to see that using the estimated individual parameters does not allow us to construct a pertinent diagnostic plot, and that we must rather use parameters simulated with the conditional distribution $\qcpsiiyi$ for each individual.


The plots show for each model parameter the pdf obtained from the estimated parameters and the empirical distribution, shown as a histogram, of the estimated individual parameters (here the estimated parameters are the modes of the conditional distributions).
Distparam0.png



Instead of histograms, $\monolix$ can also display the empirical distribution of a continuous variable using nonparametric density estimation. This is a better way to represent continuous distributions than a histogram.


It is also possible to display the $\eta$-shrinkage for each parameter. As expected, $\eta$-shrinkage is large for the parameters associated with the time-to-events process. That does not mean that the statistical models for $h_0$ and $\gamma$ are misspecified, but that the data does not allow us to correctly recover these individual parameters.
Distparam1.png


The simulated individual parameters now allow us to construct a diagnostic tool for the distributions of the individual parameters. Only the distribution of the clearance $Cl$ would appear to be rejected. Though the model had supposed a normal distribution, the simulated parameters seem to suggest an asymmetric distribution. This diagnostic plot leads us to think about testing the hypothesis of a log-normal distribution for $Cl$.
Distparam2.png


5) Covariate model.

The model assumes that for a given individual parameter $\psi_i$, there exists a function $h$ such that $h(\psi_i) = h(\psi_{\rm pop}) + \eta_i$, where the random effects are i.i.d. Gaussian variables. We can then graphically display the random effects simulated with the conditional distributions as a function of the various covariates in order to see whether this hypothesis is valid.


These plots clearly show that the simulated random effects for $V$ and $Cl$ are correlated with the weight and have different distributions depending on gender. The assumption that volume $V$ and clearance $Cl$ are independent of weight should be rejected. The statistical model also needs to take into account the fact that both predicted volume and clearance increase with weight.
(Figure: Eval covariate2.png — simulated random effects plotted against weight and gender.)


6) The correlation model. The model assumes that for a given individual, the random effects associated with each individual parameter are independent. We can plot each pair of random effects simulated with the conditional distributions against each other to see whether this hypothesis is valid.
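Numerically, this amounts to inspecting the empirical correlation matrix of the simulated random effects (a sketch; `etas` is a hypothetical array with one column per parameter):

```python
import numpy as np

def random_effect_correlations(etas):
    """Empirical correlation matrix of the simulated random effects;
    off-diagonal entries far from 0 argue for correlated random effects."""
    return np.corrcoef(etas, rowvar=False)
```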


The various point clouds show no correlation between random effects except for $(\eta_V,\eta_{Cl})$ and perhaps $(\eta_{ka},\eta_V)$.
(Figure: Correl1.png — pairwise scatter plots of the simulated random effects.)


7) Visual predictive checks.

A VPC is a diagnostic tool well suited to continuous data. It allows us to summarize in the same figure the structural and statistical models. The VPC shown uses the 10%, 50% and 90% quantiles of the observations, after grouping them into bins over successive time intervals. Then, prediction intervals for these quantiles under ${\cal M}_0$ are estimated by Monte Carlo.
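A minimal sketch of this construction, assuming the replicate datasets have already been simulated under ${\cal M}_0$ (all names and shapes are illustrative):

```python
import numpy as np

def vpc(times, obs, sim_obs, bin_edges, probs=(0.1, 0.5, 0.9), level=0.9):
    """Sketch of a VPC computation: empirical quantiles of the observations per
    time bin, plus Monte Carlo prediction intervals from replicates simulated
    under M0. `sim_obs` has shape (n_replicates, n_obs)."""
    lo, hi = (1 - level) / 2, 1 - (1 - level) / 2
    results = []
    for a, b in zip(bin_edges[:-1], bin_edges[1:]):
        idx = (times >= a) & (times < b)                     # observations in the bin
        obs_q = np.quantile(obs[idx], probs)                 # observed quantiles
        sim_q = np.quantile(sim_obs[:, idx], probs, axis=1)  # quantiles per replicate
        pi = np.quantile(sim_q, [lo, hi], axis=1)            # prediction intervals
        results.append((obs_q, pi))
    return results
```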


To make VPCs easier to interpret, we show in red the zones where the observed quantiles fall outside the prediction intervals. Here, the structural model seems adequate, but the statistical model exhibits some inconsistencies. In particular, the three quantiles obtained from the observations are much closer together than model ${\cal M}_0$ would suggest. This adds weight to the suggestion that a proportional component should be added to the error model.
(Figure: Vpc4.png — visual predictive check, with observed quantiles outside the prediction intervals highlighted in red.)


8) Kaplan-Meier plots.

Special diagnostic plots need to be defined for non-continuous observations. For survival (time-to-event) data, we can use Kaplan-Meier plots (for the first event) as a statistic, and/or the average cumulative number of events per individual (i.e., the mean number of observed events before time $t$). Prediction intervals for these statistics can be estimated by Monte Carlo.
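For reference, the Kaplan-Meier product-limit estimator itself can be sketched as follows (`times` and `events` are hypothetical inputs holding, for each individual, the time of the first event or of censoring and an event indicator):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function for the first event.
    `events[i]` is 1 if individual i had an event at times[i], 0 if censored."""
    t = np.asarray(times, dtype=float)
    d = np.asarray(events, dtype=int)
    surv, out_t, out_s = 1.0, [0.0], [1.0]
    for ti in np.unique(t[d == 1]):     # distinct event times
        di = d[t == ti].sum()           # number of events at ti
        ni = (t >= ti).sum()            # number still at risk just before ti
        surv *= 1 - di / ni             # product-limit step
        out_t.append(ti)
        out_s.append(surv)
    return np.array(out_t), np.array(out_s)
```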


The shapes of the curves seem correct. The model appears to slightly overestimate the survival function after the 15 hr mark, but it is difficult at this point to decide whether this comes from the time-to-event model itself, the statistical model or the model for the concentration.
(Figure: Km2.png — Kaplan-Meier plot with prediction interval.)


In summary, this ensemble of diagnostic plots suggests that we should assume:


  • a log-normal distribution for $Cl$
  • a combined residual error model, e.g., $y=f+(a+bf)\teps$
  • a statistical model for $Cl$ and $V$ which incorporates weight as a covariate, assuming for instance a linear relationship between $\log(V)$ and $\log({\rm weight})$ and between $\log(Cl)$ and $\log({\rm weight})$ (one possible parameterization is written out after this list)
  • a linear correlation between $\log(Cl)$ and $\log(V)$.
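To make the covariate and correlation items above concrete, one common parameterization of such a model is (the 70 kg reference weight is an illustrative choice, not taken from the analysis above):

\(\begin{eqnarray}
\log(V_i) &=& \log(V_{\rm pop}) + \beta_V \, \log(w_i/70) + \eta_{V,i} \\
\log(Cl_i) &=& \log(Cl_{\rm pop}) + \beta_{Cl} \, \log(w_i/70) + \eta_{Cl,i} ,
\end{eqnarray}\)

where $w_i$ is the weight of individual $i$ and $(\eta_{V,i}, \eta_{Cl,i})$ is a Gaussian vector with a non-diagonal covariance matrix.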


This new model can easily be implemented in $\monolix$. Population parameters and individual parameters are then estimated anew, and new diagnostic plots are drawn. A few of these are displayed below; they clearly show that the new model is better than the previous one and can likely be retained as a candidate model.

(Figure: Diagnostic.png — diagnostic plots for the new model.)



Model selection


Statistical tools for model selection

Statistical tools for model selection include information criteria (AIC and BIC) and hypothesis tests such as the Wald test and likelihood ratio test (LRT).

The Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) are defined by

\(\begin{eqnarray}
AIC &=& - 2 \, {\llike}(\theta;\by) + 2 P \\
BIC &=& - 2 \, {\llike}(\theta;\by) + \log(N) \, P ,
\end{eqnarray}\)

where $P$ is the total number of parameters to be estimated and $N$ the number of subjects. The models being compared using AIC or BIC need not be nested, unlike the case when models are being compared using an F-test or likelihood ratio test.
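As a quick sketch of these formulas in Python (the numerical values are illustrative only, not those estimated in the example below):

```python
import numpy as np

def aic_bic(loglik, P, N):
    """AIC and BIC from the observed log-likelihood, the number of estimated
    parameters P and the number of subjects N (see the remark below on the
    choice of the BIC sample size)."""
    return -2.0 * loglik + 2 * P, -2.0 * loglik + np.log(N) * P

aic, bic = aic_bic(loglik=-695.0, P=12, N=40)   # illustrative values
```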


Remarks

  • Surprisingly, the formula for calculating BIC differs from one software package to another. The reason is that the effective sample size is not clearly defined in the context of mixed-effects models: the question is whether the penalty term should use the number of subjects $N$ or the total number of observations $n_{\mathrm{tot} }$. The penalty using $n_{\mathrm{tot} }$ is implemented in the R package nlme and the SPSS procedure MIXED, while $N$ is used in saemix, in $\monolix$ and in SAS proc NLMIXED.
An appropriate decomposition of the complete log-likelihood, combined with the Laplace approximation, can be used to derive an asymptotic BIC approximation. This leads to an optimal BIC penalty based on two terms, proportional to $\log N$ and $\log n_{\mathrm{tot} }$ respectively, that adapts to the mixed-effects structure of the model. This approach is not yet implemented in any software.


  • AIC and BIC are justified by asymptotic criteria (the AIC heuristic uses Wilks' theorem, while BIC relies on Bayesian arguments), i.e., they apply when the number of individuals increases while the model dimension stays fixed. In the alternative, non-asymptotic approach, the model size can increase freely, and the form of the penalty can differ from one model to the next. It can be shown, for example, that for certain Gaussian models the penalty term has the form $c_1P + c_2P\log(N/P)$. The problem then becomes calibrating the coefficients $c_1$ and $c_2$ to obtain an optimal penalty, which is not necessarily simple, making this approach harder to use in real applications.


The observed log-likelihood ${\llike}(\theta;\by) = \log(\py(\by;\theta))$ cannot be computed in closed form for nonlinear mixed-effects models. It can be estimated by Monte Carlo using the importance sampling algorithm described in the Estimation of the log-likelihood section.
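A minimal sketch of such an importance sampling estimator, for a single individual with a scalar random effect $\eta_i \sim {\cal N}(0,\omega^2)$; the function `log_cond` and the proposal parameters are assumptions of this illustration, and the actual algorithm is the one described in the Estimation of the log-likelihood section:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def is_loglik(log_cond, eta_mean, eta_sd, omega, n_is=5000):
    """Importance sampling estimate of log p(y_i; theta), marginalizing over
    eta_i ~ N(0, omega^2). `log_cond(eta)` returns log p(y_i | eta) for an
    array of eta values; the proposal N(eta_mean, eta_sd^2) should roughly
    approximate the conditional distribution p(eta_i | y_i)."""
    eta = rng.normal(eta_mean, eta_sd, n_is)
    logw = (log_cond(eta)
            + norm.logpdf(eta, 0.0, omega)          # random-effect prior
            - norm.logpdf(eta, eta_mean, eta_sd))   # proposal density
    m = logw.max()
    return m + np.log(np.mean(np.exp(logw - m)))    # log-sum-exp for stability
```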

When comparing two nested models ${\cal M}_0$ and ${\cal M}_1$ with dimensions $P_0$ and $P_1$ (with $P_1>P_0$), the likelihood ratio test (LRT) uses the test statistic

\( LRT = 2 ( {\llike}(\hthetag_1;\by) - {\llike}(\hthetag_0;\by) ) , \)

where $\hthetag_0$ and $\hthetag_1$ are the MLE of $\theta$ under ${\cal M}_0$ and ${\cal M}_1$.

Depending on the hypotheses, the limit distribution of $LRT$ is either a $\chi ^2$ distribution or a mixture of a $\chi^2$ distribution and a Dirac $\delta$ distribution. For example:


    - Testing whether some fixed effects are null and assuming the same covariance structure for the random effects implies that

    \( LRT \limite{N\to \infty}{} \chi^2(P_1-P_0) . \)


    - Testing whether some correlations in the covariance matrix $\IIV$ are null and assuming the same covariate model implies that

    \( LRT \limite{N\to \infty}{} \chi^2(P_1-P_0) . \)


    - Testing whether the variance of one of the random effects is zero and assuming the same covariate model implies that

    \( LRT \limite{N\to \infty}{} \displaystyle{ \frac{1}{2} }\chi^2(1) + \displaystyle{ \frac{1}{2} }\delta_0 . \)
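These limit distributions translate directly into $p$-values. A short sketch covering both the standard case and the boundary case $\omega^2 = 0$ (the function name is hypothetical); note that it reproduces the familiar 3.84 cutoff, since $P(\chi^2(1) \geq 3.84) \approx 0.05$:

```python
from scipy.stats import chi2

def lrt_pvalue(lrt, df, boundary=False):
    """p-value of the likelihood ratio statistic. With boundary=True, the
    limit law is the mixture 0.5*chi2(1) + 0.5*delta_0 (variance test)."""
    if boundary:
        return 1.0 if lrt <= 0 else 0.5 * chi2.sf(lrt, 1)
    return chi2.sf(lrt, df)

p = lrt_pvalue(3.84, df=1)   # approximately 0.05
```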


Statistical tests, like BIC, help us decide whether the difference between two models is statistically significant. Suppose that we want to test whether a fixed model parameter $\beta$ is null:

\( H_0: \quad \beta=0 \quad vs \quad H_1: \quad \beta\neq0 . \)

We construct a test statistic $T$ whose distribution under $H_0$ allows us to calculate a $p$-value, i.e., the probability under $H_0$ that $T$ is at least as large as the value actually observed. A small $p$-value leads us to reject $H_0$ with high confidence. It is usual to use an arbitrary cutoff of 5% to make this decision: we frequently read statements such as "a decrease in the objective function of at least 3.84 was required to identify a significant covariate". In the same way, we could select models based on their BIC values under $H_0$ and $H_1$ by providing an arbitrary decision rule. It is sometimes suggested to choose $H_1$ if the difference $BIC_{H_1} - BIC_{H_0}$ is below a certain arbitrary cutoff.

These approaches seem to simplify the modeler's life because they provide decision rules that can be applied systematically without thinking and thus justify decisions. But a rule, whatever it is, should never stop us asking why we are applying it and whether it is applicable in the present case. Remember that even a very small difference will be statistically significant if the sample size is large enough. The question is thus not to know whether a difference is statistically significant, but whether it is physically or biologically significant. We must therefore look carefully at the size of an effect and its real impact, both for understanding the model and for understanding its impact on the model's predictive capacities.



An example

We are going to continue using our joint model for concentration data and hemorrhaging events. The structural model is chosen to be the one used earlier for the diagnostic tests. We would now like to compare several statistical models:

${\cal M}_1$: all the individual parameters are log-normally distributed, there is no correlation between individual parameters, the residual error model is a combined one ($y=f+(a+bf)\teps$), and both $\log(V)$ and $\log(Cl)$ are linear functions of $\log({\rm weight})$.

${\cal M}_2$: model ${\cal M}_1$, assuming that $\log(V_i)$ and $\log(Cl_i)$ are linearly correlated.

${\cal M}_3$: model ${\cal M}_2$, assuming that $\gamma_i=\gamma_{\rm pop}$ (i.e., $\omega_\gamma = 0$).

${\cal M}_4$: model ${\cal M}_2$, assuming that $Cl$ is not a function of the weight.


The results for these four models are as follows:


Model           $-2\times {\llike}$    BIC
${\cal M}_1$           1390           1451
${\cal M}_2$           1370           1436
${\cal M}_3$           1370           1432
${\cal M}_4$           1446           1477


Clearly, models ${\cal M}_2$ and ${\cal M}_3$ are the best in terms of BIC. We can therefore envisage selecting a model that both includes a correlation between $\log(V)$ and $\log(Cl)$ and supposes that the volume $V$ is a function of weight.

Rather than a purely statistical criterion (LRT or BIC), it is above all the estimated standard deviation of $\gamma$ ($\hat{\omega}_\gamma = 0.002$) under model ${\cal M}_2$ that leads us to conclude that the inter-individual variability of $\gamma$ is negligible.

It is also important to try to evaluate the impact that a bad decision could have. Retaining $\hat{\omega}_\gamma = 0.002$ would have little impact on predictions because $\gamma_i$ is log-normally distributed, and this value represents a variability of only around 0.2%. Models ${\cal M}_2$ and ${\cal M}_3$ are thus practically identical, and there is no particular advantage in selecting one over the other.
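The 0.2% figure can be checked directly: for a log-normally distributed parameter, the coefficient of variation is

\( CV(\gamma_i) = \sqrt{e^{\omega_\gamma^2} - 1} \approx \omega_\gamma = 0.002 , \)

i.e., a variability of about 0.2% around the population value.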



Bibliography

Akaike, H. (1983). Information measures and model selection. Bulletin of the International Statistical Institute, 50(1):277-291.

Arlot, S., Celisse, A. (2010). A survey of cross-validation procedures for model selection. Statistics Surveys, 4:40-79.

Berger, J.O., Pericchi, L.R. (1996). The intrinsic Bayes factor for model selection and prediction. Journal of the American Statistical Association, 91(433):109-122.

Bergstrand, M., Hooker, A.C., Wallin, J.E., Karlsson, M.O. (2011). Prediction-corrected visual predictive checks for diagnosing nonlinear mixed-effects models. The AAPS Journal, 13(2):143-151.

Burnham, K.P., Anderson, D.R. (2002). Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach. Springer Verlag.

Comets, E., Brendel, K. (2010). Model evaluation in nonlinear mixed effect models, with applications to pharmacokinetics. Journal de la Société Française de Statistique, 151(1):106-128.

Comets, E., Brendel, K., Mentré, F. (2008). Computing normalised prediction distribution errors to evaluate nonlinear mixed-effect models: the npde add-on package for R. Computer Methods and Programs in Biomedicine, 90(2):154-166.

Hausman, J., McFadden, D. (1984). Specification tests for the multinomial logit model. Econometrica: Journal of the Econometric Society, pp. 1219-1240.

Hooker, A., Karlsson, M.O., Jonsson, E.N. Model diagnostic using XPOSE. http://xpose.sourceforge.net/generic_chm/xpose.VPC.html

Karlsson, M.O., Holford, N. (2008). A tutorial on visual predictive checks. PAGE 2008. http://www.page-meeting.org/pdf_assets/8694-Karlsson_Holford_VPC_Tutorial_hires.pdf

Lavielle, M., Bleakley, K. (2011). Automatic data binning for improved visual diagnosis of pharmacometric models. Journal of Pharmacokinetics and Pharmacodynamics, 38(6):861-871.

Massart, P. (2007). Concentration Inequalities and Model Selection. Ecole d'Eté de Probabilités de Saint-Flour XXXIII-2003, Lecture Notes in Mathematics 1896.

Neyman, J., Pearson, E.S. (1992). On the problem of the most efficient tests of statistical hypotheses. Springer.

Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2):461-464.

Shimodaira, H., Hasegawa, M. (1999). Multiple comparisons of log-likelihoods with applications to phylogenetic inference. Molecular Biology and Evolution, 16:1114-1116.

Vaida, F., Blanchard, S. (2005). Conditional Akaike information for mixed-effects models. Biometrika, 92(2):351-370.

Wald, A. (1943). Tests of statistical hypotheses concerning several parameters when the number of observations is large. Transactions of the American Mathematical Society, 54(3):426-482.

Wilks, S.S. (1938). The large-sample distribution of the likelihood ratio for testing composite hypotheses. The Annals of Mathematical Statistics, 9(1):60-62.

Zhao, P., Yu, B. (2006). On model selection consistency of Lasso. The Journal of Machine Learning Research, 7:2541-2563.

