Difference between revisions of "TestMarc1"

(14 intermediate revisions by the same user not shown)
  
  
{{OutlineText
|text=
- A model is a joint probability distribution.

[...]

where $\qyj$ is the normal distribution defined in [[#ex_proba1|(1.4)]].
}}
{{Example2
|title=Example:
|text= 500 mg of a drug is given by intravenous bolus to a patient at time 0. We assume that the evolution of the plasmatic concentration of the drug over time is described by the pharmacokinetic (PK) model

{{Equation1
|equation=<math> f(t;V,k) = \displaystyle{ \frac{500}{V} }e^{-k \, t} , </math> }}

where $V$ is the volume of distribution and $k$ the elimination rate constant. The concentration is measured at times $(t_j, 1\leq j \leq n)$ with additive residual errors:

{{Equation1
|equation=<math> y_j = f(t_j;V,k) + e_j , \quad 1 \leq j \leq n . </math> }}

Assuming that the residual errors $(e_j)$ are independent and normally distributed with constant variance $a^2$, the observed values $(y_j)$ are also independent random variables and

{{EquationWithRef
|equation=<div id="ex_proba1" ><math>
y_j \sim {\cal N} \left( f(t_j ; V,k) , a^2 \right), \quad 1 \leq j \leq n. </math></div>
|reference=(1.4) }}

Here, the vector of parameters $\psi$ is $(V,k,a)$. $V$ and $k$ are the PK parameters for the structural PK model and $a$ the residual error parameter.
As the $y_j$ are independent, the joint distribution of $y$ is the product of their marginal distributions:

{{Equation1
|equation=<math> \py(y ; \psi,\vt) = \prod_{j=1}^n \pyj(y_j ; \psi,t_j) ,
</math> }}

where $\qyj$ is the normal distribution defined in [[#ex_proba1|(1.4)]]. }}
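To make the example concrete, here is a minimal R sketch (not part of the original page; all numerical values are hypothetical) that simulates observations from model [[#ex_proba1|(1.4)]]:

<pre style=" background-color:#EFEFEF; border: none;">
# Hypothetical values, for illustration only
V <- 10      # volume of distribution
k <- 0.2     # elimination rate constant
a <- 0.5     # residual error standard deviation

f <- function(t, V, k) 500/V * exp(-k*t)   # structural PK model

t <- c(0.5, 1, 2, 4, 8, 12)                # measurement times
y <- f(t, V, k) + a*rnorm(length(t))       # y_j ~ N(f(t_j;V,k), a^2)
</pre>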
  
 
=== A model for several individuals ===

[...]

{{OutlineText
|text=
- In this context, the model is the joint distribution of the observations and the individual parameters:
[...]

<!-- %$${\rm CI}(\psi_k) = [\hatpsi_k - \widehat{\rm s.e}(\hatpsi_k)q((1-\alpha)/2,n-d) , \hatpsi_k + \widehat{\rm s.e}(\hatpsi_k)q((1+\alpha)/2,n-d)]$$ -->
<!--  %where $q(\alpha,\nu)$ is the quantile of order $\alpha$ of a $t$-distribution with $\nu$ degrees of freedom. -->
}}


<br>

== Examples With Equations/Code/Tables ==


{{ExampleWithCode
|title1= Example 1:
|title2= Poisson model with time varying intensity
|text=

|equation=<math> \begin{array}{rcl}
\psi_i &=& (\alpha_i,\beta_i) \\[0.3cm]
\lambda(t,\psi_i) &=& \alpha_i + \beta_i\,t \\[0.3cm]
\prob{y_{ij}=k} &=& \displaystyle{ \frac{\lambda(t_{ij} , \psi_i)^k}{k!} } e^{-\lambda(t_{ij} , \psi_i)}
\end{array}</math>
|code=
{{MLXTranForTable
|name=
|text=
<pre style=" background-color:#EFEFEF; border: none;">
INPUT:
input = {alpha, beta}

EQUATION:
lambda = alpha + beta*t

DEFINITION:
y ~ poisson(lambda)
</pre> }}
}}
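As a complement (not part of the original page; parameter values are hypothetical), counts from this model could be simulated in R as follows:

<pre style=" background-color:#EFEFEF; border: none;">
alpha <- 2                     # hypothetical individual intercept
beta  <- 0.5                   # hypothetical individual slope
t <- 0:10                      # observation times
lambda <- alpha + beta*t       # time-varying intensity
y <- rpois(length(t), lambda)  # y_ij ~ Poisson(lambda(t_ij))
</pre>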
{{ExampleWithTable1bis
|title1= Example 1:
|title2= Poisson model with time varying intensity
|text=

|equation=<math> \begin{array}{rcl}
\psi_i &=& (\alpha_i,\beta_i) \\[0.3cm]
\lambda(t,\psi_i) &=& \alpha_i + \beta_i\,t \\[0.3cm]
\prob{y_{ij}=k} &=& \displaystyle{ \frac{\lambda(t_{ij} , \psi_i)^k}{k!} } e^{-\lambda(t_{ij} , \psi_i)}
\end{array}</math>
|code=
{{MLXTranForTable
|name=
|text=
<pre style=" background-color:#EFEFEF; border: none;">
INPUT:
input = {alpha, beta}

EQUATION:
lambda = alpha + beta*t

DEFINITION:
y ~ poisson(lambda)
</pre> }}
}}
{{ExampleWithTable1ter
|title1= Example 1:
|title2= Poisson model with time varying intensity
|text=

|equation=<math> \begin{array}{rcl}
\psi_i &=& (\alpha_i,\beta_i) \\[0.3cm]
\lambda(t,\psi_i) &=& \alpha_i + \beta_i\,t \\[0.3cm]
\prob{y_{ij}=k} &=& \displaystyle{ \frac{\lambda(t_{ij} , \psi_i)^k}{k!} } e^{-\lambda(t_{ij} , \psi_i)}
\end{array}</math>
|code=
{{MLXTranForTable
|name=
|text=
<pre style=" background-color:#EFEFEF; border: none;">
INPUT:
input = {alpha, beta}

EQUATION:
lambda = alpha + beta*t

DEFINITION:
y ~ poisson(lambda)
</pre> }}
}}
  
  
{{ExampleWithTable_4
|title1= Example 1:
|title2= Poisson model with time varying intensity

[...]
  
  
{| cellpadding="5" cellspacing="5"
| style="width:550px;" |
{{RcodeForTable
|name=
|code=
<pre style="background-color: #EFEFEF; border:none">
# -2 x log-likelihood (up to a constant), assuming a constant
# residual standard deviation g = x[4]; predc1 and predc2 are
# the two structural prediction functions, defined elsewhere
fmin1=function(x,y,t)
{f=predc1(t,x)
g=x[4]
e=sum( ((y-f)/g)^2 + log(g^2))
}

fmin2=function(x,y,t)
{f=predc2(t,x)
g=x[4]
e=sum( ((y-f)/g)^2 + log(g^2))
}

#--------- MLE --------------------------------

pk.nlm1=nlm(fmin1, c(0.3,6,0.2,1), y, t, hessian=TRUE)
psi1=pk.nlm1$estimate

pk.nlm2=nlm(fmin2, c(3,10,0.2,4), y, t, hessian=TRUE)
psi2=pk.nlm2$estimate
</pre>
}}
| style="width:550px;" |
:Here are the parameter estimation results:


{{JustCodeForTable
|code=
<pre style="background-color: #EFEFEF; border:none; color:blue">
> cat(" psi1 =",psi1,"\n\n")
psi1 = 0.3240916 6.001204 0.3239337 0.4366948

> cat(" psi2 =",psi2,"\n\n")
psi2 = 3.203111 8.999746 0.229977 0.2555242
</pre> }}
|}
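For reference (this remark is not part of the original page): with a constant residual standard deviation $g$, the quantity minimized by <code>fmin1</code> and <code>fmin2</code> above is minus twice the log-likelihood, up to the additive constant $n\log(2\pi)$:

{{Equation1
|equation=<math> -2 \, \log \py(y ; \psi,\vt) \ = \ \sum_{j=1}^n \left( \frac{(y_j - f(t_j;\psi))^2}{g^2} + \log(g^2) \right) \ + \ n \log(2\pi) . </math> }}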
== Equations ==


Here are some examples of these various types of data:


* Continuous data with a normal distribution:

{{EquationWithBorder| <math>y_{ij} \sim {\cal N}\left(f(t_{ij},\psi_i),\, g^2(t_{ij},\psi_i)\right)</math> }}

:Here, $\lambda(t_{ij},\psi_i)=\left(f(t_{ij},\psi_i),\,g(t_{ij},\psi_i)\right)$, where $f(t_{ij},\psi_i)$ is the mean and $g(t_{ij},\psi_i)$ the standard deviation of $y_{ij}$.

* Categorical data with a Bernoulli distribution:

{{EquationWithBorder|<math> y_{ij} \sim {\cal B}\left(\lambda(t_{ij},\psi_i)\right) </math> }}

:Here, $\lambda(t_{ij},\psi_i)$ is the probability that $y_{ij}$ takes the value 1.


{{EquationWithRef
|equation=<div id="myRef"><math>
y_{ij} \sim {\cal N}\left(f(t_{ij},\psi_i),\, g^2(t_{ij},\psi_i)\right)
</math></div>
|reference=(2.1) }}


{{Equation1
|equation= <math>y_{ij} \sim {\cal N}\left(f(t_{ij},\psi_i),\, g^2(t_{ij},\psi_i)\right)</math> }}


{{EquationWithBorder|equation=<math> \like(\theta ; \psi_1,\psi_2,\ldots, \psi_N) \ \ \eqdef \ \ \prod_{i=1}^{N}\ppsii(\psi_i ; c_i , \theta). </math> }}


{{EquationWithBorder
|equation= <math> \like(\theta ; \psi_1,\psi_2,\ldots, \psi_N) \ \ \eqdef \ \ \prod_{i=1}^{N}\ppsii(\psi_i ; c_i , \theta). </math> }}


{{EquationWithBorder
|equation= <math> {\like}(\theta ; \psi_1,\psi_2,\ldots, \psi_N) \ \ \eqdef \ \ \prod_{i=1}^{N}\ppsii(\psi_i ; c_i , \theta). </math> }}


{{ImageWithCaption|individual4.png|caption=This is the caption of the figure}}
 

Latest revision as of 13:42, 2 May 2013

$ \newcommand{\argmin}[1]{ \mathop{\rm arg} \mathop{\rm min}\limits_{#1} } \newcommand{\nominal}[1]{#1^{\star}} \newcommand{\psis}{\psi{^\star}} \newcommand{\phis}{\phi{^\star}} \newcommand{\hpsi}{\hat{\psi}} \newcommand{\hphi}{\hat{\phi}} \newcommand{\teps}{\varepsilon} \newcommand{\limite}[2]{\mathop{\longrightarrow}\limits_{\mathrm{#1}}^{\mathrm{#2}}} \newcommand{\DDt}[1]{\partial^2_\theta #1} \def\bu{\boldsymbol{u}} \def\bt{\boldsymbol{t}} \def\bT{\boldsymbol{T}} \def\by{\boldsymbol{y}} \def\bx{\boldsymbol{x}} \def\bc{\boldsymbol{c}} \def\bw{\boldsymbol{w}} \def\bz{\boldsymbol{z}} \def\bpsi{\boldsymbol{\psi}} \def\bbeta{\beta} \def\aref{a^\star} \def\kref{k^\star} \def\model{M} \def\hmodel{m} \def\mmodel{\mu} \def\imodel{H} \def\like{\cal L} \def\thmle{\hat{\theta}} \def\ofim{I^{\rm obs}} \def\efim{I^{\star}} \def\Imax{\rm Imax} \def\probit{\rm probit} \def\vt{t} \def\id{\rm Id} \def\teta{\tilde{\eta}} \newcommand{\eqdef}{\mathop{=}\limits^{\mathrm{def}}} \newcommand{\deriv}[1]{\frac{d}{dt}#1(t)} \newcommand{\pred}[1]{\tilde{#1}} \def\phis{\phi{^\star}} \def\hphi{\tilde{\phi}} \def\hw{\tilde{w}} \def\hpsi{\tilde{\psi}} \def\hatpsi{\hat{\psi}} \def\hatphi{\hat{\phi}} \def\psis{\psi{^\star}} \def\transy{u} \def\psipop{\psi_{\rm pop}} \newcommand{\psigr}[1]{\hat{\bpsi}_{#1}} \newcommand{\Vgr}[1]{\hat{V}_{#1}} %\def\pmacro{\mathcrm{p}} %\def\pmacro{\verb!p!} \def\pmacro{\text{p}} \def\py{\pmacro} \def\pt{\pmacro} \def\pc{\pmacro} \def\pu{\pmacro} \def\pyi{\pmacro} \def\pyj{\pmacro} \def\ppsi{\pmacro} \def\ppsii{\pmacro} \def\pcpsith{\pmacro} \def\pth{\pmacro} \def\pypsi{\pmacro} \def\pcypsi{\pmacro} \def\ppsic{\pmacro} \def\pcpsic{\pmacro} \def\pypsic{\pmacro} \def\pypsit{\pmacro} \def\pcypsit{\pmacro} \def\pypsiu{\pmacro} \def\pcypsiu{\pmacro} \def\pypsith{\pmacro} \def\pypsithcut{\pmacro} \def\pypsithc{\pmacro} \def\pcypsiut{\pmacro} \def\pcpsithc{\pmacro} \def\pcthy{\pmacro} \def\pyth{\pmacro} \def\pcpsiy{\pmacro} \def\pz{\pmacro} \def\pw{\pmacro} \def\pcwz{\pmacro} \def\pw{\pmacro} \def\pcyipsii{\pmacro} \def\pyipsii{\pmacro} \def\pypsiij{\pmacro} \def\pyipsiONE{\pmacro} \def\ptypsiij{\pmacro} \def\pcyzipsii{\pmacro} \def\pczipsii{\pmacro} \def\pcyizpsii{\pmacro} \def\pcyijzpsii{\pmacro} \def\pcyiONEzpsii{\pmacro} \def\pcypsiz{\pmacro} \def\pccypsiz{\pmacro} \def\pypsiz{\pmacro} \def\pcpsiz{\pmacro} \def\peps{\pmacro} \def\psig{\psi} \def\psigprime{\psig^{\prime}} \def\psigiprime{\psig_i^{\prime}} \def\psigk{\psig^{(k)}} \def\psigki{\psig_i^{(k)}} \def\psigkun{\psig^{(k+1)}} \def\psigkuni{\psig_i^{(k+1)}} \def\psigi{\psig_i} \def\psigil{\psig_{i,\ell}} \def\phig{\phi} \def\phigi{\phig_i} \def\phigil{\phig_{i,\ell}} \def\etagi{\eta_i} \def\IIV{\Omega} \def\thetag{\theta} \def\thetagk{\theta_k} \def\thetagkun{\theta_{k+1}} \def\thetagkunm{\theta_{k-1}} \def\sgk{s_{k}} \def\sgkun{s_{k+1}} \def\yg{y} \def\xg{x} \def\qx{p_x} \def\qy{p_y} \def\qt{p_t} \def\qc{p_c} \def\qu{p_u} \def\qyi{p_{y_i}} \def\qyj{p_{y_j}} \def\qpsi{p_{\psi}} \def\qpsii{p_{\psi_i}} \def\qcpsith{p_{\psi|\theta}} \def\qth{p_{\theta}} \def\qypsi{p_{y,\psi}} \def\qcypsi{p_{y|\psi}} \def\qpsic{p_{\psi,c}} \def\qcpsic{p_{\psi|c}} \def\qypsic{p_{y,\psi,c}} \def\qypsit{p_{y,\psi,t}} \def\qcypsit{p_{y|\psi,t}} \def\qypsiu{p_{y,\psi,u}} \def\qcypsiu{p_{y|\psi,u}} \def\qypsith{p_{y,\psi,\theta}} \def\qypsithcut{p_{y,\psi,\theta,c,u,t}} \def\qypsithc{p_{y,\psi,\theta,c}} \def\qcypsiut{p_{y|\psi,u,t}} \def\qcpsithc{p_{\psi|\theta,c}} \def\qcthy{p_{\theta | y}} \def\qyth{p_{y,\theta}} \def\qcpsiy{p_{\psi|y}} \def\qz{p_z} 
\def\qw{p_w} \def\qcwz{p_{w|z}} \def\qw{p_w} \def\qcyipsii{p_{y_i|\psi_i}} \def\qyipsii{p_{y_i,\psi_i}} \def\qypsiij{p_{y_{ij}|\psi_{i}}} \def\qyipsi1{p_{y_{i1}|\psi_{i}}} \def\qtypsiij{p_{\transy(y_{ij})|\psi_{i}}} \def\qcyzipsii{p_{z_i,y_i|\psi_i}} \def\qczipsii{p_{z_i|\psi_i}} \def\qcyizpsii{p_{y_i|z_i,\psi_i}} \def\qcyijzpsii{p_{y_{ij}|z_{ij},\psi_i}} \def\qcyi1zpsii{p_{y_{i1}|z_{i1},\psi_i}} \def\qcypsiz{p_{y,\psi|z}} \def\qccypsiz{p_{y|\psi,z}} \def\qypsiz{p_{y,\psi,z}} \def\qcpsiz{p_{\psi|z}} \def\qeps{p_{\teps}} \def\neta{n_\eta} \def\ncov{M} \def\npsi{n_\psig} \def\beeta{\eta} \def\logit{\rm logit} \def\transy{u} \def\so{O} \newcommand{\prob}[1]{ \mathbb{P}\left(#1\right)} \newcommand{\probs}[2]{ \mathbb{P}_{#1}\left(#2\right)} \newcommand{\esp}[1]{\mathbb{E}\left(#1\right)} \newcommand{\esps}[2]{\mathbb{E}_{#1}\left(#2\right)} \newcommand{\var}[1]{\mbox{Var}\left(#1\right)} \newcommand{\vars}[2]{\mbox{Var}_{#1}\left(#2\right)} \newcommand{\std}[1]{\mbox{sd}\left(#1\right)} \newcommand{\stds}[2]{\mbox{sd}_{#1}\left(#2\right)} \newcommand{\corr}[1]{\mbox{Corr}\left(#1\right)} \newcommand{\Rset}{\mbox{$\mathbb{R}$}} \newcommand{\Yr}{\mbox{$\mathcal{Y}$}} \newcommand{\teps}{\varepsilon} \newcommand{\like}{\cal L} \newcommand{\logit}{\rm logit} \newcommand{\transy}{u} \newcommand{\repy}{y^{(r)}} \newcommand{\brepy}{\boldsymbol{y}^{(r)}} \newcommand{\vari}[3]{#1_{#2}^{{#3}}} \newcommand{\dA}[2]{\dot{#1}_{#2}(t)} \newcommand{\nitc}{N} \newcommand{\itc}{I} \newcommand{\vl}{V} \newcommand{tstart}{t_{start}} \newcommand{tstop}{t_{stop}} \newcommand{\one}{\mathbb{1}} \newcommand{\hazard}{h} \newcommand{\cumhaz}{H} \newcommand{\std}[1]{\mbox{sd}\left(#1\right)} \newcommand{\eqdef}{\mathop{=}\limits^{\mathrm{def}}} \def\cpop{c_{\rm pop}} \def\Vpop{V_{\rm pop}} \def\iparam{l} \newcommand{\trcov}[1]{#1} \def\mlxtran{\mathbb{MLXtran} } \def\monolix{\Bbb{Monolix}} $

Introduction

A model built for real-world applications can involve various types of variable, such as measurements, individual and population parameters, covariates, design, etc. The model allows us to represent relationships between these variables.

If we consider things from a probabilistic point of view, some of the variables will be random, so the model becomes a probabilistic one, representing the joint distribution of these random variables.

Defining a model therefore means defining a joint distribution. The hierarchical structure of the model then allows it to be decomposed into submodels, i.e., allows the joint distribution to be decomposed into a product of conditional distributions.

Tasks such as estimation, model selection, simulation and optimization can then be expressed as specific ways of using this probability distribution.


- A model is a joint probability distribution.

- A submodel is a conditional distribution derived from this joint distribution.

- A task is a specific use of this distribution.


We will illustrate this approach starting with a very simple example that we will gradually make more sophisticated. Then we will see in various situations what can be defined as the model and what its inputs are.



An illustrative example


A model for the observations of a single individual

Let $y=(y_j, 1\leq j \leq n)$ be a vector of observations obtained at times $\vt=(t_j, 1\leq j \leq n)$. We consider that the $y_j$ are random variables and denote by $\qy$ the distribution (or pdf) of $y$. If we assume a parametric model, then there exists a vector of parameters $\psi$ that completely defines the distribution of $y$.

We can then explicitly represent this dependency with respect to $\psi$ by writing $\qy( \, \cdot \, ; \psi)$ for the pdf of $y$.

If we wish to be even more precise, we can even make it clear that this distribution is defined for a given design, i.e., a given vector of times $\vt$, and write $ \qy(\, \cdot \, ; \psi,\vt)$ instead.

By convention, the variables which are before the symbol ";" are random variables. Those that are after the ";" are non-random parameters or variables. When there is no risk of confusion, the non-random terms can be left out of the notation.


- In this context, the model is the distribution of the observations $\qy(\, \cdot \, ; \psi,\vt)$.
- The inputs of the model are the parameters $\psi$ and the design $\vt$.


Example:

500 mg of a drug is given by intravenous bolus to a patient at time 0. We assume that the evolution of the plasmatic concentration of the drug over time is described by the pharmacokinetic (PK) model

\( f(t;V,k) = \displaystyle{ \frac{500}{V} }e^{-k \, t} , \)

where $V$ is the volume of distribution and $k$ the elimination rate constant. The concentration is measured at times $(t_j, 1\leq j \leq n)$ with additive residual errors:

\( y_j = f(t_j;V,k) + e_j , \quad 1 \leq j \leq n . \)

Assuming that the residual errors $(e_j)$ are independent and normally distributed with constant variance $a^2$, the observed values $(y_j)$ are also independent random variables and

\( y_j \sim {\cal N} \left( f(t_j ; V,k) , a^2 \right), \quad 1 \leq j \leq n. \)
(1.4)

Here, the vector of parameters $\psi$ is $(V,k,a)$. $V$ and $k$ are the PK parameters for the structural PK model and $a$ the residual error parameter. As the $y_j$ are independent, the joint distribution of $y$ is the product of their marginal distributions:

\( \py(y ; \psi,\vt) = \prod_{j=1}^n \pyj(y_j ; \psi,t_j) , \)

where $\qyj$ is the normal distribution defined in (1.4).
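To illustrate how this model can be used (this sketch is not part of the original page; all numerical values are hypothetical), the parameters $(V,k,a)$ can be estimated by maximum likelihood in R:

f <- function(t, V, k) 500/V * exp(-k*t)       # structural PK model

t <- c(0.5, 1, 2, 4, 8, 12)                    # measurement times
y <- f(t, V=10, k=0.2) + 0.5*rnorm(length(t))  # data simulated with V=10, k=0.2, a=0.5

# -2 x log-likelihood of model (1.4), up to an additive constant
fmin <- function(x, y, t) {
  pred <- f(t, x[1], x[2])
  a <- x[3]
  sum(((y - pred)/a)^2 + log(a^2))
}

pk.nlm <- nlm(fmin, c(5, 0.5, 1), y, t)        # initial guess for (V, k, a)
psi <- pk.nlm$estimate                         # maximum likelihood estimate of (V, k, a)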


A model for several individuals

Now let us move to $N$ individuals. It is natural to suppose that each is represented by the same basic parametric model, but not necessarily the exact same parameter values. Thus, individual $i$ has parameters $\psi_i$. If we consider that individuals are randomly selected from the population, then we can treat the $\psi_i$ as if they were random vectors. As both $\by=(y_i , 1\leq i \leq N)$ and $\bpsi=(\psi_i , 1\leq i \leq N)$ are random, the model is now a joint distribution: $\qypsi$. Using basic probability, this can be written as:

\( \pypsi(\by,\bpsi) = \pcypsi(\by | \bpsi) \, \ppsi(\bpsi) .\)

If $\qpsi$ is a parametric distribution that depends on a vector $\theta$ of population parameters and a set of individual covariates $\bc=(c_i , 1\leq i \leq N)$, this dependence can be made explicit by writing $\qpsi(\, \cdot \,;\theta,\bc)$ for the pdf of $\bpsi$. Each $i$ has a potentially unique set of times $t_i=(t_{i1},\ldots,t_{i \ \!\!n_i})$ in the design, and $n_i$ can be different for each individual.
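To make this hierarchical construction concrete, here is a minimal R sketch (not part of the original page; all values are hypothetical) that first draws the individual parameters and then the observations, following the decomposition $\pypsi(\by,\bpsi) = \pcypsi(\by | \bpsi) \, \ppsi(\bpsi)$:

N <- 5                                # number of individuals
V_pop <- 10; omega <- 0.3             # hypothetical population parameters
V_i <- V_pop * exp(omega * rnorm(N))  # individual parameters, log-normally distributed
k <- 0.2; a <- 0.5
t_ij <- c(1, 2, 4, 8)                 # same design for every individual, for simplicity
y <- sapply(V_i, function(V) 500/V * exp(-k * t_ij) + a * rnorm(length(t_ij)))
# column i of y contains the observations of individual i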


- In this context, the model is the joint distribution of the observations and the individual parameters:

\( \pypsi(\by , \bpsi; \theta, \bc,\bt)=\pcypsi(\by | \bpsi;\bt) \, \ppsi(\bpsi;\theta,\bc) . \)

- The inputs of the model are the population parameters $\theta$, the individual covariates $\bc=(c_i , 1\leq i \leq N)$ and the measurement times

$\bt=(t_{ij} ,\ 1\leq i \leq N ,\ 1\leq j \leq n_i)$.


Remarks

Approximating the fraction $\hatpsi_k/\widehat{\rm s.e}(\hatpsi_k)$ by the normal distribution is a "good" approximation only when the number of observations $n$ is large. A better approximation should be used for small $n$. In the model $y_j = f(t_j ; \phi) + a\teps_j$, the distribution of $\hat{a}^2$ can be approximated by a chi-square distribution with $(n-d_\phi)$ degrees of freedom, where $d_\phi$ is the dimension of $\phi$. The quantiles of the normal distribution can then be replaced by those of a Student's $t$-distribution with $(n-d_\phi)$ degrees of freedom.


Examples With Equations/Code/Tables

Example 1: Poisson model with time varying intensity

\( \begin{array}{rcl} \psi_i &=& (\alpha_i,\beta_i) \\[0.3cm] \lambda(t,\psi_i) &=& \alpha_i + \beta_i\,t \\[0.3cm] \prob{y_{ij}=k} &=& \displaystyle{ \frac{\lambda(t_{ij} , \psi_i)^k}{k!} } e^{-\lambda(t_{ij} , \psi_i)} \end{array}\)

MLXTran

INPUT:
input = {alpha, beta}

EQUATION:
lambda = alpha + beta*t

DEFINITION:
y ~ poisson(lambda)

R

fmin1=function(x,y,t)
{f=predc1(t,x)
g=x[4]
e=sum( ((y-f)/g)^2 + log(g^2))
}

fmin2=function(x,y,t)
{f=predc2(t,x)
g=x[4]
e=sum( ((y-f)/g)^2 + log(g^2))
}

#--------- MLE --------------------------------

pk.nlm1=nlm(fmin1, c(0.3,6,0.2,1), y, t, hessian=TRUE)
psi1=pk.nlm1$estimate

pk.nlm2=nlm(fmin2, c(3,10,0.2,4), y, t, hessian=TRUE)
psi2=pk.nlm2$estimate

Here are the parameter estimation results:

> cat(" psi1 =",psi1,"\n\n")
psi1 = 0.3240916 6.001204 0.3239337 0.4366948

> cat(" psi2 =",psi2,"\n\n")
psi2 = 3.203111 8.999746 0.229977 0.2555242

Equations

Here are some examples of these various types of data:

* Continuous data with a normal distribution:

\(y_{ij} \sim {\cal N}\left(f(t_{ij},\psi_i),\, g^2(t_{ij},\psi_i)\right)\)

Here, $\lambda(t_{ij},\psi_i)=\left(f(t_{ij},\psi_i),\,g(t_{ij},\psi_i)\right)$, where $f(t_{ij},\psi_i)$ is the mean and $g(t_{ij},\psi_i)$ the standard deviation of $y_{ij}$.

* Categorical data with a Bernoulli distribution:

\( y_{ij} \sim {\cal B}\left(\lambda(t_{ij},\psi_i)\right) \)

Here, $\lambda(t_{ij},\psi_i)$ is the probability that $y_{ij}$ takes the value 1.
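As a minimal illustration (not part of the original page; the numerical values are hypothetical), both types of observations can be simulated in R:

f <- 5; g <- 0.5                              # hypothetical mean and standard deviation
y_cont <- rnorm(10, mean = f, sd = g)         # continuous data: y_ij ~ N(f, g^2)

lambda <- 0.3                                 # hypothetical probability
y_cat <- rbinom(10, size = 1, prob = lambda)  # categorical data: y_ij ~ B(lambda)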


\( y_{ij} \sim {\cal N}\left(f(t_{ij},\psi_i),\, g^2(t_{ij},\psi_i)\right) \)
(2.1)


\( \like(\theta ; \psi_1,\psi_2,\ldots, \psi_N) \ \ \eqdef \ \ \prod_{i=1}^{N}\ppsii(\psi_i ; c_i , \theta). \)


[Figure individual4.png: This is the caption of the figure]