Difference between revisions of "TestMarc2"
=== A model for several individuals ===

Now let us move to $N$ individuals. It is natural to suppose that each is represented by the same basic parametric model, but not necessarily the exact same parameter values. Thus, individual $i$ has parameters $\psi_i$. If we consider that individuals are randomly selected from the population, then we can treat the $\psi_i$ as if they were random vectors. As both ${\bf y}=(y_i , 1\leq i \leq N)$ and ${\bf \psi}=(\psi_i , 1\leq i \leq N)$ are random, the model is now a joint distribution: $p_{{\bf y},{\bf \psi}}$. Using basic probability, this can be written as:

{{Equation1
|equation=<math>
{\mathrm{p}}({\bf y},{\bf \psi}) = \mathrm{p}({\bf y} {{!}} {\bf \psi}) \, \mathrm{p}({\bf \psi}) .</math> }}

If $p_\psi$ is a parametric distribution that depends on a vector $\theta$ of ''population parameters'' and a set of ''individual covariates'' ${\bf c}=(c_i , 1\leq i \leq N)$, this dependence can be made explicit by writing $p_\psi(\, \cdot \,;\theta,{\bf c})$ for the pdf of ${\bf \psi}$.

Each $i$ has a potentially unique set of times $t_i=(t_{i1},\ldots,t_{i \ \!\!n_i})$ in the design, and $n_i$ can be different for each individual.
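The hierarchical construction described here can be sketched numerically: first draw each $\psi_i$ from the population distribution, then draw the observations $y_i$ given $\psi_i$. The following minimal Python simulation assumes, purely for illustration, that $\psi_i$ is scalar and Gaussian around a population value, and that observations are noisy measurements of $\psi_i$; these distributional choices and the parameter names are assumptions, not part of the text.

```python
import random

random.seed(0)

def simulate_population(N, n, theta):
    # theta = (psi_pop, omega, a): population mean, between-individual
    # standard deviation, and residual error standard deviation.
    psi_pop, omega, a = theta
    # psi_i drawn from the population model p_psi( . ; theta)
    psi = [random.gauss(psi_pop, omega) for _ in range(N)]
    # y_ij drawn from the conditional model p(y | psi_i)
    y = [[random.gauss(psi_i, a) for _ in range(n)] for psi_i in psi]
    return psi, y

psi, y = simulate_population(N=100, n=10, theta=(10.0, 2.0, 0.5))
```

With a large enough $N$, the empirical mean of the simulated $\psi_i$ is close to the population value 10.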
{{Equation1
|equation=<math> {\mathrm{p}}({\bf y} , {\bf \psi}; \theta, {\bf c},{\bf t})={\mathrm{p}}({\bf y} {{!}} {\bf \psi};{\bf t}) \, {\mathrm{p}}({\bf \psi};\theta,{\bf c}) . </math>}}

- The inputs of the model are the population parameters $\theta$, the individual covariates ${\bf c}=(c_i , 1\leq i \leq N)$ and the measurement times
:${\bf t}=(t_{ij} ,\ 1\leq i \leq N ,\ 1\leq j \leq n_i)$.
}}
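This factorization translates directly into code: the joint log-density is the sum of a conditional term and a population term. Here is a sketch in Python with deliberately simple Gaussian sub-models; the linear covariate model $\psi_i \sim N(\mu + \beta c_i, \omega^2)$ and the structural model $y_{ij} \sim N(\psi_i t_{ij}, a^2)$ are illustrative assumptions, not taken from the text.

```python
import math

def norm_logpdf(x, mu, sd):
    # log density of N(mu, sd^2) evaluated at x
    return -0.5 * math.log(2 * math.pi * sd**2) - (x - mu)**2 / (2 * sd**2)

def joint_logpdf(y, psi, theta, c, t):
    # log p(y, psi; theta, c, t) = log p(y | psi; t) + log p(psi; theta, c)
    mu, beta, omega, a = theta
    # population term: psi_i ~ N(mu + beta * c_i, omega^2)
    log_p_psi = sum(norm_logpdf(p, mu + beta * ci, omega)
                    for p, ci in zip(psi, c))
    # conditional term: y_ij | psi_i ~ N(psi_i * t_ij, a^2)
    log_p_y = sum(
        norm_logpdf(yij, p * tij, a)
        for p, yi, ti in zip(psi, y, t)
        for yij, tij in zip(yi, ti)
    )
    return log_p_y + log_p_psi

# one individual, one observation, chosen so both terms are easy to check
val = joint_logpdf([[1.0]], [1.0], (1.0, 0.0, 1.0, 1.0), [0.0], [[1.0]])
```

Each individual may have its own number of observations $n_i$: the nested loop simply iterates over whatever times $t_i$ contains.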
{{Remarks
|title=Remarks
|text= Approximating the fraction $\hat{\psi}_k/\widehat{\rm s.e}(\hat{\psi}_k)$ by the normal distribution is a "good" approximation only when the number of observations $n$ is large. A better approximation should be used for small $n$. In the model $y_j = f(t_j ; \phi) + a\varepsilon_j$, the distribution of $\hat{a}^2$ can be approximated by a chi-square distribution with $(n-d_\phi)$ degrees of freedom, where $d_\phi$ is the dimension of $\phi$. The quantiles of the normal distribution can then be replaced by those of a Student's $t$-distribution with $(n-d_\phi)$ degrees of freedom.
<!-- %$${\rm CI}(\psi_k) = [\hat{\psi}_k - \widehat{\rm s.e}(\hat{\psi}_k)q((1-\alpha)/2,n-d) , \hat{\psi}_k + \widehat{\rm s.e}(\hat{\psi}_k)q((1+\alpha)/2,n-d)]$$ -->
<!-- %where $q(\alpha,\nu)$ is the quantile of order $\alpha$ of a $t$-distribution with $\nu$ degrees of freedom. -->
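As a numerical illustration of this remark, the sketch below compares the two 95% confidence intervals. The estimate, standard error and sample sizes are made up for the example; the quantile values (1.960 for the normal distribution, 2.228 for a $t$-distribution with 10 degrees of freedom) are standard table values rather than computed here.

```python
# 95% CI for psi_k from an estimate and its standard error.
psi_hat, se_hat = 1.50, 0.20          # hypothetical estimate and s.e.
n, d_phi = 13, 3                      # hypothetical n and dim(phi)
df = n - d_phi                        # 10 degrees of freedom

z = 1.960                             # normal quantile (large-n approximation)
t10 = 2.228                           # Student t quantile, 10 df (small n)

ci_normal = (psi_hat - z * se_hat, psi_hat + z * se_hat)
ci_t = (psi_hat - t10 * se_hat, psi_hat + t10 * se_hat)
# The t-based interval is wider, reflecting the extra uncertainty
# in estimating the residual error from few observations.
```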
{{Equation1
|equation=<math>\begin{array}{ccl}
\psi_i &=& (\alpha_i,\beta_i) \\[0.3cm]
\lambda(t,\psi_i) &=& \alpha_i + \beta_i\,t \\[0.3cm]
\mathbb{P}(y_{ij}=k) &=& \displaystyle{ \frac{\lambda(t_{ij} , \psi_i)^k}{k!} } e^{-\lambda(t_{ij} , \psi_i)}
\end{array}</math>
}}
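The Poisson probabilities above can be evaluated directly; a short Python sketch (the parameter values $\alpha_i=2$, $\beta_i=0.5$ and the time point are arbitrary):

```python
import math

def intensity(t, psi):
    # lambda(t, psi_i) = alpha_i + beta_i * t
    alpha, beta = psi
    return alpha + beta * t

def poisson_pmf(k, lam):
    # P(y_ij = k) = lam^k / k! * exp(-lam)
    return lam**k / math.factorial(k) * math.exp(-lam)

psi_i = (2.0, 0.5)
lam = intensity(4.0, psi_i)                      # 2 + 0.5 * 4 = 4
probs = [poisson_pmf(k, lam) for k in range(50)]  # pmf sums to ~1
```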
Revision as of 17:33, 26 April 2013
Introduction
A model built for real-world applications can involve various types of variable, such as measurements, individual and population parameters, covariates, design, etc. The model allows us to represent relationships between these variables.
If we consider things from a probabilistic point of view, some of the variables will be random, so the model becomes a probabilistic one, representing the joint distribution of these random variables.
Defining a model therefore means defining a joint distribution. The hierarchical structure of the model then allows it to be decomposed into submodels, i.e., the joint distribution can be written as a product of conditional distributions.
Tasks such as estimation, model selection, simulation and optimization can then be expressed as specific ways of using this probability distribution.
We will illustrate this approach starting with a very simple example that we will gradually make more sophisticated. Then we will see in various situations what can be defined as the model and what its inputs are.
An illustrative example
A model for the observations of a single individual
Let $y=(y_j, 1\leq j \leq n)$ be a vector of observations obtained at times $t=(t_j, 1\leq j \leq n)$. We consider that the $y_j$ are random variables and we denote by $p_y$ the distribution (or pdf) of $y$. If we assume a parametric model, then there exists a vector of parameters $\psi$ that completely defines the distribution of $y$.
We can then explicitly represent this dependency with respect to $\psi$ by writing $p_y( \, \cdot \, ; \psi)$ for the pdf of $y$.
If we wish to be even more precise, we can even make it clear that this distribution is defined for a given design, i.e., a given vector of times $ t$, and write $ p_y(\, \cdot \, ; \psi, t)$ instead.
By convention, the variables which are before the symbol ";" are random variables. Those that are after the ";" are non-random parameters or variables. When there is no risk of confusion, the non-random terms can be left out of the notation.
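To make this notation concrete, here is a minimal Python sketch of such a parametric model, assuming (for illustration only) an exponential structural model with additive Gaussian noise, $y_j = f(t_j;\psi) + a\varepsilon_j$; the function names and parameter values are hypothetical.

```python
import math

def f(t, psi):
    # Hypothetical structural model: psi = (A, k), f(t; psi) = A * exp(-k * t)
    A, k = psi
    return A * math.exp(-k * t)

def log_p_y(y, psi, t, a):
    # log p_y(y; psi, t) for y_j = f(t_j; psi) + a * eps_j, eps_j ~ N(0, 1),
    # i.e. each y_j is independent N(f(t_j; psi), a^2)
    return sum(
        -0.5 * math.log(2 * math.pi * a**2) - (yj - f(tj, psi))**2 / (2 * a**2)
        for yj, tj in zip(y, t)
    )

t = [0.5, 1.0, 2.0]
psi_true = (10.0, 0.3)
y = [f(tj, psi_true) for tj in t]     # noise-free data, just for illustration
# The log-pdf is larger at the true psi than at a wrong value:
print(log_p_y(y, psi_true, t, a=1.0), log_p_y(y, (5.0, 0.3), t, a=1.0))
```

Writing the pdf as an explicit function of $(\psi, t)$ in this way is exactly what the notation $p_y(\, \cdot \, ; \psi, t)$ expresses.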