\item Logistic regression: $g(v)=\log\left(\frac{v}{1-v}\right)$, for instance for Boolean (binary) responses,
\item Poisson regression: $g(v)=\log(v)$, for instance for count (discrete) data; both are illustrated in the sketch after this list.
\end{itemize}
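As an illustration of the two link functions above, here is a minimal sketch in Python using \texttt{statsmodels} (the library choice and the simulated data are assumptions for the example; the notes name no software):
\begin{verbatim}
# Fit a logistic and a Poisson GLM on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = sm.add_constant(rng.normal(size=(n, 1)))  # design matrix with intercept

# Logistic regression: g(v) = log(v / (1 - v)), Boolean response.
p = 1 / (1 + np.exp(-(X @ np.array([0.5, 1.0]))))
y_bool = rng.binomial(1, p)
logit_fit = sm.GLM(y_bool, X, family=sm.families.Binomial()).fit()

# Poisson regression: g(v) = log(v), count response.
mu = np.exp(X @ np.array([0.2, 0.8]))
y_count = rng.poisson(mu)
pois_fit = sm.GLM(y_count, X, family=sm.families.Poisson()).fit()

print(logit_fit.params, pois_fit.params)
\end{verbatim}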
\subsection{Penalized Regression}
When the number of explanatory variables is large, in particular when it exceeds the number of observations, i.e.\ $p \gg n$ (where $p$ is the number of explanatory variables and $n$ the number of observations), the matrix $\X^\top\X$ is singular and the least squares estimator is not defined.
In order to estimate the parameters, we add a penalty term to the least squares criterion.
Examples include the Lasso and the Elastic Net, sketched below.
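As an illustration, here is a minimal sketch of the Lasso and the Elastic Net in the $p \gg n$ setting, using Python's \texttt{scikit-learn} (an assumed tool, with made-up tuning values; the notes name no software):
\begin{verbatim}
# Penalized regression with more variables than observations.
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
n, p = 50, 200                      # p >> n
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 2.0                      # only 5 truly active variables
y = X @ beta + 0.1 * rng.normal(size=n)

# The L1 penalty of the Lasso sets most coefficients exactly to zero.
lasso = Lasso(alpha=0.1).fit(X, y)
# The Elastic Net mixes L1 and L2 penalties (l1_ratio sets the mix).
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

print((lasso.coef_ != 0).sum(), (enet.coef_ != 0).sum())
\end{verbatim}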
\subsection{Simple Linear Model}
\begin{align*}
\underbrace{\Y}_{n \times 1} &= \underbrace{\X}_{n \times 2}\,\underbrace{\beta}_{2 \times 1} + \underbrace{\varepsilon}_{n \times 1}.
\end{align*}
This formula comes from the orthogonal projection of $\Y$ onto the subspace spanned by the columns of $\X$ (the explanatory variables):
$\X\hat{\beta}$ is the closest point to $\Y$ in the subspace generated by $\X$.
If $H = \X(\X^\top\X)^{-1}\X^\top$ is the projection matrix onto the subspace generated by $\X$, then $H\Y$ is the projection of $\Y$ onto this subspace, which corresponds to $\X\hat{\beta}$ with $\hat{\beta} = (\X^\top\X)^{-1}\X^\top\Y$; a numerical check follows.
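As a sanity check, here is a minimal sketch in Python with \texttt{numpy} only (an assumed tool; the notes name no software) verifying that $H\Y = \X\hat{\beta}$ and that the residual is orthogonal to the columns of $\X$:
\begin{verbatim}
# Verify the projection interpretation of least squares numerically.
import numpy as np

rng = np.random.default_rng(0)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # n x 2 design
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^{-1} X'y
H = X @ np.linalg.inv(X.T @ X) @ X.T           # projection (hat) matrix

assert np.allclose(H @ y, X @ beta_hat)        # H y = X beta_hat
assert np.allclose(X.T @ (y - X @ beta_hat), np.zeros(2))  # X'r = 0
\end{verbatim}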