
2.1 Example Sheet 2: questions

If $Y_i$ are independent Poisson, with means $\exp\left(\beta^T x_i\right), 1 \leq i \leq n$, how would you evaluate $\hat{\beta}$ and its asymptotic covariance matrix? What does ‘scale parameter taken as 1.000’ mean in the corresponding glm output?
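One way $\hat{\beta}$ can be evaluated is by Fisher scoring, which for the canonical log link coincides with Newton–Raphson; the asymptotic covariance matrix is then the inverse Fisher information at $\hat{\beta}$. The sketch below (the function name `fit_poisson_glm`, the single covariate, and any data used with it are illustrative assumptions, not part of the question) iterates $\beta \leftarrow \beta + I(\beta)^{-1}U(\beta)$ with score $U_j(\beta)=\sum_i (y_i-\mu_i)x_{i,j}$ and information $I_{jk}(\beta)=\sum_i \mu_i x_{i,j}x_{i,k}$:

```python
import math

def fit_poisson_glm(x, y, iters=25):
    """Fisher scoring for Y_i ~ Poisson(mu_i), log(mu_i) = b0 + b1*x_i.

    For the canonical log link Fisher scoring equals Newton-Raphson.
    Returns the MLE (b0, b1) and the estimated asymptotic covariance,
    i.e. the inverse Fisher information evaluated at the MLE.
    """
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # score: U_j = sum_i (y_i - mu_i) x_{i,j}, with x_{i,0} = 1
        u0 = sum(yi - mi for yi, mi in zip(y, mu))
        u1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        # Fisher information: I_{jk} = sum_i mu_i x_{i,j} x_{i,k}
        i00 = sum(mu)
        i01 = sum(mi * xi for mi, xi in zip(mu, x))
        i11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = i00 * i11 - i01 * i01
        # scoring step: beta <- beta + I^{-1} U  (2x2 system solved directly)
        b0 += (i11 * u0 - i01 * u1) / det
        b1 += (i00 * u1 - i01 * u0) / det
    # asymptotic covariance: inverse of the information at the MLE
    mu = [math.exp(b0 + b1 * xi) for xi in x]
    i00 = sum(mu)
    i01 = sum(mi * xi for mi, xi in zip(mu, x))
    i11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
    det = i00 * i11 - i01 * i01
    cov = [[i11 / det, -i01 / det], [-i01 / det, i00 / det]]
    return (b0, b1), cov
```

On the second part of the question: for the Poisson family the dispersion is fixed at $\phi=1$, so ‘scale parameter taken as 1.000’ indicates that glm has used the variance function $\operatorname{Var}(Y_i)=\mu_i$ as is, with no estimated dispersion factor multiplying the standard errors.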

2* If the loglikelihood can be written
$$\ell(\beta)=\left(\beta^T t-\psi(\beta)\right) / \phi \text { where } \phi>0$$
and $t=t(\mathbf{y})$ is a $p$-dimensional vector, show that the covariance matrix of $t(\mathbf{y})$ is
$$\phi\left(\frac{\partial^2 \psi}{\partial \beta\, \partial \beta^T}\right)$$
and hence that $\ell(\beta)$ is a strictly concave function of $\beta$. What is the practical application of this result in estimation of $\beta$? Illustrate your answer for either the binomial or the Poisson distribution.
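For concreteness, a sketch of the Poisson illustration (assuming independent observations, the canonical log link, and a design matrix $X$ of full column rank): up to an additive constant not involving $\beta$,
$$\ell(\beta)=\sum_{i=1}^n\left(y_i x_i^T \beta-e^{x_i^T \beta}\right)=\beta^T t-\psi(\beta), \qquad t=X^T \mathbf{y}, \quad \psi(\beta)=\sum_{i=1}^n e^{x_i^T \beta}, \quad \phi=1,$$
so that
$$\frac{\partial^2 \psi}{\partial \beta\, \partial \beta^T}=\sum_{i=1}^n e^{x_i^T \beta}\, x_i x_i^T=X^T W X, \qquad W=\operatorname{diag}\left(\mu_1, \ldots, \mu_n\right),$$
which is positive definite since each $\mu_i>0$ and $X$ has full column rank. Hence $\ell$ is strictly concave, the likelihood equations have at most one root, and Newton–Raphson (here identical to Fisher scoring) converges to the unique maximum likelihood estimate whenever it exists: this is the practical payoff for estimation of $\beta$.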

Exercise 4.6
We consider the GLM
$$Y_i \stackrel{\text { ind }}{\sim} \text { Poisson, }$$
where
$$g\left(\mu_i\right)=g\left(\mathbb{E}\left(Y_i\right)\right)=\beta_0+\beta_1 x_i$$
$\mu_i=\mathbb{E} Y_i$, with $g(\cdot)=\log (\cdot)$, i.e. the canonical link, for $i \in\left[n_A+n_B\right]$, and $x_i=0$ for $i=1, \ldots, n_A$ and $x_i=1$ for $i=n_A+1, \ldots, n_A+n_B$. We shall show that the likelihood equations (that give us the maximum likelihood estimators for $\beta_0$ and $\beta_1$ ) imply that
$$\begin{aligned} & \hat{\mu}_i=\exp\left(\hat{\beta}_0+\hat{\beta}_1 x_i\right)=\bar{Y}_A:=\frac{1}{n_A} \sum_{i=1}^{n_A} Y_i, \quad \text{for } i \leq n_A, \text{ and} \\ & \hat{\mu}_i=\exp\left(\hat{\beta}_0+\hat{\beta}_1 x_i\right)=\bar{Y}_B:=\frac{1}{n_B} \sum_{i=n_A+1}^{n_A+n_B} Y_i, \quad \text{for } i>n_A . \end{aligned}$$
Recall the likelihood equations (4.10), which say that the MLE $\hat{\beta}$ of $\beta$ satisfies
$$\begin{aligned} \frac{\partial \ell}{\partial \beta_j}(\hat{\beta}) &=\sum_{i=1}^n \frac{Y_i-\mu_i(\hat{\beta})}{\operatorname{Var}\left(Y_i\right)} \frac{\partial \mu_i}{\partial \eta_i}(\hat{\beta})\, x_{i, j} \\ &=0 \end{aligned}$$
for $j=0,1$. Note that, in the notation of the book, $x_{i, 0}=1$ and $x_{i, 1}=x_i$, where $x_i$ is as above.
We have previously seen that, for the Poisson distribution, $\mathbb{E}\left(Y_i\right)=\operatorname{Var}\left(Y_i\right)=\mu_i$. Furthermore, since $g(\cdot)=\log (\cdot)$, we have that $\mu_i=g^{-1}\left(\eta_i\right)=\exp \left(\eta_i\right)$. Hence, $\frac{\partial \mu_i}{\partial \eta_i}=\exp \left(\eta_i\right)=\mu_i$. Inserting into the likelihood equations, we get that the fitted values $\hat{\mu}_i=\mu_i(\hat{\beta})$ satisfy
$$\begin{aligned} \frac{\partial \ell}{\partial \beta_0}(\hat{\beta}) &=\sum_{i=1}^{n_A+n_B} \frac{Y_i-\hat{\mu}_i}{\operatorname{Var}\left(Y_i\right)} \frac{\partial \mu_i}{\partial \eta_i}(\hat{\beta}) \\ &=\sum_{i=1}^{n_A+n_B}\left(Y_i-\hat{\mu}_i\right) \\ &=0 . \end{aligned}$$

Hence, since the fitted values must be the same for all $i \leq n_A$ and for all $i>n_A$ (the values of the $x$'s are the same within each group), we obtain
$$n_A \hat{\mu}_A+n_B \hat{\mu}_B=n_A \bar{Y}_A+n_B \bar{Y}_B,$$
where $\hat{\mu}_A$ is the common value of $\hat{\mu}_i$ for $i \leq n_A$, $\hat{\mu}_B$ is the common value of $\hat{\mu}_i$ for $i>n_A$, $\bar{Y}_A:=\frac{1}{n_A} \sum_{i=1}^{n_A} Y_i$, and $\bar{Y}_B:=\frac{1}{n_B} \sum_{i=n_A+1}^{n_A+n_B} Y_i$.
Similarly, we obtain
$$\begin{aligned} \frac{\partial \ell}{\partial \beta_1}(\hat{\beta}) &=\sum_{i=1}^{n_A+n_B} \frac{Y_i-\hat{\mu}_i}{\operatorname{Var}\left(Y_i\right)} \frac{\partial \mu_i}{\partial \eta_i}(\hat{\beta})\, x_i \\ &=\sum_{i=1}^{n_A+n_B}\left(Y_i-\hat{\mu}_i\right) x_i \\ &=\sum_{i=n_A+1}^{n_A+n_B}\left(Y_i-\hat{\mu}_i\right) \\ &=0 . \end{aligned}$$
This equation implies that $\hat{\mu}_B=\bar{Y}_B$. Combining it with the previous equation, we obtain that $\hat{\mu}_A=\bar{Y}_A$ as well.
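As a quick numerical sanity check of this result (the counts for the two groups below are made up for illustration, not taken from the exercise), one can plug the group means into the log link and verify that both likelihood equations vanish:

```python
import math

# Hypothetical counts for the two groups (illustrative only)
y_A = [2, 4, 3]   # observations with x_i = 0
y_B = [7, 5, 9]   # observations with x_i = 1
ybar_A = sum(y_A) / len(y_A)
ybar_B = sum(y_B) / len(y_B)

# MLEs implied by mu_A = Ybar_A and mu_B = Ybar_B under the log link
b0 = math.log(ybar_A)           # exp(b0)      = Ybar_A
b1 = math.log(ybar_B) - b0      # exp(b0 + b1) = Ybar_B

# Check both likelihood equations: sum_i (y_i - mu_i) x_{i,j} = 0
y = y_A + y_B
x = [0] * len(y_A) + [1] * len(y_B)
mu = [math.exp(b0 + b1 * xi) for xi in x]
score0 = sum(yi - mi for yi, mi in zip(y, mu))                # j = 0
score1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))  # j = 1
```

Both `score0` and `score1` come out as zero (up to floating-point error), confirming that the group means solve the likelihood equations.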
