Step 1: Problem Overview:
The problem asks for the variance of the Maximum Likelihood Estimator (MLE) of the parameter \( \lambda \) of a Poisson distribution, based on an i.i.d. sample \( X_1, \dots, X_n \). This involves first deriving the MLE, denoted \( \hat{\lambda} \), and then computing its variance.
Step 2: Methodology:
1. Determine the MLE of \(\lambda\) by maximizing the log-likelihood; the result is the sample mean, \( \hat{\lambda} = \bar{X} \) (derived in Step 3).
2. Calculate the variance of the MLE using variance properties:
\[ \text{Var}(\hat{\lambda}) = \text{Var}(\bar{X}) = \text{Var}\left(\frac{1}{n}\sum X_i\right) = \frac{1}{n^2}\text{Var}\left(\sum X_i\right) \]
3. Since the \(X_i\) are independent, the variance of their sum is the sum of their variances: \(\text{Var}(\sum X_i) = \sum \text{Var}(X_i)\).
4. For a Poisson(\(\lambda\)) distribution, the variance of each \(X_i\) is \( \text{Var}(X_i) = \lambda \).
Step 3: Detailed Solution:
Part 1: Deriving the MLE \( \hat{\lambda} \)
The likelihood function for a sample \(x_1, \dots, x_n\) is given by:
\[ L(\lambda) = \prod_{i=1}^n \frac{e^{-\lambda}\lambda^{x_i}}{x_i!} = \frac{e^{-n\lambda}\lambda^{\sum x_i}}{\prod x_i!} \]
The log-likelihood function is:
\[ l(\lambda) = \ln(L(\lambda)) = -n\lambda + (\sum x_i)\ln(\lambda) - \ln(\prod x_i!) \]
Differentiating with respect to \(\lambda\) and setting the derivative to zero:
\[ \frac{dl}{d\lambda} = -n + \frac{\sum x_i}{\lambda} = 0 \]
\[ \frac{\sum x_i}{\lambda} = n \implies \hat{\lambda} = \frac{\sum x_i}{n} = \bar{x} \]
The second derivative, \( \frac{d^2 l}{d\lambda^2} = -\frac{\sum x_i}{\lambda^2} < 0 \), confirms that this critical point is a maximum. Therefore, the MLE is \( \hat{\lambda} = \bar{X} \).
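The derivation above can be checked numerically: a minimal sketch, assuming an illustrative Poisson sample with \(\lambda = 4\) and \(n = 500\), that maximizes the log-likelihood over a grid of candidate \(\lambda\) values and confirms the maximizer coincides with the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.poisson(lam=4.0, size=500)  # illustrative sample; lam=4.0 and n=500 are assumed values

def log_likelihood(lam, x):
    # Poisson log-likelihood, dropping the constant term -ln(prod x_i!)
    return -len(x) * lam + x.sum() * np.log(lam)

# Evaluate the log-likelihood on a fine grid and take the maximizer.
grid = np.linspace(0.1, 10.0, 100_000)
lam_hat = grid[np.argmax(log_likelihood(grid, x))]

print(lam_hat, x.mean())  # the grid maximizer should sit next to the sample mean
```

The grid maximizer agrees with \(\bar{x}\) up to the grid spacing, as the analytic solution predicts.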
Part 2: Calculating the Variance of \( \hat{\lambda} \)
We want to find \( \text{Var}(\hat{\lambda}) = \text{Var}(\bar{X}) \).
\[ \text{Var}(\bar{X}) = \text{Var}\left(\frac{1}{n} \sum_{i=1}^n X_i\right) \]
Using the property \(\text{Var}(aX) = a^2\text{Var}(X)\):
\[ \text{Var}(\bar{X}) = \frac{1}{n^2} \text{Var}\left(\sum_{i=1}^n X_i\right) \]
Since the observations are independent, the variance of the sum is the sum of the variances:
\[ \text{Var}(\bar{X}) = \frac{1}{n^2} \sum_{i=1}^n \text{Var}(X_i) \]
For a Poisson(\(\lambda\)) distribution, the variance of each observation is \(\text{Var}(X_i) = \lambda\).
\[ \text{Var}(\bar{X}) = \frac{1}{n^2} \sum_{i=1}^n \lambda = \frac{1}{n^2} (n\lambda) = \frac{\lambda}{n} \]
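The result \( \text{Var}(\bar{X}) = \lambda/n \) can be verified by simulation: a short Monte Carlo sketch, with assumed illustrative values \(\lambda = 3\) and \(n = 50\), that draws many samples of size \(n\) and compares the empirical variance of the sample means against \(\lambda/n\).

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 3.0, 50, 200_000  # assumed illustrative values

# Draw `reps` independent samples of size n and record each sample mean.
means = rng.poisson(lam=lam, size=(reps, n)).mean(axis=1)

print(means.var(), lam / n)  # empirical variance vs. the theoretical lambda/n
```

With 200,000 replications the empirical variance lands very close to the theoretical value \(3/50 = 0.06\).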
Step 4: Conclusion:
The variance of the MLE \( \hat{\lambda} \) is \( \frac{\lambda}{n} \).