Exercise 1. If $X \in L^1(\Omega, \mathscr{F}, \mathbb{P})$, show that the class
$$\left\{\mathbb{E}[X \mid \mathscr{A}]: \mathscr{A} \text { sub-}\sigma\text{-algebra of } \mathscr{F}\right\}$$
is Uniformly Integrable.
(1) Show that, for any $\varepsilon>0$, there exists $\delta>0$ such that
$$\mathbb{E}\left[|X| 1_A\right] \leq \varepsilon, \quad \text { whenever } \mathbb{P}[A] \leq \delta .$$
(2) Show the conclusion.

To show that the class $\left\{\mathbb{E}[X \mid \mathscr{A}]: \mathscr{A} \text{ is a sub-}\sigma\text{-algebra of }\mathscr{F}\right\}$ is uniformly integrable, we need to prove two conditions:

(1) For any $\varepsilon > 0$, there exists $\delta > 0$ such that $\mathbb{E}\left[|X|1_A\right] \leq \varepsilon$ whenever $\mathbb{P}[A] \leq \delta$.

(2) $\lim_{K \to \infty} \sup_{\mathscr{A}} \mathbb{E}\left[\left|\mathbb{E}[X \mid \mathscr{A}]\right| 1_{\{|\mathbb{E}[X \mid \mathscr{A}]| > K\}}\right] = 0$, where the supremum is over all sub-$\sigma$-algebras $\mathscr{A}$ of $\mathscr{F}$.

Let’s prove each of these conditions:

(1) Since $X \in L^1(\Omega, \mathscr{F}, \mathbb{P})$, dominated convergence gives $\mathbb{E}\left[|X| 1_{\{|X| > K\}}\right] \rightarrow 0$ as $K \rightarrow \infty$: the integrands are dominated by the integrable function $|X|$ and tend to $0$ almost surely. Given $\varepsilon > 0$, choose $K > 0$ so large that $\mathbb{E}\left[|X| 1_{\{|X| > K\}}\right] \leq \varepsilon/2$, and set $\delta = \varepsilon/(2K)$.

Then, for any $A \in \mathscr{F}$ with $\mathbb{P}[A] \leq \delta$, splitting $A$ according to whether $|X| \leq K$ gives
$$\mathbb{E}\left[|X| 1_A\right] = \mathbb{E}\left[|X| 1_{A \cap \{|X| \leq K\}}\right] + \mathbb{E}\left[|X| 1_{A \cap \{|X| > K\}}\right] \leq K \mathbb{P}[A] + \mathbb{E}\left[|X| 1_{\{|X| > K\}}\right] \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon.$$
Thus condition (1) holds.
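The absolute-continuity property in part (1) can be illustrated numerically: even for a heavy-tailed (but integrable) $X$, $\mathbb{E}[|X| 1_A]$ shrinks as $\mathbb{P}[A]$ does, provided we test the worst event of each size, namely the set where $|X|$ is largest. The following is a minimal Monte Carlo sketch; the Pareto example and all parameter values are my own choices, not part of the exercise.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10**6
# Pareto with tail index 2 on [1, inf): E[X] = 2 is finite, Var(X) is infinite,
# so truncation genuinely matters here.
x = rng.pareto(2.0, size=n) + 1.0

# Worst-case event A with P[A] = p: the top p-fraction of values of X.
xs = np.sort(x)[::-1]  # descending order
probs = [0.1, 0.01, 0.001]
estimates = []
for p in probs:
    k = int(p * n)
    estimates.append(xs[:k].sum() / n)  # Monte Carlo estimate of E[X 1_A]
    print(f"P[A] = {p:6.3f}  ->  E[X 1_A] ~ {estimates[-1]:.4f}")
```

The printed estimates decrease toward $0$ as $p$ decreases, matching the claim that $\mathbb{E}[|X| 1_A] \leq \varepsilon$ once $\mathbb{P}[A]$ is small enough.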

(2) Fix $\varepsilon > 0$ and let $\delta > 0$ be as in part (1). Let $\mathscr{A}$ be any sub-$\sigma$-algebra of $\mathscr{F}$ and write $Y = \mathbb{E}[X \mid \mathscr{A}]$. By the conditional Jensen inequality, $|Y| \leq \mathbb{E}[|X| \mid \mathscr{A}]$ almost surely, so in particular $\mathbb{E}[|Y|] \leq \mathbb{E}[|X|]$.

By Markov's inequality, for any $K > 0$,
$$\mathbb{P}[|Y| > K] \leq \frac{\mathbb{E}[|Y|]}{K} \leq \frac{\mathbb{E}[|X|]}{K},$$
so $\mathbb{P}[|Y| > K] \leq \delta$ whenever $K \geq \mathbb{E}[|X|]/\delta$. Since $\{|Y| > K\} \in \mathscr{A}$, the defining property of conditional expectation gives
$$\mathbb{E}\left[|Y| 1_{\{|Y| > K\}}\right] \leq \mathbb{E}\left[\mathbb{E}[|X| \mid \mathscr{A}]\, 1_{\{|Y| > K\}}\right] = \mathbb{E}\left[|X| 1_{\{|Y| > K\}}\right] \leq \varepsilon,$$
where the last bound is part (1) applied to $A = \{|Y| > K\}$. The threshold $K \geq \mathbb{E}[|X|]/\delta$ does not depend on $\mathscr{A}$, so
$$\sup_{\mathscr{A}} \mathbb{E}\left[\left|\mathbb{E}[X \mid \mathscr{A}]\right| 1_{\{|\mathbb{E}[X \mid \mathscr{A}]| > K\}}\right] \leq \varepsilon \quad \text{for all } K \geq \mathbb{E}[|X|]/\delta,$$
which is exactly uniform integrability of the class.

Exercise 2. Customers arrive in a supermarket as a Poisson process with intensity $N$. There are $N$ aisles in the supermarket and each customer selects one of them at random, independently of the other customers. Let $X_t^N$ denote the proportion of aisles which remain empty by time $t$. Show that
$$X_t^N \rightarrow e^{-t}, \quad \text { in probability as } N \rightarrow \infty \text {. }$$

To show that $X_t^N \rightarrow e^{-t}$ in probability as $N \rightarrow \infty$, we must prove that for every $\epsilon > 0$, $\mathbb{P}(|X_t^N - e^{-t}| > \epsilon) \rightarrow 0$ as $N \rightarrow \infty$.

Let’s begin the proof:

Given a fixed time $t$, let $A_i$ be the event that aisle $i$ is empty at time $t$. Then $X_t^N = \frac{1}{N}\sum_{i=1}^{N} 1_{A_i}$, where $1_{A_i}$ is the indicator function of event $A_i$.

Now, consider the probability that a specific aisle $i$ is empty at time $t$. By the thinning (colouring) property of Poisson processes, the customers who select aisle $i$ form a Poisson process with intensity $N \cdot \frac{1}{N} = 1$, and the processes attached to distinct aisles are independent. Hence $\mathbb{P}(A_i) = e^{-t}$ exactly, for every $N$. Equivalently, if $M$ denotes the total number of arrivals by time $t$, then $M \sim \text{Poisson}(Nt)$ and, using the probability generating function of the Poisson distribution,
$$\mathbb{P}(A_i) = \mathbb{E}\left[\left(1 - \tfrac{1}{N}\right)^{M}\right] = e^{Nt\left(\left(1 - \frac{1}{N}\right) - 1\right)} = e^{-t}.$$
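As a sanity check, the quantity $\mathbb{E}[(1 - 1/N)^M]$ with $M \sim \text{Poisson}(Nt)$ can be estimated by simulation and compared with $e^{-t}$; the agreement holds for every $N$, not only in the limit. A short sketch (the values of $t$ and $N$ below are illustrative choices of mine):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
t = 2.0

errors = []
for N in [5, 50, 500]:
    # M ~ Poisson(N t): total number of arrivals by time t.
    m = rng.poisson(N * t, size=10**6)
    # P(aisle i empty) = E[(1 - 1/N)^M], estimated by Monte Carlo.
    p_empty = ((1.0 - 1.0 / N) ** m).mean()
    errors.append(abs(p_empty - math.exp(-t)))
    print(f"N = {N:4d}:  estimate = {p_empty:.5f},  e^-t = {math.exp(-t):.5f}")
```

Each estimate matches $e^{-t}$ up to Monte Carlo error, reflecting that the identity is exact for finite $N$.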

By the thinning property noted above, the indicators $1_{A_1}, \ldots, 1_{A_N}$ are i.i.d. Bernoulli random variables with success probability $e^{-t}$. Hence $\mathbb{E}[X_t^N] = e^{-t}$ and
$$\operatorname{Var}(X_t^N) = \frac{1}{N^2} \sum_{i=1}^{N} \operatorname{Var}(1_{A_i}) = \frac{e^{-t}(1 - e^{-t})}{N}.$$

By Chebyshev's inequality, for any $\epsilon > 0$,
$$\mathbb{P}(|X_t^N - e^{-t}| > \epsilon) \leq \frac{\operatorname{Var}(X_t^N)}{\epsilon^2} = \frac{e^{-t}(1 - e^{-t})}{N \epsilon^2} \rightarrow 0 \quad \text{as } N \rightarrow \infty.$$

This is precisely the statement that $X_t^N \rightarrow e^{-t}$ in probability as $N \rightarrow \infty$, which completes the proof. (Note that the classical law of large numbers does not apply directly here, since the family $\{1_{A_i}\}_{i \leq N}$ changes with $N$; the variance bound handles this triangular array.)
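The concentration of $X_t^N$ around $e^{-t}$ can be seen directly by simulating the model. Below is a minimal sketch; the function name, seed, and parameter values are my own choices, and the aisle assignment uses the fact that, conditionally on the number of arrivals, each customer picks an aisle uniformly and independently.

```python
import math
import numpy as np

t = 1.0

def empty_fraction(N, rng):
    """Simulate the supermarket up to time t and return the proportion
    of aisles that remain empty."""
    m = rng.poisson(N * t)                   # number of arrivals by time t
    aisles = rng.integers(0, N, size=m)      # each customer's uniform aisle choice
    return 1.0 - np.unique(aisles).size / N  # proportion of empty aisles

rng = np.random.default_rng(2)
for N in [10, 100, 10_000]:
    print(f"N = {N:6d}:  X_t^N = {empty_fraction(N, rng):.4f},  "
          f"e^-t = {math.exp(-t):.4f}")
```

For small $N$ the proportion fluctuates noticeably, while for large $N$ it is close to $e^{-t}$, consistent with the $O(1/N)$ variance bound.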
