
Bayesian Models

Bayesian statistics uses the concept of a prior belief about the parameter $ \theta$ of interest. The uncertainty about $ \theta$ is then updated according to the data $ \mathcal{D}$. Here the Bayesian approach interprets $ \theta$ as a random variable, and the prior belief is given in the form of a probability density $ \pi(\theta)$ of $ \theta$. In a Bayesian model we investigate the posterior density $ \pi(\theta\:\vert\:\mathcal{D})$ of $ \theta$.
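As a minimal illustrative sketch (not from the text above), consider a hypothetical coin-flip model: a Beta$(a, b)$ prior on the coin's bias $ \theta$ and Bernoulli data $ \mathcal{D}$. The Beta prior is conjugate to the Bernoulli likelihood, so the posterior is again a Beta density with updated parameters.

```python
# Hypothetical example: Beta(a, b) prior on theta, 0/1 (Bernoulli) data.
# By conjugacy, the posterior pi(theta | D) is Beta(a + #heads, b + #tails).

def beta_bernoulli_posterior(a, b, data):
    """Return the Beta posterior parameters after observing 0/1 data."""
    heads = sum(data)
    tails = len(data) - heads
    return a + heads, b + tails

# Uniform prior Beta(1, 1); observe 7 heads in 10 flips.
a_post, b_post = beta_bernoulli_posterior(1, 1, [1] * 7 + [0] * 3)
print(a_post, b_post)              # posterior is Beta(8, 4)
print(a_post / (a_post + b_post))  # posterior mean of theta: 8/12
```

The prior belief $ \pi(\theta)$ and the data enter symmetrically through Bayes rule; here the update reduces to simple counting.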

Conditional probability. Let $ A$ and $ B$ be two events where $ P(B) \neq 0$. Then the conditional probability of $ A$ given $ B$ can be defined as

$\displaystyle P(A\vert B) = \frac{P(A\cap B)}{P(B)}.
$

The idea of ``conditioning'' is that ``if we know that $ B$ has occurred, then the sample space effectively becomes $ B$.'' It is often the case that one can use $ P(A\vert B)$ to find

$\displaystyle P(A\cap B) = P(A\vert B)P(B).
$
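The definition and the multiplication rule above can be checked by enumerating a small sample space. The following sketch (the die example is an assumption, not from the text) uses exact fractions:

```python
from fractions import Fraction

# Roll one fair die: Omega = {1,...,6}.  A = "roll is even", B = "roll > 3".
omega = range(1, 7)
A = {n for n in omega if n % 2 == 0}
B = {n for n in omega if n > 3}

def prob(event):
    """Probability of an event under the uniform distribution on Omega."""
    return Fraction(len(event), 6)

# P(A | B) = P(A n B) / P(B)
p_A_given_B = prob(A & B) / prob(B)
print(p_A_given_B)  # 2/3: of {4,5,6}, the even outcomes are {4,6}

# Multiplication rule: P(A n B) = P(A | B) P(B)
assert prob(A & B) == p_A_given_B * prob(B)
```

Conditioning on $ B$ shrinks the sample space to $ B = \{4, 5, 6\}$, and the conditional probability is computed within that smaller space.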

Law of total probability. Let $ A$ and $ B$ be two events. Then we can write the probability $ P(A)$ as

$\displaystyle P(A) = P(A \cap B) + P(A \cap B^c)
= P(A \vert B) P(B) + P(A \vert B^c) P(B^c).
$

In general, suppose that we have a sequence $ B_1, B_2, \ldots, B_n$ of mutually disjoint events satisfying $ \bigcup_{i=1}^n B_i = \Omega$, where ``mutual disjointness'' means that $ B_i \cap B_j = \emptyset$ for all $ i \neq j$. (The events $ B_1, B_2, \ldots, B_n$ are called ``a partition of $ \Omega$.'') Then for any event $ A$ we have

$\displaystyle P(A) = \sum_{i = 1}^n P(A \cap B_i)
= \sum_{i = 1}^n P(A\vert B_i)P(B_i).
$
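A standard way to use the law of total probability is a two-stage experiment in which the partition is given by the first stage. The urn setup below is a hypothetical illustration, not taken from the text:

```python
from fractions import Fraction

# Flip a fair coin to choose one of two urns (the partition B1, B2 of Omega).
# Urn 1 holds 3 red, 1 blue; urn 2 holds 1 red, 3 blue.  A = "draw is red".
p_B = [Fraction(1, 2), Fraction(1, 2)]          # P(B1), P(B2)
p_A_given_B = [Fraction(3, 4), Fraction(1, 4)]  # P(A | B1), P(A | B2)

# Law of total probability: P(A) = sum_i P(A | B_i) P(B_i)
p_A = sum(pa * pb for pa, pb in zip(p_A_given_B, p_B))
print(p_A)  # 1/2
```

The partition requirement matters: the events $ B_i$ must be mutually disjoint and cover $ \Omega$, so every way that $ A$ can occur is counted exactly once.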

Bayes rule. Let $ A$ and $ B_1, \ldots, B_n$ be events such that the $ B_i$'s are mutually disjoint, $ \bigcup_{i=1}^n B_i = \Omega$ and $ P(B_i) > 0$ for all $ i$. Then

$\displaystyle P(B_j\vert A) = \frac{P(A \cap B_j)}{P(A)}
= \frac{P(A\vert B_j)P(B_j)}{\sum_{i=1}^n P(A\vert B_i)P(B_i)}
$

is called Bayes rule.
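A classical application of Bayes rule is reasoning about a diagnostic test. The numbers below are hypothetical, chosen only to illustrate the computation:

```python
from fractions import Fraction

# Hypothetical diagnostic test.  Partition: B1 = "has disease", B2 = "healthy".
# A = "test is positive".
p_B = [Fraction(1, 100), Fraction(99, 100)]          # P(B1), P(B2)
p_A_given_B = [Fraction(95, 100), Fraction(5, 100)]  # sensitivity, false-positive rate

# Bayes rule: P(B1 | A) = P(A | B1) P(B1) / sum_i P(A | B_i) P(B_i)
numerator = p_A_given_B[0] * p_B[0]
p_A = sum(pa * pb for pa, pb in zip(p_A_given_B, p_B))
p_disease_given_positive = numerator / p_A
print(p_disease_given_positive)  # 19/118, about 0.16
```

Even with a fairly accurate test, the posterior probability of disease stays modest because the prior $ P(B_1)$ is small; Bayes rule makes the role of the prior explicit.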

Concept of Independence. Intuitively we would like to say that $ A$ and $ B$ are independent if knowing about one event gives us no information about the other. That is, $ P(A\vert B) = P(A)$ and $ P(B\vert A) = P(B)$. We say $ A$ and $ B$ are independent if

$\displaystyle P(A\cap B) = P(A)P(B).
$

This definition is symmetric in $ A$ and $ B$, and allows $ P(A)$ and/or $ P(B)$ to be 0. Furthermore, a collection of events $ A_1, A_2, \ldots, A_n$ is said to be mutually independent if it satisfies

$\displaystyle P(A_{i_1}\cap\cdots\cap A_{i_m}) = P(A_{i_1}) P(A_{i_2}) \cdots P(A_{i_m})
$

for any subcollection $ A_{i_1}, A_{i_2}, \ldots, A_{i_m}$.
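The product definition can be verified directly on a small sample space. In the sketch below (the die events are an assumption for illustration), $ A$ and $ B$ satisfy the product rule while $ A$ and $ C$ do not:

```python
from fractions import Fraction

# Roll a fair die.  A = "even" and B = "at most 4" satisfy the product rule,
# so they are independent even though both refer to the same roll.
omega = range(1, 7)
A = {n for n in omega if n % 2 == 0}
B = {n for n in omega if n <= 4}

def prob(event):
    """Probability of an event under the uniform distribution on Omega."""
    return Fraction(len(event), 6)

print(prob(A & B) == prob(A) * prob(B))  # True: 1/3 == 1/2 * 2/3

# C = "at most 3" is NOT independent of A: P(A n C) = 1/6 != 1/2 * 1/2
C = {n for n in omega if n <= 3}
print(prob(A & C) == prob(A) * prob(C))  # False
```

Note that independence is a property of the probabilities, not of the events referring to different mechanisms: $ A$ and $ B$ concern the same roll yet are independent.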

© TTU Mathematics