## Grid approximation of the posterior¶

This notebook will walk you through calculating an approximate posterior distribution for the unemployment rate in Newfoundland and Labrador.

Start by simulating the entire population...
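The code cell for this step isn't shown here, but a minimal sketch of one way to simulate it follows. The population size `N`, the "true" unemployment rate `true_p`, and the seed are all invented for illustration:

```r
set.seed(1)
N <- 500000       # hypothetical population size
true_p <- 0.12    # hypothetical "true" unemployment rate
# pop is a vector of N individuals: 1 = unemployed, 0 = employed
pop <- rbinom(N, size = 1, prob = true_p)
```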

Take a sample of 50 individuals.
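One way this sampling step might look, assuming `pop` is the 0/1 population vector from the simulation (recreated here with hypothetical values so the cell runs on its own):

```r
set.seed(2)
pop <- rbinom(500000, size = 1, prob = 0.12)  # hypothetical population
ssize <- 50                    # sample size
samp <- sample(pop, size = ssize)
n_unemp <- sum(samp)           # number of unemployed individuals in the sample
```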

Recall that our goal is to estimate the relative probability that the unemployment rate is equal to $p$ for all possible values of $p$. For now, we will just consider 11 possible values $p\in\{0.0, 0.1, …, 1.0\}$.
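A sketch of how the grid might be built (the names `n_grid` and `grid` follow the ones used later in this notebook):

```r
n_grid <- 11
grid <- seq(0, 1, length.out = n_grid)  # 0.0, 0.1, ..., 1.0
```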

Remember: to calculate the (proportional) posterior $Pr(p|\mathtt{ssize},\mathtt{samp})$ for a particular point on the grid (e.g. $p=0.3$), we simply multiply the likelihood of the data at $p=0.3$ by our prior probability that $p=0.3$.

Start with the prior, which for now we will just set as uniform. This vector just says that we think each of the 11 possible values of $p$ is equally likely—i.e. that we don't have any a priori belief about unemployment.
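A minimal sketch of a uniform prior over the grid:

```r
n_grid <- 11
# Equal weight on every grid value; the constant doesn't matter
# because we normalize the posterior at the end anyway.
prior <- rep(1, n_grid)
```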

Now calculate the likelihood. As we saw in the slides, the likelihood in this situation is the Binomial probability mass function (PMF). R has a function dbinom() that calculates the binomial PMF.
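This step might look as follows; the sample count `n_unemp = 7` is a hypothetical value standing in for whatever the sample above produced:

```r
ssize <- 50
n_unemp <- 7                               # hypothetical observed count
grid <- seq(0, 1, length.out = 11)
# Probability of seeing n_unemp "successes" in ssize draws,
# evaluated at every candidate value of p on the grid:
likelihood <- dbinom(n_unemp, size = ssize, prob = grid)
```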

### Side note: vector calculations in R¶

Note that we did something a little tricky with R there. Normally, dbinom is thought of as taking a fixed value for size and prob, and calculating different values of the PMF for different success counts. The support of the binomial distribution is all counts between zero and the sample size.

But R is good with vectors. When we asked it for dbinom(n_unemp,size=ssize,prob=grid) the first two arguments (n_unemp and ssize) were both single numbers, while the third argument (grid) was a vector as big as our grid. What R did in this case was to construct n_grid different binomial distributions, one for each point on our grid, and calculate the probability of seeing n_unemp successes for that particular distribution.
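One way to convince yourself of this (with hypothetical values of 7 successes out of 50): the vectorized call gives the same answer as an explicit loop over the grid, one binomial distribution per grid point.

```r
grid <- seq(0, 1, length.out = 11)
vectorized <- dbinom(7, size = 50, prob = grid)
# The same thing, one distribution at a time:
looped <- sapply(grid, function(p) dbinom(7, size = 50, prob = p))
all.equal(vectorized, looped)
```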

We can visualize this by plotting the binomial distribution for each value of $p$ along our grid. The large dot is the sample we observe.
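A rough base-R sketch of such a plot (again assuming a hypothetical sample of 7 unemployed out of 50):

```r
grid <- seq(0, 1, length.out = 11)
ssize <- 50
n_unemp <- 7   # hypothetical observed count
plot(NULL, xlim = c(0, ssize), ylim = c(0, 0.25),
     xlab = "number unemployed in sample", ylab = "probability")
# One binomial PMF per grid value of p:
for (p in grid) {
  lines(0:ssize, dbinom(0:ssize, size = ssize, prob = p), col = "grey")
}
# Large dots mark the probability of the observed count under each distribution:
points(rep(n_unemp, length(grid)),
       dbinom(n_unemp, size = ssize, prob = grid), pch = 19, cex = 1.5)
```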

### Back to calculating the posterior¶

Now that we have a prior and likelihood, we can just multiply them together and normalize to get our posterior grid approximation:
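A sketch of that multiply-and-normalize step, with the same hypothetical data as before (7 unemployed out of 50):

```r
grid <- seq(0, 1, length.out = 11)
prior <- rep(1, 11)
likelihood <- dbinom(7, size = 50, prob = grid)
unnorm <- prior * likelihood
posterior <- unnorm / sum(unnorm)   # normalize so the grid sums to 1
```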

## Increasing the size of our grid (and quality of the approximation)¶

That looks accurate enough, but pretty sparse. Let's repeat all that but with n_grid=100.
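The same recipe on the denser grid might look like this (still with the hypothetical 7-out-of-50 sample):

```r
n_grid <- 100
grid <- seq(0, 1, length.out = n_grid)
prior <- rep(1, n_grid)
likelihood <- dbinom(7, size = 50, prob = grid)
posterior <- prior * likelihood / sum(prior * likelihood)
grid[which.max(posterior)]   # posterior mode, close to 7/50 = 0.14
```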

## Playing with the prior¶

How does this change if we alter the prior probability for $p$?

First, let's pretend that we are absolutely certain that the unemployment rate is no higher than 20%, and we want to enforce that by building it into our model.
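One way to encode that hard cutoff is to give zero prior weight to every grid value above 0.2:

```r
grid <- seq(0, 1, length.out = 100)
prior <- ifelse(grid <= 0.2, 1, 0)   # zero prior mass above 20%
```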

Apply that extreme prior to our data:
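A sketch of the full calculation with that truncated prior (hypothetical 7-out-of-50 sample):

```r
grid <- seq(0, 1, length.out = 100)
prior <- ifelse(grid <= 0.2, 1, 0)               # hard cutoff at 20%
likelihood <- dbinom(7, size = 50, prob = grid)
posterior <- prior * likelihood / sum(prior * likelihood)
# No data can ever move the posterior outside the prior's support:
sum(posterior[grid > 0.2])   # exactly 0
```

Note the lesson here: wherever the prior is exactly zero, the posterior is exactly zero too, no matter what the data say.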

As one last example of an extreme prior, pretend we believe very deeply in our hearts that the true unemployment rate must be somewhere close to 100%. As very bad researchers, we could encode that belief into our model using a prior drawn from the beta distribution. (With less extreme parameters, the beta distribution is a very common prior for models like this.)
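A sketch of such a prior using dbeta(); the shape parameters here are hypothetical, chosen only to pile nearly all the prior mass close to $p=1$:

```r
grid <- seq(0, 1, length.out = 100)
# shape1 >> shape2 concentrates the beta density near 1 (hypothetical choice):
prior <- dbeta(grid, shape1 = 30, shape2 = 1)
likelihood <- dbinom(7, size = 50, prob = grid)   # hypothetical 7-of-50 sample
posterior <- prior * likelihood / sum(prior * likelihood)
```

Unlike the hard cutoff, this prior is never exactly zero, so with enough data the likelihood will eventually pull the posterior back toward the truth.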