Generalized linear model: Bioassay with Metropolis
Metropolis algorithm: Replicate the computations for the bioassay example of Section 3.7 in BDA3 using the Metropolis algorithm. The Metropolis algorithm is described in BDA3 Section 11.2. More information on the bioassay data can be found in Section 3.7 in BDA3.
1. Implement the Metropolis algorithm as an R function for the bioassay data. Use the
Gaussian prior
(α, β)ᵀ ∼ N(µ0, Σ0), where µ0 = (0, 10)ᵀ and Σ0 = (4, 12; 12, 100) (variances 4 and 100, covariance 12).
a) Start by implementing a function called density_ratio to compute the density ratio r in Eq. (11.1) in BDA3. Below is an example of how the function should work. You can test the function using markmyassignment.
library(bsda)
data("bioassay")
density_ratio(alpha_propose = 1.89, alpha_previous = 0.374,
              beta_propose = 24.76, beta_previous = 20.04,
              x = bioassay$x, y = bioassay$y, n = bioassay$n)
## [1] 1.305179
density_ratio(alpha_propose = 0.374, alpha_previous = 1.89,
              beta_propose = 20.04, beta_previous = 24.76,
              x = bioassay$x, y = bioassay$y, n = bioassay$n)
## [1] 0.7661784
Hint! Compute with log-densities. Reasons are explained on page 261 of BDA3.
Remember that p1/p0 = exp(log(p1) − log(p0)). For your convenience we have provided functions that will evaluate the log-likelihood for given α and β (see bioassaylp() in the bsda package). Notice that you still need to add the prior yourself; remember that the unnormalized log posterior is simply the sum of the log-likelihood and the log-prior. For evaluating the log of the Gaussian prior you can use the function dmvnorm from the package bsda.
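As a rough sketch (not the only valid implementation), density_ratio() can evaluate the unnormalized log posterior at both points and exponentiate the difference. The exact signatures should be checked from the bsda documentation; below, bioassaylp(alpha, beta, x, y, n) is assumed to return the log-likelihood, and dmvnorm() is assumed to accept log = TRUE (if the bsda version already returns the log density, drop that argument).

library(bsda)        # already loaded above; repeated here for completeness
data("bioassay")

# Parameters of the Gaussian prior given above
prior_mean <- c(0, 10)
prior_cov  <- matrix(c(4, 12, 12, 100), nrow = 2)

log_posterior <- function(alpha, beta, x, y, n) {
  # unnormalized log posterior = log-likelihood + log-prior
  bioassaylp(alpha, beta, x, y, n) +
    dmvnorm(c(alpha, beta), prior_mean, prior_cov, log = TRUE)
}

density_ratio <- function(alpha_propose, alpha_previous,
                          beta_propose, beta_previous, x, y, n) {
  # r = exp(log p(proposed | y) - log p(previous | y))
  exp(log_posterior(alpha_propose, beta_propose, x, y, n) -
      log_posterior(alpha_previous, beta_previous, x, y, n))
}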
b) Now implement a function called Metropolis_bioassay() which implements the Metropolis algorithm using density_ratio().
Hint! Use a simple (normal) proposal distribution. Example proposals are α∗ ∼ N(α_{t−1}, σ = 1) and β∗ ∼ N(β_{t−1}, σ = 5). There is no need to try to find an optimal proposal, but test some different values for the jump scale (σ). Remember to report the one you used. Efficient proposals are discussed in BDA3 pp. 295–297 (not part of the course). In real life, a pre-run could be made with an automatic adaptive control to adapt the proposal distribution.
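One possible structure for Metropolis_bioassay() is sketched below, using density_ratio() from part a). The starting point, chain length, and proposal scales are illustrative defaults, not recommended values.

Metropolis_bioassay <- function(alpha0 = 0, beta0 = 0, n_iter = 5000,
                                sigma_alpha = 1, sigma_beta = 5,
                                x = bioassay$x, y = bioassay$y, n = bioassay$n) {
  alpha <- numeric(n_iter)
  beta  <- numeric(n_iter)
  alpha[1] <- alpha0
  beta[1]  <- beta0
  for (t in 2:n_iter) {
    # draw a proposal from a normal jumping distribution centred at the previous draw
    alpha_star <- rnorm(1, mean = alpha[t - 1], sd = sigma_alpha)
    beta_star  <- rnorm(1, mean = beta[t - 1],  sd = sigma_beta)
    r <- density_ratio(alpha_star, alpha[t - 1], beta_star, beta[t - 1], x, y, n)
    # accept the proposal with probability min(r, 1), otherwise keep the previous value
    if (runif(1) < r) {
      alpha[t] <- alpha_star
      beta[t]  <- beta_star
    } else {
      alpha[t] <- alpha[t - 1]
      beta[t]  <- beta[t - 1]
    }
  }
  data.frame(alpha = alpha, beta = beta)
}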
2. Include in the report the following:
a) Describe in your own words in one paragraph the basic idea of the Metropolis algorithm (see BDA3 Section 11.2).
b) The proposal distribution (related to the jumping rule) you used. Describe briefly in words how you chose the final proposal distribution you used for the reported results.
c) The initial points of your Metropolis chains (or the explicit mechanism for generating them).
d) Report the chain length or the number of iterations for each chain. Run the simulations long enough for approximate convergence (see BDA Section 11.4).
e) Report the warm-up length (see BDA Section 11.4).
f) The number of Metropolis chains used. It is important that multiple Metropolis chains are run for evaluating convergence (see BDA Section 11.4, and lecture 5.2).
g) Plot all chains for α in a single line-plot; overlapping the chains in this way helps in visually assessing whether the chains have converged or not. (A minimal plotting sketch is given after this list.)
h) Do the same for β.
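A minimal plotting sketch for items g) and h), assuming chains is a list of data frames returned by Metropolis_bioassay(), one per chain (do the same with the beta column for h)):

# combine the alpha draws into an iterations x chains matrix and overlay the chains
alpha_draws <- sapply(chains, function(ch) ch$alpha)
matplot(alpha_draws, type = "l", lty = 1,
        xlab = "iteration", ylab = "alpha",
        main = "Trace plots of alpha, one line per chain")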
3. In complex scenarios, visual assessment is not sufficient and R̂ is a more robust indicator of convergence of the Markov chains. Use R̂ for convergence analysis. You can either use Eq. (11.4) in BDA3 or the more recent version described here. You should specify which R̂ you used. In R, the best choice is to use the function Rhat from the package rstan. Remember to remove the warm-up samples before computing R̂. Report the R̂ values for α and β separately. Report the values for the proposal distribution you finally used. (A minimal computation sketch is given after the sub-items below.)
a) Describe briefly in your own words the basic idea of R̂ and how to interpret the obtained R̂ values.
b) Tell whether you obtained a good R̂ on the first try, or whether you needed to run more iterations or modify the proposal distribution.
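A minimal sketch of the R̂ computation with rstan::Rhat, assuming chains is the list of chains from the plotting sketch above and warmup is the number of warm-up iterations you discard:

library(rstan)

# iterations x chains matrices with the warm-up draws removed
alpha_draws <- sapply(chains, function(ch) ch$alpha[-(1:warmup)])
beta_draws  <- sapply(chains, function(ch) ch$beta[-(1:warmup)])

Rhat(alpha_draws)   # R-hat for alpha
Rhat(beta_draws)    # R-hat for beta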
4. Plot the draws for α and β (scatter plot) and include this plot in your report. You can compare the results to Figure 3.3b in BDA3 to verify that your code gives sensible results. Notice though that the results in Figure 3.3b are generated from a posterior with a uniform prior, so even if your algorithm works perfectly, the results will look slightly different (although fairly similar).
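One way to produce the scatter plot, assuming draws is a data frame of post-warm-up draws with columns alpha and beta pooled over all chains:

plot(draws$alpha, draws$beta, pch = 16, cex = 0.3,
     xlab = "alpha", ylab = "beta",
     main = "Posterior draws of (alpha, beta) from the Metropolis algorithm")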