Problems and solution comments

These are comments on the exam. You were expected to give answers with more detail than what is provided below, but this indicates the type of answer I wanted. The problem set is provided here.

Problem 1a) Value ≈ 3.4 with N ≈ 4e5; code.

Problem 1b) The point I wanted you to make is that VI generally gives a good match to the mean, but falls short in terms of the uncertainty. This is well illustrated by the distribution in the example.

Problem 1c) For STK9051 only.   

Problem 2a) Genetic algorithm; key words that I wanted you to detail: population, generation, selection (linked to the objective function), crossover, mutation.

Problem 2b) Here is an example code.
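The example code itself is not reproduced on this page. A minimal sketch of a genetic algorithm is given below; the toy "OneMax" objective, the bit-string encoding, and all function names and parameter values are my own assumptions, chosen only to illustrate the key words from 2a) (population, generation, selection, crossover, mutation).

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_algorithm(fitness, n_bits=20, pop_size=50, n_gen=100,
                      p_cross=0.8, p_mut=0.01):
    """Maximise `fitness` over bit strings of length n_bits."""
    # population: random bit strings, one row per individual
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(n_gen):  # generations
        f = np.array([fitness(ind) for ind in pop], dtype=float)
        # selection: fitness-proportional ("roulette wheel"), which is
        # where the link to the objective function enters
        parents = pop[rng.choice(pop_size, size=pop_size, p=f / f.sum())]
        # crossover: single-point, applied to consecutive pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                c = int(rng.integers(1, n_bits))
                children[i, c:] = parents[i + 1, c:]
                children[i + 1, c:] = parents[i, c:]
        # mutation: flip each bit with a small probability
        flip = rng.random(children.shape) < p_mut
        pop = np.where(flip, 1 - children, children)
    f = np.array([fitness(ind) for ind in pop])
    return pop[f.argmax()], f.max()

# toy objective: count of ones (+1 keeps all selection weights positive)
best, best_f = genetic_algorithm(lambda x: x.sum() + 1)
```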

Problem 2c) The main problem is that the dimension of the parameter vector changes between the different models. You can solve this using reversible jump MCMC (RJMCMC).

3a) \(l(\mu,\nu|{\bf C},{\bf y})=\sum_{i=1}^n \left\{I(C_i=0)\log[\nu\,\phi(y_i;\mu,1)]+I(C_i=1)\log[(1-\nu)\,\phi(y_i;-\mu,1)]\right\}\)

3b) \(\nu^{t+1}=\frac{1}{n}\sum_{i=1}^n P(C_i=0|y_i,\nu^t,\mu^t)\) and \(\mu^{t+1}=\frac{1}{n}\sum_{i=1}^n[2P(C_i=0|y_i,\nu^t,\mu^t)-1]\,y_i\)

3c) Code.
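The code is not included on this page. A minimal sketch of the EM iteration, directly implementing the E-step posterior and the update formulas from 3b), could look as follows; the simulated data and all names are my own assumptions, not the exam data.

```python
import numpy as np

def norm_pdf(y, mean):
    # normal density with the given mean and variance 1
    return np.exp(-0.5 * (y - mean) ** 2) / np.sqrt(2 * np.pi)

def em(y, nu=0.5, mu=1.0, n_iter=200):
    """EM for the two-class model: y_i ~ nu*N(mu,1) + (1-nu)*N(-mu,1)."""
    for _ in range(n_iter):
        # E-step: posterior probability that C_i = 0
        p0 = nu * norm_pdf(y, mu)
        p1 = (1 - nu) * norm_pdf(y, -mu)
        w = p0 / (p0 + p1)
        # M-step: the closed-form updates from 3b)
        nu = w.mean()
        mu = np.mean((2 * w - 1) * y)
    return nu, mu

# simulated example data (not the exam data)
rng = np.random.default_rng(0)
n = 5000
C = rng.random(n) >= 0.7                 # true nu = 0.7
y = rng.normal(np.where(C, -2.0, 2.0))   # true mu = 2
nu_hat, mu_hat = em(y)
```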

3d) The missing information leads to larger uncertainty, and correlation between parameter estimates. 

4a) The posterior \(P(C_i=0|y_i,\nu,\mu)\) is as given for the EM algorithm above. When the class labels are given, the likelihood of \(\nu\) is just a Bernoulli trial (giving a beta full conditional), and the likelihood of \(\mu\) is just a standard normal where the sign of the data is changed according to the group it belongs to (giving a normal full conditional).

4b) Code
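The code is not included on this page. A minimal sketch of the Gibbs sampler cycling through the three full conditionals described in 4a) could look as follows, assuming a uniform prior on \(\nu\) and a flat (improper) prior on \(\mu\); the simulated data and all names are my own assumptions.

```python
import numpy as np

def gibbs(y, n_samples=2000, nu=0.5, mu=1.0, rng=None):
    """Gibbs sampler: uniform prior on nu, flat prior on mu."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(y)
    draws = np.empty((n_samples, 2))
    for t in range(n_samples):
        # 1) C_i | y, nu, mu: Bernoulli with the EM posterior probability
        p0 = nu * np.exp(-0.5 * (y - mu) ** 2)
        p1 = (1 - nu) * np.exp(-0.5 * (y + mu) ** 2)
        C0 = rng.random(n) < p0 / (p0 + p1)   # True where C_i = 0
        n0 = C0.sum()
        # 2) nu | C ~ Beta(n0 + 1, n1 + 1) under the uniform prior
        nu = rng.beta(n0 + 1, n - n0 + 1)
        # 3) mu | C, y ~ N(sum(s_i y_i)/n, 1/n), with s_i = +/-1 by class
        s = np.where(C0, 1.0, -1.0)
        mu = rng.normal((s * y).sum() / n, 1.0 / np.sqrt(n))
        draws[t] = nu, mu
    return draws

# simulated example data (not the exam data)
rng = np.random.default_rng(1)
n = 2000
C = rng.random(n) >= 0.7                 # true nu = 0.7
y = rng.normal(np.where(C, -2.0, 2.0))   # true mu = 2
draws = gibbs(y, rng=rng)
```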

4c) 100 iterations are sufficient for burn-in. The effective sample size is about 120 and 70 for \(\mu\) and \(\nu\), respectively.
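The effective sample size can be estimated from the autocorrelation of the chain. A minimal sketch is given below; the truncation rule (cut the sum at the first negative autocorrelation) is one of several common choices and is my own assumption here.

```python
import numpy as np

def ess(x, max_lag=200):
    """Effective sample size: n / (1 + 2 * sum of autocorrelations)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    denom = (x * x).sum()
    acf = np.array([(x[:n - k] * x[k:]).sum() / denom
                    for k in range(1, min(max_lag, n))])
    # truncate the sum at the first negative autocorrelation
    neg = np.nonzero(acf < 0)[0]
    acf = acf[:neg[0]] if neg.size else acf
    return n / (1 + 2 * acf.sum())

# sanity check: nearly independent draws give ESS close to n
e_iid = ess(np.random.default_rng(0).normal(size=5000))
```

For a strongly autocorrelated chain the ESS drops well below the number of iterations, which is what produces the numbers of roughly 120 and 70 quoted above.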

5a) The prior for \(\nu\) is uniform on the unit interval; for \(\mu_1\) and \(\mu_2\) the priors are improper, proportional to a constant on the positive and negative numbers, respectively.

5b) Code
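The code is not included on this page. A minimal sketch of the Gibbs sampler under the priors from 5a) is given below: the flat one-sided priors give truncated-normal full conditionals for \(\mu_1\) and \(\mu_2\), sampled here by simple rejection. The simulated data, the rejection sampler, and all names are my own assumptions.

```python
import numpy as np

def trunc_normal(rng, mean, sd, positive):
    # rejection sampler for a one-sided truncated normal; adequate here
    # because most of the mass lies on the required side
    while True:
        x = rng.normal(mean, sd)
        if (x > 0) == positive:
            return x

def gibbs_two_means(y, n_samples=2000, nu=0.5, mu1=1.0, mu2=-1.0, rng=None):
    """Gibbs sampler with the 5a) priors: nu ~ U(0,1), flat priors for
    mu1 on (0, inf) and mu2 on (-inf, 0)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(y)
    draws = np.empty((n_samples, 3))
    for t in range(n_samples):
        p0 = nu * np.exp(-0.5 * (y - mu1) ** 2)
        p1 = (1 - nu) * np.exp(-0.5 * (y - mu2) ** 2)
        C0 = rng.random(n) < p0 / (p0 + p1)   # True where C_i = 0
        n0 = C0.sum()
        if 0 < n0 < n:                        # skip updates if a group is empty
            nu = rng.beta(n0 + 1, n - n0 + 1)
            # flat one-sided priors give truncated-normal full conditionals
            mu1 = trunc_normal(rng, y[C0].mean(), 1 / np.sqrt(n0), True)
            mu2 = trunc_normal(rng, y[~C0].mean(), 1 / np.sqrt(n - n0), False)
        draws[t] = nu, mu1, mu2
    return draws

# simulated example data (not the exam data)
rng = np.random.default_rng(2)
n = 1000
C = rng.random(n) >= 0.7                  # true nu = 0.7
y = rng.normal(np.where(C, -2.0, 2.0))    # true mu1 = 2, mu2 = -2
draws = gibbs_two_means(y, rng=rng)
```

Note that the sign constraints in the priors remove the label-switching problem that an unconstrained two-mean mixture would have.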

 

Published Mar. 10, 2021 8:16 PM - Last modified Mar. 16, 2023 2:15 PM