Bayesian machine learning

Contact person: Geir Storvik     
Keywords: Bayesian methodology, Machine learning, neural nets, sparse networks
Research group: Statistics and Data Science/Bayesian machine learning group
Department of Mathematics

Bayesian approaches to machine learning are of interest both because they allow knowledge to be incorporated into the learning process and because they account for uncertainty in a coherent way. Due to computational constraints, various approximation techniques are typically applied, and how much is lost through such approximations is unclear. Knowledge is usually incorporated through a probabilistic description (a prior). The properties of such priors, which are typically placed on latent variables in complex networks, are largely unknown. Furthermore, the priors in common use are very general and rarely encode real knowledge. Neural networks are typically large and complex, but may be simplified through a pruning procedure. An alternative approach is to start with simple models and iteratively add more complex (neural-network-type) terms. Bayesian versions of such an approach are currently under development.
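
To make these ideas concrete, the following is a minimal sketch (not code from the group): a tiny neural network is given a Gaussian prior on all weights, and its posterior is explored with a random-walk Metropolis sampler, yielding predictive uncertainty at a new input. The synthetic data, network size, prior and noise scales, and proposal step size are all illustrative assumptions.

```python
# Minimal sketch of Bayesian learning for a tiny neural network.
# All numerical choices below are illustrative, not project settings.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-d regression data (purely for illustration).
x = np.linspace(-3, 3, 40)[:, None]
y = np.sin(x[:, 0]) + 0.1 * rng.standard_normal(40)

n_hidden = 8
shapes = [(1, n_hidden), (n_hidden,), (n_hidden, 1), (1,)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(theta):
    """Split a flat parameter vector into weight/bias arrays."""
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def predict(theta, x):
    W1, b1, W2, b2 = unpack(theta)
    return np.tanh(x @ W1 + b1) @ W2 + b2

def log_post(theta, prior_sd=1.0, noise_sd=0.1):
    """Unnormalised log posterior: Gaussian prior + Gaussian likelihood."""
    log_prior = -0.5 * np.sum(theta ** 2) / prior_sd ** 2
    resid = y - predict(theta, x)[:, 0]
    log_lik = -0.5 * np.sum(resid ** 2) / noise_sd ** 2
    return log_prior + log_lik

# Random-walk Metropolis over the flattened parameter vector.
dim = sum(sizes)
theta = 0.1 * rng.standard_normal(dim)
lp = log_post(theta)
samples = []
for it in range(20000):
    prop = theta + 0.02 * rng.standard_normal(dim)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it > 10000 and it % 100 == 0:   # thin after burn-in
        samples.append(theta.copy())

# Posterior predictive mean and spread at a new input.
x_new = np.array([[0.5]])
preds = np.array([predict(s, x_new)[0, 0] for s in samples])
print(f"predictive mean {preds.mean():.3f}, sd {preds.std():.3f}")
```

Even for this toy model the sampler must evaluate thousands of parameter configurations, which is why the approximation, subsampling and sequential Monte Carlo techniques listed below are central research questions.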

Methodological research topics:

  • Prior specifications for learning sparse neural networks (see the sketch after this list)
  • Combining recent advances in general Markov chain Monte Carlo algorithms and subsampling approaches for computationally efficient Bayesian neural networks
  • Sequential Monte Carlo for Bayesian machine learning
  • Iterative learning from simple (parametric) to flexible (non-parametric/ML-type) modelling in a Bayesian framework
  • Conflict diagnostics and sensitivity analysis in Bayesian machine learning        
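
For the first topic, a sparsity-inducing prior can, for example, take a spike-and-slab form. The snippet below is a hedged illustration, not the group's actual prior specification: each weight is given a two-component Gaussian mixture with a narrow "spike" near zero and a wide "slab"; the mixture weight `p_slab` and the two scales are arbitrary illustrative values.

```python
# Illustrative spike-and-slab style prior for neural network weights.
import numpy as np
from scipy.stats import norm

def log_spike_slab_prior(w, p_slab=0.1, spike_sd=0.01, slab_sd=1.0):
    """Log prior density under a continuous spike-and-slab Gaussian mixture."""
    spike = (1.0 - p_slab) * norm.pdf(w, scale=spike_sd)
    slab = p_slab * norm.pdf(w, scale=slab_sd)
    return np.sum(np.log(spike + slab))

# Weights near zero are strongly favoured, so most weights are
# effectively pruned a posteriori, giving a sparse network.
w = np.array([0.0, 0.005, 0.8, -1.2])
print(log_spike_slab_prior(w))
```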


External partners:

  • Norwegian Computing Center (NR)