
Selected Projects

Scroll down to explore projects with tangible outcomes.

Preferential Normalizing Flows

Paper, Code

The project introduces a novel method for eliciting probability densities as normalizing flows using only preference data. Some applications:

  • Learning probabilistic models from preference data, where the model can be almost any normalizing flow architecture
  • Belief elicitation that only asks the expert to compare events and choose which one is most probable
  • Probabilistic reward modelling, where the learned reward model outputs the probability of an event being the most preferred
Figure: Learning the belief density (contour) as a normalizing flow (heatmap) from rankings (numbers).
Figure: Easy integration with normflows.
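
The snippet below is a minimal sketch of how such a flow could be fitted from pairwise preferences with the normflows package. It reuses the Real NVP setup from the normflows README and a simple Bradley-Terry-style likelihood that pushes the flow's log-density higher at the preferred point; the architecture, toy data, and loss are illustrative assumptions, not the exact model from the paper.

```python
import torch
import normflows as nf

# Real NVP flow in 2D, following the setup in the normflows README.
base = nf.distributions.base.DiagGaussian(2)
flows = []
for _ in range(8):
    param_map = nf.nets.MLP([1, 64, 64, 2], init_zeros=True)
    flows.append(nf.flows.AffineCouplingBlock(param_map))
    flows.append(nf.flows.Permute(2, mode="swap"))
model = nf.NormalizingFlow(base, flows)

# Toy pairwise preferences: (preferred, rejected) pairs of 2D points.
# In practice these would come from an expert comparing candidate events.
preferred = torch.randn(100, 2) * 0.5
rejected = torch.randn(100, 2) * 0.5 + 2.0

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    optimizer.zero_grad()
    # Bradley-Terry-style likelihood: the preferred point should have a
    # higher log-density under the flow than the rejected one.
    margin = model.log_prob(preferred) - model.log_prob(rejected)
    loss = -torch.nn.functional.logsigmoid(margin).mean()
    loss.backward()
    optimizer.step()
```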

Projective Preferential Bayesian Optimization

Paper, Code

The project studied a new type of Bayesian optimization for learning user preferences or knowledge in high-dimensional spaces. The user provides input by choosing the preferred option using a slider. The method is able to find the minimum of a high-dimensional black-box function, a task that is often infeasible for existing preferential Bayesian optimization frameworks based on pairwise comparisons. Some applications:

  • Preference learning with an order of magnitude fewer questions than classical pairwise comparison-based methods
  • Expert knowledge elicitation whose outcome can be used, for example, to speed up downstream optimization tasks
Figure: Learning expert knowledge over optimal molecules.
Figure: Knowledge is represented as a Gaussian process.
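
Below is a minimal sketch of the kind of query the method is built on, assuming the projective-query formulation described above: the user moves a slider along one direction xi through a reference point, picks the preferred point on that line, and that choice is treated as preferred over every other point on the same line. Function and variable names here are hypothetical, not taken from the released code.

```python
import numpy as np

def projective_query(x_ref, xi, lo, hi, n_grid=25):
    """Build a 1D slice (slider) through a high-dimensional point x_ref
    along direction xi, discretized into n_grid candidate points."""
    alphas = np.linspace(lo, hi, n_grid)
    candidates = x_ref[None, :] + alphas[:, None] * xi[None, :]
    return alphas, candidates

# Hypothetical 5D design space and a coordinate direction for the slider.
rng = np.random.default_rng(0)
x_ref = rng.uniform(0.0, 1.0, size=5)
xi = np.eye(5)[2]                       # slide along the third coordinate

alphas, candidates = projective_query(x_ref, xi, lo=-0.5, hi=0.5)

# The user (simulated here) picks the preferred point on the slider.
user_choice = int(np.argmin(np.abs(alphas)))    # stand-in for the expert's answer
winner = candidates[user_choice]

# One slider answer yields many pairwise preferences: the chosen point is
# preferred over every other point on the same line.
preference_pairs = [(winner, candidates[i])
                    for i in range(len(candidates)) if i != user_choice]
```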

Multi-Fidelity Bayesian Optimization with Unreliable Information Sources

Paper, Code

Multi-fidelity Bayesian optimization (MFBO) integrates cheaper, lower-fidelity approximations of the objective function into the optimization process. The project introduces rMFBO (robust MFBO), a methodology to make any MFBO algorithm robust to the addition of unreliable information sources. Some applications:

  • Utilizing information sources of uncertain accuracy to accelerate optimization
  • Incorporating human input as an information source while ensuring it does not negatively impact the optimization
Figure: Robustness against unreliable information sources.
Figure: MFBO example.
Figure: Easy integration with BoTorch.
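
The snippet below is a minimal sketch of the BoTorch building block this setting rests on: a multi-fidelity GP whose last input column marks the fidelity of each observation, fitted on a toy objective where the cheap low-fidelity source is systematically biased (the kind of unreliable information source rMFBO guards against). The toy data and objective are illustrative assumptions, the rMFBO wrapper itself is not shown, and the data_fidelities keyword follows recent BoTorch versions (older releases may name it differently).

```python
import torch
from botorch.models.gp_regression_fidelity import SingleTaskMultiFidelityGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood

torch.manual_seed(0)

# Toy data: 2 design dimensions plus a fidelity column (last), fidelity in {0.5, 1.0}.
train_X = torch.rand(30, 3, dtype=torch.double)
train_X[:, -1] = torch.randint(0, 2, (30,)).double() * 0.5 + 0.5

# Hypothetical objective; the low-fidelity source is cheap but systematically biased.
target = torch.sin(6.0 * train_X[:, 0]) + train_X[:, 1]
bias = 0.4 * (1.0 - train_X[:, -1])
train_Y = (target + bias).unsqueeze(-1)

# Multi-fidelity GP: column index 2 is treated as the fidelity parameter.
model = SingleTaskMultiFidelityGP(train_X, train_Y, data_fidelities=[2])
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)

# Posterior prediction at the target (highest) fidelity.
test_X = torch.cat([torch.rand(5, 2, dtype=torch.double),
                    torch.ones(5, 1, dtype=torch.double)], dim=-1)
posterior = model.posterior(test_X)
print(posterior.mean.squeeze(-1))
```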