Selected Projects

Scroll down to explore projects with tangible outcomes.

Preferential Normalizing Flows

Paper, Code

The project studied a method for learning probability densities as normalizing flows using only preference data. Some applications:

  • Prior elicitation: joint prior distribution from expert comparisons
  • AI alignment: probabilistic reward model from human preferences
  • Density estimation: when the target density cannot be sampled or evaluated, but only relative information is available
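Learning a density from preferences hinges on a choice model linking density values to the expert's answers. A minimal sketch of one such link, assuming a Bradley-Terry-style noise model in which the expert prefers point a over point b with probability given by a sigmoid of the log-density difference (the paper's exact likelihood may differ; in practice the log densities would come from the flow, e.g. a normflows model's `log_prob`):

```python
import math

def pref_log_likelihood(log_p_a, log_p_b):
    """Log-probability that the expert prefers point a over point b.

    Bradley-Terry-style model: P(a > b) = sigmoid(log p(a) - log p(b)).
    This is an illustrative assumption, not necessarily the paper's
    exact noise model. Computed as -log(1 + exp(-diff)) for stability.
    """
    return -math.log1p(math.exp(-(log_p_a - log_p_b)))

# Example: point a is 10x denser than b under the current flow,
# so the model says a is preferred with probability 10/11.
ll = pref_log_likelihood(math.log(0.2), math.log(0.02))
prob = math.exp(ll)  # ≈ 0.909
```

Maximizing this likelihood over observed comparisons (summed over the dataset) is what drives the flow's parameters toward the expert's belief density.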
Learning the belief density (contour) as a normalizing flow (heatmap) from rankings (numbers)

Easy integration with normflows

Projective Preferential Bayesian Optimization

Paper, Code

The project studied a new type of Bayesian optimization for learning user preferences or knowledge in high-dimensional spaces. The user provides input by choosing the preferred option using a slider. The method can find the minimum of a high-dimensional black-box function, a task that is often infeasible for existing preferential Bayesian optimization frameworks based on pairwise comparisons. Some applications:

  • Preference learning with an order of magnitude fewer questions than classical pairwise comparison-based methods
  • Expert knowledge elicitation, with an outcome that can be used, for example, to speed up downstream optimization tasks
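The slider query can be thought of as a one-dimensional slice through the search space: the user is anchored at a point, shown a direction, and asked for the scalar position along that direction they prefer most. A toy sketch with a simulated user (the function, direction, and grid here are illustrative, not the paper's setup):

```python
def projective_query(f, xi, x, lo=0.0, hi=1.0, n_grid=101):
    """Simulate one slider answer: the 'user' returns the scalar alpha
    along direction xi, anchored at x, whose point x + alpha * xi they
    prefer, i.e. the minimizer of f on that 1-D slice.

    Toy stand-in for a real human answer; in the actual method the
    answer feeds a Gaussian process model of the user's knowledge.
    """
    alphas = [lo + (hi - lo) * i / (n_grid - 1) for i in range(n_grid)]
    def point(a):
        return [xj + a * xij for xj, xij in zip(x, xi)]
    best = min(alphas, key=lambda a: f(point(a)))
    return best, point(best)

# Example: quadratic 'disutility' in 5-D, slider along the first axis.
f = lambda z: sum((zj - 0.3) ** 2 for zj in z)
xi = [1.0, 0.0, 0.0, 0.0, 0.0]
x = [0.0] * 5
alpha, z = projective_query(f, xi, x)  # alpha ≈ 0.3
```

Each such answer gives far more information than a single pairwise comparison, which is why the method needs an order of magnitude fewer questions.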
Learning expert knowledge over optimal molecules

Knowledge is represented as a Gaussian process

Multi-Fidelity Bayesian Optimization with Unreliable Information Sources

Paper, Code

Multi-fidelity Bayesian optimization (MFBO) integrates cheaper, lower-fidelity approximations of the objective function into the optimization process. The project introduces rMFBO (robust MFBO), a methodology to make any MFBO algorithm robust to the addition of unreliable information sources. Some applications:

  • Utilizing information sources of uncertain accuracy to accelerate optimization
  • Incorporating human input as an information source while ensuring it doesn't negatively impact optimization
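The core tension in rMFBO is deciding when a cheap source is safe to query. A toy decision rule in that spirit, shown here only to illustrate the trade-off (the paper's actual criterion is a statistical control on the optimization outcome, not this simple threshold, and all names below are hypothetical):

```python
def robust_fidelity_choice(candidate, surrogate_gap, gap_threshold,
                           low_cost, high_cost):
    """Pick an information source for evaluating `candidate`.

    Use the cheap, possibly unreliable source only when the estimated
    gap between it and the true objective at the candidate is small;
    otherwise pay for the reliable high-fidelity source.
    Illustrative sketch, not the rMFBO algorithm itself.
    """
    gap = surrogate_gap(candidate)
    if gap <= gap_threshold:
        return "low", low_cost
    return "high", high_cost

choice, cost = robust_fidelity_choice(
    candidate=0.5,
    surrogate_gap=lambda x: abs(x - 0.4),  # assumed gap model
    gap_threshold=0.2,
    low_cost=1.0,
    high_cost=10.0,
)
# Estimated gap is 0.1, within the threshold, so the cheap source is used.
```

Wrapping an existing MFBO loop with a guard of this kind is what lets unreliable sources (including human input) contribute speed-ups without corrupting the final result.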
Robustness against unreliable information sources

MFBO Example

Easy integration with BoTorch