We consider the problem of learning a probability density, representing an underlying belief (of a human or an LLM), solely from pairwise comparisons. For score-based methods such as diffusion models, we show a theoretical result that lets the problem be cast as score-based density estimation rather than maximum-likelihood estimation (MLE). This enables the density to be learned as a diffusion model, bridging the gap between preference learning and generative modeling. Some applications (a toy sketch follows the list):
Prompt an LLM with simple comparative questions
A diffusion model trained on the comparisons, reflecting the LLM's belief, resembles the true data distribution
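The toy sketch mentioned above, assuming a Bradley-Terry response model p(x preferred over y) = sigmoid(f(x) - f(y)) with f equal to the log-density up to an additive constant; the `EnergyNet` model and the Langevin sampler are our own simplified energy-based illustration, not the project's diffusion-model training:

```python
# Our illustration only: a simplified energy-based stand-in for the idea.
# Comparisons are assumed to follow a Bradley-Terry model,
#   p(x preferred over y) = sigmoid(f(x) - f(y)),
# where f(x) = log p(x) + const. Fitting f on comparisons gives access to the
# score grad_x log p(x) = grad_x f(x), which is all Langevin sampling needs.
import torch
import torch.nn as nn

class EnergyNet(nn.Module):  # hypothetical name, our own toy architecture
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64), nn.SiLU(),
            nn.Linear(64, 64), nn.SiLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # f(x): log-density up to a constant

# Synthetic "belief": comparisons sampled from a standard Gaussian density.
torch.manual_seed(0)
x, y = torch.randn(4096, 2), torch.randn(4096, 2)
true_logp = lambda z: -0.5 * (z ** 2).sum(-1)
labels = torch.bernoulli(torch.sigmoid(true_logp(x) - true_logp(y)))

f = EnergyNet()
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for _ in range(500):
    loss = nn.functional.binary_cross_entropy_with_logits(f(x) - f(y), labels)
    opt.zero_grad(); loss.backward(); opt.step()

# Unadjusted Langevin dynamics driven by the learned score grad f(z).
step = 0.01
z = torch.randn(1000, 2)
for _ in range(200):
    z = z.detach().requires_grad_(True)
    score = torch.autograd.grad(f(z).sum(), z)[0]
    z = z + step * score + (2 * step) ** 0.5 * torch.randn_like(z)
print(z.mean(0), z.std(0))  # should approach N(0, I) statistics
```

Because the Bradley-Terry likelihood depends only on differences f(x) - f(y), the unknown normalizing constant cancels, which is why score-based (gradient-of-log-density) training is a natural fit for preference data.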
We show how to learn a probability density as a normalizing flow from preference data.
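A minimal sketch of one way such a flow could be fit, assuming again a Bradley-Terry comparison model whose utility is the flow's exact log-density; the `Coupling` layer, synthetic data, and hyperparameters are our own toy choices, not the paper's method:

```python
# Our toy sketch: fit a normalizing flow q so that its exact log-density
# explains pairwise preferences, p(x over y) = sigmoid(log q(x) - log q(y)).
import torch
import torch.nn as nn

class Coupling(nn.Module):
    """RealNVP-style affine coupling for 2-D inputs (hypothetical toy layer)."""
    def __init__(self, flip):
        super().__init__()
        self.flip = flip
        self.net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))

    def forward(self, x):  # x -> z, plus the log|det Jacobian| of the map
        if self.flip:
            x = x.flip(1)
        x0, x1 = x[:, :1], x[:, 1:]
        s, t = self.net(x0).chunk(2, dim=1)
        z = torch.cat([x0, x1 * torch.exp(s) + t], dim=1)
        if self.flip:
            z = z.flip(1)
        return z, s.squeeze(1)

def log_q(layers, base, x):
    """Exact flow log-density via the change-of-variables formula."""
    logdet = torch.zeros(x.shape[0])
    for layer in layers:
        x, ld = layer(x)
        logdet = logdet + ld
    return base.log_prob(x).sum(-1) + logdet

torch.manual_seed(0)
base = torch.distributions.Normal(0.0, 1.0)
layers = nn.ModuleList([Coupling(flip=i % 2 == 1) for i in range(4)])

# Synthetic preferences from a hidden Gaussian belief centered at (2, -1).
mu = torch.tensor([2.0, -1.0])
true_logp = lambda z: -0.5 * ((z - mu) ** 2).sum(-1)
x, y = 3 * torch.randn(4096, 2), 3 * torch.randn(4096, 2)
labels = torch.bernoulli(torch.sigmoid(true_logp(x) - true_logp(y)))

opt = torch.optim.Adam(layers.parameters(), lr=1e-3)
for _ in range(1000):
    logits = log_q(layers, base, x) - log_q(layers, base, y)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# After training, log_q should peak near mu, the mode of the hidden belief.
print(log_q(layers, base, mu[None]).item(), log_q(layers, base, torch.zeros(1, 2)).item())
```

Unlike the diffusion route above, the flow gives a normalized density directly, so the comparison likelihood can be optimized end-to-end by gradient descent.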
We study a new type of Bayesian optimization for learning user preferences or knowledge in high-dimensional spaces. The user provides input by selecting the preferred point with a slider. The method can find the minimum of a high-dimensional black-box function, a task that is often infeasible for existing preferential Bayesian optimization frameworks based on pairwise comparisons.
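A toy sketch of the slider interaction loop, with the user simulated by a hidden utility function; it omits the Gaussian-process surrogate and acquisition strategy a real preferential Bayesian optimization method would use, and all names here are our own illustration:

```python
# Our simplified illustration of slider-based preferential optimization.
# Each query shows the user a 1-D slider (a line segment through the current
# point in d-dimensional space); the user returns the preferred point on it.
import numpy as np

rng = np.random.default_rng(0)
d = 20                                              # dimensionality of the space
user_utility = lambda x: -np.sum((x - 0.5) ** 2)    # hidden belief, unknown to us

def slider_query(center, direction, n_grid=101):
    """Present a slider through `center` along `direction`; return the point
    the (simulated) user prefers most on it."""
    ts = np.linspace(-1.0, 1.0, n_grid)
    candidates = center + ts[:, None] * direction
    return candidates[np.argmax([user_utility(c) for c in candidates])]

x = rng.uniform(0, 1, d)                            # initial guess
for it in range(50):
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    x = slider_query(x, direction)                  # user moves the slider
print("best utility found:", user_utility(x))
```

The key property the sketch illustrates: each slider query returns the best point along an entire line, so one interaction carries far more information than a single pairwise comparison, which is what makes high-dimensional spaces tractable.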
Multi-fidelity Bayesian optimization (MFBO) integrates cheaper, lower-fidelity approximations of the objective function into the optimization process. This project introduces rMFBO (robust MFBO), a methodology that makes any MFBO algorithm robust to the addition of unreliable information sources.
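One schematic way to picture robustness to an unreliable source (our own simplification, not the rMFBO mechanism): track how a cheap source deviates from the true objective at points where both have been evaluated, and stop consulting it once the deviation grows:

```python
# Schematic sketch, our illustration only: guard an optimization loop so a
# cheap low-fidelity source is used only while it agrees with sparse
# high-fidelity checks; otherwise fall back to the expensive objective alone.
import numpy as np

rng = np.random.default_rng(0)
f_hi = lambda x: (x - 0.3) ** 2 + 0.01 * rng.normal()  # expensive objective
f_lo = lambda x: -f_hi(x)                              # an unreliable cheap source

deviations = []          # |f_lo - f_hi| at points where both were evaluated
best_x, best_y = None, np.inf

for it in range(20):
    cand = rng.uniform(-1.0, 1.0, size=32)
    trusted = len(deviations) < 3 or np.mean(deviations) < 0.2
    if trusted:
        # Pre-screen candidates cheaply, then verify the winner at high fidelity.
        x = cand[np.argmin([f_lo(c) for c in cand])]
        y_lo, y = f_lo(x), f_hi(x)
        deviations.append(abs(y_lo - y))
    else:
        # Source flagged unreliable: behave like plain single-fidelity search.
        x = cand[0]
        y = f_hi(x)
    if y < best_y:
        best_x, best_y = x, y

print("best x found:", round(float(best_x), 3),
      "| low-fidelity source trusted at end:", trusted)
```

Run as written, the misleading source is flagged within a few iterations and the loop degrades gracefully to single-fidelity search, which is the behavior the robustness guarantee is after.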