Learning to Solve the Constrained Most Probable Explanation Task in Probabilistic Graphical Models

Research output: Contribution to journal › Conference article › peer-review

Abstract

We propose a self-supervised learning approach for solving the following constrained optimization task in log-linear models or Markov networks. Let f and g be two log-linear models defined over the sets X and Y of random variables respectively. Given an assignment x to all variables in X (evidence) and a real number q, the constrained most-probable explanation (CMPE) task seeks to find an assignment y to all variables in Y such that f(x, y) is maximized and g(x, y) ≤ q. In our proposed self-supervised approach, given assignments x to X (data), we train a deep neural network that learns to output near-optimal solutions to the CMPE problem without requiring access to any pre-computed solutions. The key idea in our approach is to use first principles and approximate inference methods for CMPE to derive novel loss functions that seek to push infeasible solutions towards feasible ones and feasible solutions towards optimal ones. We analyze the properties of our proposed method and experimentally demonstrate its efficacy on several benchmark problems.
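The abstract's central idea, a loss that penalizes infeasible solutions in proportion to their constraint violation and rewards feasible ones for higher objective value, can be illustrated with a minimal sketch. This is a toy scalar version under assumed names (`cmpe_loss`, `alpha`), not the paper's actual loss functions or network architecture.

```python
def cmpe_loss(f_val: float, g_val: float, q: float, alpha: float = 10.0) -> float:
    """Toy self-supervised CMPE-style loss (illustrative only).

    Infeasible candidates (g(x, y) > q) are penalized proportionally
    to the constraint violation, pushing them toward feasibility;
    feasible candidates are scored by -f(x, y), pushing them toward
    higher objective value. `alpha` is an assumed penalty weight.
    """
    violation = max(g_val - q, 0.0)
    if violation > 0:
        return alpha * violation   # infeasible: minimize the violation
    return -f_val                  # feasible: maximize f by minimizing -f

# A feasible candidate is scored by its (negated) objective value;
# an infeasible one by its weighted constraint violation.
feasible_score = cmpe_loss(f_val=3.0, g_val=0.5, q=1.0)    # -> -3.0
infeasible_score = cmpe_loss(f_val=5.0, g_val=2.0, q=1.0)  # -> 10.0
```

In the paper's setting the candidate y is produced by a deep network from the evidence x, and f and g are log-linear models; the sketch above only conveys the two-regime shape of the loss.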

Original language: English (US)
Pages (from-to): 2791-2799
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 238
State: Published - 2024
Externally published: Yes
Event: 27th International Conference on Artificial Intelligence and Statistics, AISTATS 2024 - Valencia, Spain
Duration: May 2, 2024 - May 4, 2024

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

