Gaussian Process Predictions with Uncertain Inputs Enabled by Uncertainty-Tracking Processor Architectures
J. Petangoda, C. Samarakoon, and P. Stanley-Marbell, In NeurIPS 2024 Workshop Machine Learning with new Compute Paradigms, 2024.
Abstract
Gaussian Processes (GPs) are theoretically-grounded models that capture both aleatoric and epistemic uncertainty, but the well-known solutions for the GP predictive posterior distribution apply only to deterministic inputs. If the input is uncertain, closed-form solutions are not generally available, and approximation schemes such as moment-matching and Monte Carlo simulation must be used. Moment-matching is only available under restricted conditions on the input distribution and the GP prior and will miss the nuances of the predictive posterior distribution; Monte Carlo simulation can be computationally expensive. In this article, we present a general method that uses a recently-developed processor architecture [1, 2] capable of performing arithmetic on distributions to implicitly calculate the predictive posterior distribution with uncertain inputs. We show that our method, implemented to run on a commercially-available implementation [3] of an uncertainty-tracking processor architecture, captures the nuances of the predictive posterior distribution while being ∼108.80x faster than Monte Carlo simulation.
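To make the baseline the abstract refers to concrete, the following is a minimal sketch of Monte Carlo propagation of input uncertainty through a standard GP predictive posterior. This is an illustrative toy in plain NumPy, not the paper's processor-based method; the kernel, data, and input distribution are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: k(a, b) = variance * exp(-(a - b)^2 / (2 * lengthscale^2))
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Toy training data: noisy observations of sin(x) (assumed for illustration)
X = np.linspace(-3.0, 3.0, 20)
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)
noise_var = 0.1 ** 2

# Cholesky factorization of the regularized kernel matrix
K = rbf(X, X) + noise_var * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

def predict(xs):
    # Closed-form GP predictive mean/variance, valid for DETERMINISTIC inputs xs
    Ks = rbf(xs, X)
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = rbf(xs, xs).diagonal() - np.sum(v ** 2, axis=0)
    return mean, var

# Uncertain test input: x* ~ N(0.5, 0.2^2). Monte Carlo: sample the input,
# push each sample through the deterministic posterior, then mix the results.
x_samples = rng.normal(0.5, 0.2, size=5000)
mu_s, var_s = predict(x_samples)
mc_mean = mu_s.mean()
# Law of total variance: E[Var] + Var[E]; the second term is the extra
# spread contributed by the input uncertainty itself.
mc_var = var_s.mean() + mu_s.var()
print(f"MC predictive mean: {mc_mean:.3f}, MC predictive variance: {mc_var:.4f}")
```

Note that the Monte Carlo mixture is generally non-Gaussian, which is why a moment-matched (Gaussian) summary can miss features of the true predictive posterior; the cost of drawing and evaluating many samples is the overhead the paper's uncertainty-tracking hardware avoids.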
Cite as:
Petangoda, Janith, Chatura Samarakoon, and Phillip Stanley-Marbell. "Gaussian Process Predictions with Uncertain Inputs Enabled by Uncertainty-Tracking Processor Architectures." In NeurIPS 2024 Workshop Machine Learning with new Compute Paradigms, 2024.
Bibtex:
@inproceedings{petangoda2024gaussian,
title={Gaussian Process Predictions with Uncertain Inputs Enabled by Uncertainty-Tracking Processor Architectures},
author={Petangoda, Janith and Samarakoon, Chatura and Stanley-Marbell, Phillip},
booktitle={NeurIPS 2024 Workshop Machine Learning with new Compute Paradigms},
year={2024}
}