spateo.tdr.interpolations.interpolation_gaussianprocess.gp_models
Classes
- Approx_GPModel: The base class for any Gaussian process latent function to be used in conjunction with approximate inference (typically stochastic variational inference).
- Exact_GPModel: The base class for any Gaussian process latent function to be used in conjunction with exact inference.
Module Contents
- class spateo.tdr.interpolations.interpolation_gaussianprocess.gp_models.Approx_GPModel(inducing_points)
Bases: gpytorch.models.ApproximateGP
The base class for any Gaussian process latent function to be used in conjunction with approximate inference (typically stochastic variational inference). This base class can be used to implement most inducing point methods where the variational parameters are learned directly.
- Parameters:
  - variational_strategy (_VariationalStrategy): The strategy that determines how the model marginalizes over the variational distribution (over inducing points) to produce the approximate posterior distribution (over data).
The forward() function should describe how to compute the prior latent distribution on a given input. Typically, this will involve a mean and kernel function. The result must be a MultivariateNormal.

Example
>>> class MyVariationalGP(gpytorch.models.ApproximateGP):
>>>     def __init__(self, variational_strategy):
>>>         super().__init__(variational_strategy)
>>>         self.mean_module = gpytorch.means.ZeroMean()
>>>         self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
>>>
>>>     def forward(self, x):
>>>         mean = self.mean_module(x)
>>>         covar = self.covar_module(x)
>>>         return gpytorch.distributions.MultivariateNormal(mean, covar)
>>>
>>> # variational_strategy = ...
>>> model = MyVariationalGP(variational_strategy)
>>> likelihood = gpytorch.likelihoods.GaussianLikelihood()
>>>
>>> # optimization loop for variational parameters...
>>>
>>> # test_x = ...;
>>> model(test_x)  # Returns the approximate GP latent function at test_x
>>> likelihood(model(test_x))  # Returns the (approximate) predictive posterior distribution at test_x
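The base-class example above only defines the model; a minimal training sketch for Approx_GPModel itself follows. It assumes the class builds its own mean, kernel, and variational strategy from the supplied inducing points (its only documented constructor argument), and the toy data, learning rate, and iteration count are illustrative rather than part of the spateo API:

>>> import torch
>>> import gpytorch
>>> from spateo.tdr.interpolations.interpolation_gaussianprocess.gp_models import Approx_GPModel
>>>
>>> # Illustrative toy data: 500 one-dimensional inputs with noisy targets
>>> train_x = torch.linspace(0, 1, 500).unsqueeze(-1)
>>> train_y = torch.sin(6.28 * train_x.squeeze(-1)) + 0.1 * torch.randn(500)
>>>
>>> # A subset of the inputs serves as the initial inducing-point locations
>>> model = Approx_GPModel(inducing_points=train_x[:50])
>>> likelihood = gpytorch.likelihoods.GaussianLikelihood()
>>>
>>> model.train()
>>> likelihood.train()
>>> optimizer = torch.optim.Adam(list(model.parameters()) + list(likelihood.parameters()), lr=0.01)
>>> # The variational ELBO is the standard objective for stochastic variational inference
>>> mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))
>>>
>>> for _ in range(200):
>>>     optimizer.zero_grad()
>>>     loss = -mll(model(train_x), train_y)
>>>     loss.backward()
>>>     optimizer.step()
>>>
>>> model.eval()
>>> likelihood.eval()
>>> with torch.no_grad():
>>>     test_x = torch.linspace(0, 1, 20).unsqueeze(-1)
>>>     pred = likelihood(model(test_x))  # approximate predictive posterior at test_x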
- class spateo.tdr.interpolations.interpolation_gaussianprocess.gp_models.Exact_GPModel(train_x, train_y, likelihood)
Bases: gpytorch.models.ExactGP
The base class for any Gaussian process latent function to be used in conjunction with exact inference.
- Parameters:
  - train_inputs (torch.Tensor): (size n x d) The training features \(\mathbf{X}\).
  - train_targets (torch.Tensor): (size n) The training targets \(\mathbf{y}\).
  - likelihood (GaussianLikelihood): The Gaussian likelihood that defines the observational distribution. Since we’re using exact inference, the likelihood must be Gaussian.
The forward() function should describe how to compute the prior latent distribution on a given input. Typically, this will involve a mean and kernel function. The result must be a MultivariateNormal.

Calling this model will return the posterior of the latent Gaussian process when conditioned on the training data. The output will be a MultivariateNormal.

Example
>>> class MyGP(gpytorch.models.ExactGP):
>>>     def __init__(self, train_x, train_y, likelihood):
>>>         super().__init__(train_x, train_y, likelihood)
>>>         self.mean_module = gpytorch.means.ZeroMean()
>>>         self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
>>>
>>>     def forward(self, x):
>>>         mean = self.mean_module(x)
>>>         covar = self.covar_module(x)
>>>         return gpytorch.distributions.MultivariateNormal(mean, covar)
>>>
>>> # train_x = ...; train_y = ...
>>> likelihood = gpytorch.likelihoods.GaussianLikelihood()
>>> model = MyGP(train_x, train_y, likelihood)
>>>
>>> # test_x = ...;
>>> model(test_x)  # Returns the GP latent function at test_x
>>> likelihood(model(test_x))  # Returns the (approximate) predictive posterior distribution at test_x
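For completeness, here is a minimal training-and-prediction sketch for Exact_GPModel, assuming its internals mirror the MyGP example above; the toy data and optimizer settings are illustrative rather than prescribed by spateo:

>>> import torch
>>> import gpytorch
>>> from spateo.tdr.interpolations.interpolation_gaussianprocess.gp_models import Exact_GPModel
>>>
>>> # Illustrative toy data: 100 one-dimensional inputs with noisy targets
>>> train_x = torch.linspace(0, 1, 100)
>>> train_y = torch.sin(6.28 * train_x) + 0.1 * torch.randn(100)
>>>
>>> likelihood = gpytorch.likelihoods.GaussianLikelihood()
>>> model = Exact_GPModel(train_x, train_y, likelihood)
>>>
>>> model.train()
>>> likelihood.train()
>>> # For ExactGP, the likelihood is a submodule, so model.parameters() covers it
>>> optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
>>> # The exact marginal log likelihood is the standard objective for exact inference
>>> mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
>>>
>>> for _ in range(50):
>>>     optimizer.zero_grad()
>>>     loss = -mll(model(train_x), train_y)
>>>     loss.backward()
>>>     optimizer.step()
>>>
>>> model.eval()
>>> likelihood.eval()
>>> with torch.no_grad(), gpytorch.settings.fast_pred_var():
>>>     test_x = torch.linspace(0, 1, 20)
>>>     pred = likelihood(model(test_x))  # predictive posterior conditioned on the training data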