Positive definiteness error due to import of pyro

I have just done so and have found the reason for the difference/error.

I have been running the following with the debugger:

import torch
import pyro

cov = torch.load('cov_mat')
dis = torch.distributions.MultivariateNormal(torch.zeros(len(cov)), cov)
print(dis.sample())

with a breakpoint on the line with “dis = torch …”.

When the import of pyro is commented out and the script is run, the _PositiveDefinite.check() method from torch.distributions.constraints is called to verify positive definiteness.
That method checks that the matrix is symmetric and that a Cholesky factorization succeeds.

Here is the function:

class _PositiveDefinite(_Symmetric):
    """
    Constrain to positive-definite matrices.
    """
    def check(self, value):
        sym_check = super().check(value)
        if not sym_check.all():
            return sym_check
        return torch.linalg.cholesky_ex(value).info.eq(0)
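To make the behavior of this check concrete, here is a small sketch (the 2x2 matrices are made up for illustration, not from the original script) of a simplified version of the symmetry-plus-Cholesky test:

```python
import torch

def cholesky_check(value):
    # Simplified version of torch's unpatched check: require symmetry,
    # then require info == 0 from an attempted Cholesky factorization.
    if not torch.allclose(value, value.mT):
        return False
    return bool(torch.linalg.cholesky_ex(value).info.eq(0))

pd = torch.tensor([[2.0, 0.5], [0.5, 1.0]])   # positive definite
ind = torch.tensor([[1.0, 2.0], [2.0, 1.0]])  # eigenvalues -1 and 3

print(cholesky_check(pd))   # True
print(cholesky_check(ind))  # False: the factorization fails
```

Note that cholesky_ex does not raise on failure; it reports it through the returned info tensor, which is why it is suitable for a constraint check.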

When importing pyro, on the other hand, the patched check from pyro.distributions.torch_patch gets called instead. It checks whether the smallest eigenvalue of each matrix is positive (torch.linalg.eigvalsh returns eigenvalues in ascending order, so [:1] selects the smallest) and is defined as follows:

@patch_dependency("torch.distributions.constraints._PositiveDefinite.check")
def _PositiveDefinite_check(self, value):
    matrix_shape = value.shape[-2:]
    batch_shape = value.shape[:-2]
    flattened_value = value.reshape((-1,) + matrix_shape)
    return torch.stack(
        [torch.linalg.eigvalsh(v)[:1] > 0.0 for v in flattened_value]
    ).view(batch_shape)
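Since torch.linalg.eigvalsh returns eigenvalues in ascending order, the [:1] in the patch keeps only the smallest eigenvalue of each matrix. A small illustration (the matrix is made up):

```python
import torch

m = torch.tensor([[1.0, 2.0], [2.0, 1.0]])  # eigenvalues are -1 and 3
evals = torch.linalg.eigvalsh(m)            # ascending order: smallest first
smallest = evals[:1]                        # what the patched check tests
print(smallest > 0.0)                       # tensor([False])
```

So the patched check fails as soon as the smallest computed eigenvalue dips below zero, even by a tiny amount.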

In the first scenario the matrix passes both the symmetry test and the Cholesky factorization test, while in the second it fails because the smallest computed eigenvalue is negative.

I am using PyTorch 1.11.0, but I have checked and the function is unchanged in the newest version of PyTorch on GitHub.

Since my covariance matrix has some very small positive values on its diagonal, I am guessing that one method accumulates more floating-point error than the other, which could explain why one positive definiteness check fails while the other passes. A different numerical tolerance in the two checks could also be the reason.

From what I know of eigenvalue algorithms, the eigenvalues are computed iteratively. Since my matrix is rather large (2570 x 2570), I assume the accumulated floating-point error becomes significant, and since some of the positive values on the diagonal of my covariance matrix are very small, that probably produces a large error in the smallest computed eigenvalues. Perhaps this problem does not arise in a Cholesky factorization (I have not looked into the algorithms used for it). That would also explain why I did not get the error when using only subsections of the covariance matrix.
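If the covariance is only borderline positive definite numerically, a common workaround (standard practice, not something established in this discussion) is to add a small jitter to the diagonal so that both checks pass. A sketch with a made-up low-rank covariance:

```python
import torch

def add_jitter(cov, eps=1e-4):
    # Nudge the diagonal so that both the Cholesky-based and the
    # eigenvalue-based positive definiteness checks succeed.
    return cov + eps * torch.eye(cov.shape[-1], dtype=cov.dtype)

b = torch.randn(6, 3)
cov = b @ b.mT            # rank 3, so three zero eigenvalues: borderline PSD
jittered = add_jitter(cov)

# Both checks should now pass, and the distribution can be constructed.
dis = torch.distributions.MultivariateNormal(torch.zeros(6), jittered)
sample = dis.sample()
```

The choice of eps matters: it must dominate the numerical error of the eigenvalue computation without noticeably distorting the distribution.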