Torch: not enough memory

Hey @martinjankowiak,

Thanks for your response. I tried this as follows:

if args.store_params:
    losses.append(loss)

    # store the variational parameters on the CPU instead of converting
    # them to Python floats (the earlier approach is left in the comments)
    for i in range(2):
        if twoParams:
            # final_p_z_0[i].append(float(torch.sigmoid(pyro.param("p_z_q_top")[i]).detach().numpy()))
            final_p_z_0[i].append(torch.sigmoid(pyro.param("p_z_q_top")[i]).cpu())
        else:
            # final_p_z_0[i].append(float(torch.sigmoid(pyro.param("p_z_q_top")[:, i].mean()).detach().numpy()))
            final_p_z_0[i].append(torch.sigmoid(pyro.param("p_z_q_top")[:, i].mean()).cpu())

    for i in range(6):
        # final_w_top[i].append(float(pyro.param("mean_w_q_top")[i].item()))
        final_w_top[i].append(pyro.param("mean_w_q_top")[i].cpu())

    for i in range(15):
        # final_w_bottom[i].append(float(pyro.param("mean_w_q_bottom")[i].item()))
        final_w_bottom[i].append(pyro.param("mean_w_q_bottom")[i].cpu())
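For context, the block above runs inside my training loop; a simplified sketch of that loop is below (model, guide, data, num_steps and the optimiser settings are placeholders for the ones in my actual script):

import gc

import pyro
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

# simplified sketch of the surrounding training loop
svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())

for step in range(num_steps):
    loss = svi.step(data)    # svi.step() returns the loss as a plain Python float

    if args.store_params:
        losses.append(loss)
        # ... the parameter-storing loops shown above ...

    gc.collect()             # added as suggested in the GPU-memory topic linked below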

But sadly it did not help: the program still ran out of memory and crashed after 2690 iterations, the same point as before. Any other ideas would be greatly appreciated!

For reference, the related topics on this forum:
SVI-run consuming 100% memory -
There the number of samples was too large, which is not applicable to me, since I only draw 100 samples for my latent variables.

GPU memory usage increasing across batches -
Suggests using gc.collect(); I have implemented this (see the loop sketch above), but it does not resolve the issue at hand.

Implementing custom SVI objectives -
Touches on the subject and suggests that adding .detach() to the loss function should resolve this kind of issue. However, my loss variable already does not require a gradient, so I don't believe this is what I'm looking for.
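For completeness, here is how I understand that suggestion next to what I currently do; this is only a sketch, and loss_fn, model, guide and data are placeholders rather than the names in my script:

# pattern from the custom-objectives topic (as I understand it): the loss
# is a differentiable tensor, so it has to be detached before being stored
loss = loss_fn(model, guide, data)   # e.g. Trace_ELBO().differentiable_loss(...)
losses.append(loss.detach())

# what I do instead: svi.step() already returns a plain Python float,
# so there is no computation graph attached to the value I store
loss = svi.step(data)
losses.append(loss)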