Tensor copy construct warning in svi.step()

Hi, I’m new to Pyro and have been working through the tutorials to get familiar with it. While going through the inference intro I encountered a warning that made me wonder whether standard usage has changed.

I’m using Pyro 0.3.1 with Torch 1.0.1 on Windows, in a Jupyter Python 3 environment with ipykernel 4.8.2.

In the code block

guess = torch.tensor(8.5)

pyro.clear_param_store()
svi = pyro.infer.SVI(model=conditioned_scale,
                     guide=scale_parametrized_guide,
                     optim=pyro.optim.SGD({"lr": 0.001, "momentum":0.1}),
                     loss=pyro.infer.Trace_ELBO())

losses, a, b = [], [], []
num_steps = 2500
for t in range(num_steps):
    losses.append(svi.step(guess))
    a.append(pyro.param("a").item())
    b.append(pyro.param("b").item())

the svi.step(guess) call produces the following warning:

C:\Users\brynj\Anaconda3\lib\site-packages\ipykernel_launcher.py:2: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).

Is this a peculiarity of my environment, or does it come from a recent usage change in PyTorch (see this PyTorch post)? It looks like I could simply change the call to something like svi.step(guess.clone().detach()), but that seems cumbersome (perhaps a Pyro change is coming that pushes this down into svi.step() itself…). In the meantime, is there a safe convention for avoiding this warning?

Can you paste the full stack trace? I think this is likely an issue with your model or guide. As the warning message says, somewhere in your code you are using the following pattern:

new_tensor = torch.tensor(old_tensor)  # maybe you are passing guess into a tensor constructor?

As the warning message says, you’ll need to change this to use .clone() instead to get rid of the warning.
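
If you do want a copy, the pattern the warning recommends looks roughly like this (a sketch; old_tensor here just stands for whatever tensor is being wrapped):

new_tensor = old_tensor.clone().detach()  # copies the data and detaches it from the autograd graph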

Sure. Here is a reduced version of the code from the tutorial:

import numpy as np
import torch
import pyro
import pyro.infer
import pyro.optim
import pyro.distributions as dist

import traceback
import warnings
import sys

def warn_with_traceback(message, category, filename, lineno, file=None, line=None):
    # Print a full stack trace whenever a warning is raised, so we can see
    # exactly where the warning originates.
    log = file if hasattr(file, 'write') else sys.stderr
    traceback.print_stack(file=log)
    log.write(warnings.formatwarning(message, category, filename, lineno, line))

warnings.showwarning = warn_with_traceback

def scale(guess):
    weight = pyro.sample("weight", dist.Normal(guess, 1.0))
    return pyro.sample("measurement", dist.Normal(weight, 0.75))

conditioned_scale = pyro.condition(scale, data={"measurement": 9.5})

def scale_parametrized_guide(guess):
    a = pyro.param("a", torch.tensor(guess))
    b = pyro.param("b", torch.tensor(1.))
    return pyro.sample("weight", dist.Normal(a, torch.abs(b)))

guess = torch.tensor(8.5)

pyro.clear_param_store()
svi = pyro.infer.SVI(model=conditioned_scale,
                     guide=scale_parametrized_guide,
                     optim=pyro.optim.SGD({"lr": 0.001, "momentum":0.1}),
                     loss=pyro.infer.Trace_ELBO())
svi.step(guess)

and here is the warning with its stack trace, where it looks like the issue is the way the scale_parametrized_guide() function defines a:

File "C:\Users\beldaz\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\beldaz\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "C:\Users\beldaz\Anaconda3\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "C:\Users\beldaz\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 486, in start
self.io_loop.start()
File "C:\Users\beldaz\Anaconda3\lib\site-packages\tornado\platform\asyncio.py", line 127, in start
self.asyncio_loop.run_forever()
File "C:\Users\beldaz\Anaconda3\lib\asyncio\base_events.py", line 422, in run_forever
self._run_once()
File "C:\Users\beldaz\Anaconda3\lib\asyncio\base_events.py", line 1432, in _run_once
handle._run()
File "C:\Users\beldaz\Anaconda3\lib\asyncio\events.py", line 145, in _run
self._callback(*self._args)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\tornado\platform\asyncio.py", line 117, in _handle_events
handler_func(fileobj, events)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\tornado\stack_context.py", line 276, in null_wrapper
return fn(*args, **kwargs)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 450, in _handle_events
self._handle_recv()
File "C:\Users\beldaz\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 480, in _handle_recv
self._run_callback(callback, msg)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 432, in _run_callback
callback(*args, **kwargs)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\tornado\stack_context.py", line 276, in null_wrapper
return fn(*args, **kwargs)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2662, in run_cell
raw_cell, store_history, silent, shell_futures)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2785, in _run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2909, in run_ast_nodes
if self.run_code(code, result):
File "C:\Users\beldaz\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2963, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 38, in <module>
svi.step(guess)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\pyro\infer\svi.py", line 99, in step
loss = self.loss_and_grads(self.model, self.guide, *args, **kwargs)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\pyro\infer\trace_elbo.py", line 125, in loss_and_grads
for model_trace, guide_trace in self._get_traces(model, guide, *args, **kwargs):
File "C:\Users\beldaz\Anaconda3\lib\site-packages\pyro\infer\elbo.py", line 164, in _get_traces
yield self._get_trace(model, guide, *args, **kwargs)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\pyro\infer\trace_elbo.py", line 52, in _get_trace
"flat", self.max_plate_nesting, model, guide, *args, **kwargs)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\pyro\infer\enum.py", line 42, in get_importance_trace
guide_trace = poutine.trace(guide, graph_type=graph_type).get_trace(*args, **kwargs)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\pyro\poutine\trace_messenger.py", line 169, in get_trace
self(*args, **kwargs)
File "C:\Users\beldaz\Anaconda3\lib\site-packages\pyro\poutine\trace_messenger.py", line 147, in __call__
ret = self.fn(*args, **kwargs)
File "", line 27, in scale_parametrized_guide
a = pyro.param("a", torch.tensor(guess))
File "C:\Users\beldaz\Anaconda3\lib\warnings.py", line 99, in _showwarnmsg
msg.file, msg.line)
File "", line 15, in warn_with_traceback
traceback.print_stack(file=log)
C:\Users\beldaz\Anaconda3\lib\site-packages\ipykernel_launcher.py:27: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).

As I mentioned, this is the cause of the warning: guess is already a tensor, and you are passing it to the tensor constructor. If you use guess directly, you won’t see the warning. If you must create a defensive copy, you can do so as follows:

a = pyro.param("a", guess.clone())
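
For reference, the corrected guide from your snippet would then look roughly like this (same names as above; only the initialization of a changes):

def scale_parametrized_guide(guess):
    # guess is already a tensor, so clone() gives a defensive copy for the
    # initial value of "a" without triggering the copy-construct warning
    a = pyro.param("a", guess.clone())
    b = pyro.param("b", torch.tensor(1.))
    return pyro.sample("weight", dist.Normal(a, torch.abs(b)))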

Thanks, that’s helpful to know. For a new user (to both Pyro and Torch) this isn’t obvious: without knowing the conventions, the torch.tensor(guess) call looks like a reasonable way to invoke a copy constructor, and the Torch recommendations about using detach() and requires_grad_() are rather esoteric. I’ll work on the assumption that all such lines in the tutorials can safely be switched to the clone() method.

Thanks for pointing this out! I didn’t realize this was coming from our introductory tutorial (I thought you had modified it in some way), so my apologies for the extra round of debugging. :slightly_smiling_face: PyTorch added this warning in its latest release, and our older tutorial hasn’t been updated. We should definitely fix it so that other users don’t get confused by the warning. If you would like to submit a Pull Request to correct it, that would be great! Otherwise, I will get to it shortly and update the tutorial.

Thanks, I thought this might be the case. Yes, I’m happy to submit pull requests on the tutorials as I work through them.
