I’m rather new to Pyro, so I’m still digesting the codebase. While I know exact inference isn’t ideal for computational efficiency, I’m using that (for now) because I’m reproducing some prior research and want to stick as closely as I can to their process.
I’ve been using the `Search` class defined in `examples/rsa/search_inference.py`. Given that I have a few hundred/thousand K samples to draw for exact inference, I was wondering whether there’s a way to parallelize the search process.
My standard tool for parallelization is `dask`, but given some of its quirks (e.g., that all tasks must be futures), I don’t see a way to do it without significantly modifying a codebase I don’t fully understand.
`multiprocessing.Queue` complains that it can’t pickle the `EscapeMessenger`. I’ve also tried `multiprocess.Queue`, but that appears to deadlock. For folks who have parallelized their inference process, do you have any recommendations on next steps I could take?
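For context, the direction I’ve been considering is to sidestep the pickling problem entirely by constructing everything inside each worker process, so that only plain seeds and plain results ever cross the process boundary. A minimal sketch of that pattern is below; `run_search` is a hypothetical stand-in for building and running `Search` on a model, not actual Pyro code:

```python
import multiprocessing as mp
import random

def run_search(seed):
    # Hypothetical stand-in for the real work: construct the (unpicklable)
    # Search/model objects *inside* the worker, so only the integer seed
    # goes in and a plain Python result comes out.
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(100))

if __name__ == "__main__":
    # Nothing Pyro-related would need to be pickled here -- only the
    # seeds are sent to the workers and only the floats are returned.
    with mp.Pool(processes=4) as pool:
        results = pool.map(run_search, range(8))
    print(len(results))  # one result per seed
```

I don’t know whether `Search` keeps state that makes per-worker construction expensive, so this is only a sketch of the shape I have in mind, not something I’ve gotten working.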