Posterior sampling on a conditioned distribution

The question is how to sample from a conditioned distribution.

The problem is the following:
I want an agent able to simulate the behaviour of a subject taking part in an experiment.
The experiment consists of n trials. During the first k trials, the subject learns a variable (a binary shock-expectancy rating) given an observation (a visual stimulus). During the remaining trials, the subject uses the knowledge learned in the first k trials to predict the shock-expectancy rating given the observation. So my idea is to learn the parameters of the Multinomial distribution during the first k trials, then use the learned parameters of the distribution to predict the expectancy rating given the visual stimulus as input.
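Because the Dirichlet prior is conjugate to the Multinomial likelihood, the learning phase has a closed-form update: add the category counts observed in the first k trials to the prior concentration. A minimal sketch in plain NumPy (the variable names and counts here are illustrative, not taken from the model below):

```python
import numpy as np

def dirichlet_posterior(prior, counts):
    """Conjugate update: Dirichlet(prior) observed via Multinomial counts
    gives Dirichlet(prior + counts)."""
    return np.asarray(prior, dtype=float) + np.asarray(counts, dtype=float)

# Uniform prior over the two rating categories (low/high expectancy).
prior = np.ones(2)
# Hypothetical counts from the first k learning trials for one stimulus type.
counts = np.array([3, 7])

posterior = dirichlet_posterior(prior, counts)
# The posterior mean is a natural point estimate for the prediction phase.
theta_hat = posterior / posterior.sum()
print(theta_hat)  # [0.33333333 0.66666667]
```

This is the same computation the Dirichlet/Multinomial model performs, just without the sampling machinery.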
The training part is this:

import numpy as np
import torch
import pyro
import pyro.distributions as dist

# Uniform Dirichlet priors over the two rating categories.
prior_not_aggressive = torch.ones(2)
prior_aggressive = torch.ones(2)

def model(data):
    # Split trials by stimulus type (column 0: 0 = not aggressive, 1 = aggressive).
    not_agg_data = data[data[:, 0] == 0]
    agg_data = data[data[:, 0] == 1]

    # Count occurrences of each (stimulus, rating) combination; pad with a 0
    # for the rating category that never occurs with the not-aggressive stimulus.
    count_values_1 = torch.tensor(np.append(np.unique(not_agg_data, return_counts=True, axis=0)[1], 0)).float()
    count_values_2 = torch.tensor(np.unique(agg_data, return_counts=True, axis=0)[1]).float()

    # Latent rating probabilities for each stimulus type. Note the pairing:
    # theta1 goes with the not-aggressive counts, theta2 with the aggressive ones.
    theta1 = pyro.sample("theta1", dist.Dirichlet(prior_not_aggressive))
    theta2 = pyro.sample("theta2", dist.Dirichlet(prior_aggressive))

    counts1 = int(count_values_1.sum())
    counts2 = int(count_values_2.sum())

    # Condition the Multinomial likelihoods on the observed counts.
    pyro.sample("likelihood1", dist.Multinomial(counts1, theta1), obs=count_values_1)
    pyro.sample("likelihood2", dist.Multinomial(counts2, theta2), obs=count_values_2)
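For the prediction phase, a model this simple does not actually need MCMC or variational inference: the posterior over each theta is again a Dirichlet, so posterior samples can be drawn directly and used to simulate ratings. A sketch in NumPy, where the prior-plus-count concentrations are hypothetical stand-ins for the values the model above would produce:

```python
import numpy as np

rng = np.random.default_rng(0)

# Posterior concentrations: uniform prior (1, 1) plus counts from the learning
# trials (the counts here are made-up examples).
posterior_agg = np.ones(2) + np.array([6, 4])      # aggressive stimulus
posterior_not_agg = np.ones(2) + np.array([2, 8])  # not-aggressive stimulus

def predict_rating(stimulus_is_aggressive, n_samples=1000):
    """Sample theta from the posterior, then sample one rating per draw;
    return the estimated probability of rating == 1 for this stimulus."""
    conc = posterior_agg if stimulus_is_aggressive else posterior_not_agg
    thetas = rng.dirichlet(conc, size=n_samples)              # posterior draws of theta
    ratings = np.array([rng.choice(2, p=t) for t in thetas])  # posterior-predictive ratings
    return ratings.mean()

p_agg = predict_rating(True)
p_not_agg = predict_rating(False)
```

Each call integrates over the posterior uncertainty in theta instead of committing to a single point estimate, which is what "sampling from the conditioned distribution" amounts to here.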