Assume that I am running the following regression:
\hat{y}_t = \beta_0 + \beta_1 x_t
where I set the following priors: \beta_0 \sim N(0, 1) and \beta_1 \sim \mathrm{Beta}(1, 1).
Let's assume I have a dataset spanning from timestep 0 to t; call this dataset h_t.
Let's also assume I obtain the posterior by some numerical method such as MCMC, and denote the posterior samples of our parameters by \theta.
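For concreteness, here is a minimal sketch of this setup in PyMC. Note that the regression above only specifies the mean, so the Gaussian likelihood with noise scale sigma (and its HalfNormal prior) is an added assumption, and the synthetic data merely stand in for h_t:

```python
import numpy as np
import pymc as pm

# Toy data standing in for h_t (hypothetical; any observed (x, y) series works here).
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 0.3 + 0.6 * x + rng.normal(scale=0.5, size=100)

with pm.Model() as model:
    # Priors as stated: beta0 ~ N(0, 1), beta1 ~ Beta(1, 1)
    beta0 = pm.Normal("beta0", mu=0.0, sigma=1.0)
    beta1 = pm.Beta("beta1", alpha=1.0, beta=1.0)
    # Assumed Gaussian noise scale: the model above only gives the mean,
    # so this likelihood and its prior are additional assumptions.
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    mu = beta0 + beta1 * x
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
    # MCMC draws from the posterior p(theta | h_t)
    idata = pm.sample(1000, tune=1000, random_seed=0)
```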
Now I am interested in obtaining the probability of observing a SPECIFIC outcome z given some new input x_{t+1}, so what I want to compute is the following:
p(y_{t+1} = z \mid h_t, x_{t+1}) = \int p(\theta \mid h_t) \, p(y_{t+1} = z \mid \theta, x_{t+1}) \, d\theta
I believe that this expression is equal to zero since the outcomes are continuous, i.e. p(y_{t+1} = z \mid h_t, x_{t+1}) = 0, so we have to define an interval to make the expression well defined, as follows:
p(y_{t+1} \in z_{\text{interval}} \mid h_t, x_{t+1}) = \int p(\theta \mid h_t) \, p(y_{t+1} \in z_{\text{interval}} \mid \theta, x_{t+1}) \, d\theta
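Continuing the sketch above (same Gaussian-likelihood assumption; the new input x_next and the interval endpoints z_lo, z_hi are hypothetical), the integral can be approximated by averaging the conditional interval probability over the posterior draws:

```python
import numpy as np
from scipy.stats import norm

# Posterior draws of theta = (beta0, beta1, sigma), flattened across chains.
post = idata.posterior
b0 = post["beta0"].values.ravel()
b1 = post["beta1"].values.ravel()
sig = post["sigma"].values.ravel()

# Hypothetical new input and interval [z_lo, z_hi] for y_{t+1}.
x_next = 0.5
z_lo, z_hi = 0.2, 0.8

# Monte Carlo approximation of the integral: average
# P(y_{t+1} in [z_lo, z_hi] | theta, x_{t+1}) over draws theta ~ p(theta | h_t).
mu_next = b0 + b1 * x_next
prob = np.mean(norm.cdf(z_hi, loc=mu_next, scale=sig)
               - norm.cdf(z_lo, loc=mu_next, scale=sig))
print(prob)
```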
Is this line of reasoning correct?