Instead of calculating P(θ|y) directly, the modern Bayesian approach is to draw random samples from the posterior distribution and approximate that distribution from a large number of samples. With enough samples (often 10,000 or more), a very precise approximation is possible. The approach requires sophisticated computational methods such as Markov chain Monte Carlo (MCMC) (Gelman et al. 2013). This approach is now so routine (and automated) that MCMC methods are used even when a closed-form solution is available. Some of the best-known MCMC algorithms available in SAS (and other programs) are Metropolis, Metropolis-Hastings, random-walk Metropolis, independence Metropolis, Gibbs, ARMS, and Gamerman.
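To make the idea concrete, here is a minimal Python sketch (not SAS) of a random-walk Metropolis sampler for a toy problem in which the posterior is also available in closed form, so the sampled approximation can be checked against the exact answer. All names (`log_posterior`, `random_walk_metropolis`, the step size, and the simulated data) are illustrative assumptions, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy problem: y ~ Normal(theta, 1) with a flat prior on theta,
# so the posterior of theta is Normal(mean(y), 1/n) in closed form.
y = rng.normal(loc=2.0, scale=1.0, size=50)

def log_posterior(theta):
    # log P(theta | y) up to an additive constant (flat prior).
    return -0.5 * np.sum((y - theta) ** 2)

def random_walk_metropolis(n_samples, step=0.5, theta0=0.0):
    samples = np.empty(n_samples)
    theta = theta0
    lp = log_posterior(theta)
    for i in range(n_samples):
        proposal = theta + rng.normal(scale=step)
        lp_prop = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio),
        # computed on the log scale for numerical stability.
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        samples[i] = theta
    return samples

draws = random_walk_metropolis(10_000)
burned = draws[2_000:]  # discard early ("burn-in") draws

# The mean of the retained draws should closely match the
# closed-form posterior mean, which here is simply mean(y).
print(burned.mean(), y.mean())
```

The posterior mean estimated from the retained samples agrees with the exact posterior mean to within Monte Carlo error, which is the sense in which a large sample "approximates" the posterior.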
Whatever MCMC algorithm is used, it must actually sample from the true posterior for the output of the analysis to be valid, and this must be assessed for each analysis. The first several thousand samples generated may not correspond to the true posterior (i.e., the proposal distribution may not yet be similar enough to the actual posterior); when the algorithm is working correctly, subsequent samples move towards the true distribution. When a Markov chain has reached its stationary distribution, the algorithm is said to have converged to the posterior distribution.
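The effect of a non-stationary start can be illustrated without a full sampler: the Python sketch below (an illustrative assumption, not from the original text) runs a simple AR(1) Markov chain from a deliberately bad starting value, so the early draws reflect the starting point rather than the stationary distribution, while later draws settle around the stationary mean of zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1) Markov chain x_t = rho * x_{t-1} + noise, whose stationary
# distribution is Normal(0, 1 / (1 - rho^2)).
rho = 0.9
x = 50.0  # deliberately far from the stationary distribution
draws = np.empty(5_000)
for i in range(5_000):
    x = rho * x + rng.normal()
    draws[i] = x

# Early draws still carry the influence of the starting value;
# later draws hover around the stationary mean of 0.
print(draws[0])             # still close to the bad start
print(draws[2_000:].mean()) # near 0 once the chain is stationary
```

This is exactly why early MCMC samples are discarded as burn-in: averaging over them would bias the posterior summaries toward the arbitrary starting value.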
Convergence of the algorithm is very important because only after convergence are inferences expected to be accurate. There are several graphical and analytical diagnostic methods for determining whether the algorithm has converged (see Bayesian Analysis with SAS).
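One widely used analytical diagnostic is the Gelman-Rubin potential scale reduction factor (R-hat), which compares within-chain and between-chain variability across several chains run from different starting points. The Python sketch below is an illustrative implementation of the basic (unsplit) statistic, with simulated "chains" standing in for real MCMC output; it is not taken from the original text or from SAS.

```python
import numpy as np

rng = np.random.default_rng(0)

def gelman_rubin(chains):
    """Basic Gelman-Rubin R-hat for m chains of equal length n
    (rows = chains). Values near 1 suggest convergence."""
    chains = np.asarray(chains)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_hat / W)

# Four chains that sample the same distribution: R-hat is near 1.
good = rng.normal(size=(4, 1_000))

# Four chains stuck in different regions: R-hat is well above 1,
# signalling that the chains have not mixed.
bad = good + np.array([[0.0], [3.0], [6.0], [9.0]])

print(gelman_rubin(good), gelman_rubin(bad))
```

In practice, R-hat is examined alongside graphical diagnostics such as trace plots, since no single statistic can guarantee convergence.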