Asymptotic normal distribution of Bayesian inferences

A colleague writes with a statistics question:

I want to make a point about a Bayesian interpretation of confidence intervals. I recall that there is a result (the Bernstein–von Mises theorem?) that links uninformative prior distributions to normal posterior distributions with variance equal to the inverse of the Fisher information. I want to say something like: if you have little or no prior knowledge, your regression estimates and standard errors can be translated into a normal posterior distribution with mean equal to the regression estimate and variance equal to the estimated variance. Is this on the right track?
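Here's a minimal sketch of that idea in Python, using a binomial proportion rather than a regression just to keep it self-contained (the sample size and count are made-up numbers for illustration). With a flat prior the posterior is exactly Beta(y+1, n−y+1), and for large n it comes close to a normal distribution centered at the MLE with variance equal to the inverse Fisher information:

```python
import numpy as np
from scipy import stats

# Illustrative data: n trials, y successes (arbitrary numbers)
n, y = 200, 140
theta_hat = y / n                               # MLE of the proportion
se = np.sqrt(theta_hat * (1 - theta_hat) / n)   # sqrt of inverse Fisher information

exact_posterior = stats.beta(y + 1, n - y + 1)  # posterior under a uniform prior
normal_approx = stats.norm(theta_hat, se)       # Bernstein-von Mises approximation

# Compare quantiles of the exact posterior to the normal approximation;
# they should agree closely at this sample size.
for q in (0.025, 0.5, 0.975):
    print(f"q={q:5.3f}  exact {exact_posterior.ppf(q):.4f}"
          f"  normal {normal_approx.ppf(q):.4f}")
```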

My reply: Yes, this is discussed in Chapter 4 of Bayesian Data Analysis, with some technical details in Appendix B of that book. One fun thing we did in Chapter 4 was to come up with lots of counterexamples: the theory is basically true, but there are issues with estimates on the boundary of the parameter space, unbounded likelihoods, and various other pathologies.
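To give a flavor of the boundary problem, here's a sketch in the same binomial setup as above (again with made-up numbers, not an example from the book). With zero observed successes the MLE sits on the boundary, the plug-in standard error collapses to zero, and the normal approximation degenerates, while the posterior under a flat prior remains a perfectly sensible skewed distribution:

```python
import numpy as np
from scipy import stats

n, y = 50, 0                                    # no successes observed
theta_hat = y / n                               # MLE = 0, on the boundary
se = np.sqrt(theta_hat * (1 - theta_hat) / n)   # 0: the normal approximation is degenerate

posterior = stats.beta(y + 1, n - y + 1)        # Beta(1, 51): still a proper posterior
print("plug-in standard error:", se)
print("posterior 95% interval:",
      posterior.ppf(0.025), posterior.ppf(0.975))
```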