I put this in the Prior Choice Recommendations wiki a while ago:

“The prior can often only be understood in the context of the likelihood”: http://www.stat.columbia.edu/~gelman/research/published/entropy-19-00555-v2.pdf

Here’s an idea for not getting tripped up with default priors: For each parameter (or other quantity of interest, QOI), compare the posterior sd to the prior sd. If the posterior sd for any parameter (or QOI) is more than 0.1 times the prior sd, then print out a note: “The prior distribution for this parameter is informative.” Then the user can go back and check that the default prior makes sense for this particular example.
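The check above is simple enough to sketch directly. Here is a minimal, hypothetical implementation (the function name, the dict-based inputs, and the 0.1 default are illustrative choices, not an existing interface): it compares each parameter’s posterior sd against its prior sd and prints the suggested note when the ratio exceeds the threshold.

```python
import numpy as np

def check_prior_informativeness(posterior_draws, prior_sd, threshold=0.1):
    """Flag parameters whose posterior sd exceeds threshold * prior sd.

    posterior_draws: dict mapping parameter name -> 1-D array of posterior draws
    prior_sd:        dict mapping parameter name -> prior standard deviation
    Returns a dict mapping parameter name -> bool (True if flagged).
    """
    flags = {}
    for name, draws in posterior_draws.items():
        post_sd = np.std(draws, ddof=1)  # sample sd of the posterior draws
        informative = bool(post_sd > threshold * prior_sd[name])
        flags[name] = informative
        if informative:
            print(f"The prior distribution for {name!r} is informative.")
    return flags
```

For example, posterior draws with sd around 1 under a normal(0, 5) prior would be flagged (1 > 0.1 × 5), while the same draws under a normal(0, 100) prior would not.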

I’ve not incorporated this particular method into my workflow, but I like the idea and I’d like to study it further. I think this idea, or something like it, could be important.

Sounds good; I’m going to try it out. What about some summary of the prior density evaluated at the posterior samples of θ? You know the classic density plot of prior, likelihood, and posterior that appears in almost every intro Bayes course, talk, website, and book? Where the three don’t overlap much? That always annoys me, because the prior there is a really bad one.
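One way to sketch that summary: evaluate the prior log-density at each posterior draw of θ and report a few quantiles. This is a hypothetical helper, assuming for illustration a normal(prior_mean, prior_sd) prior; if the posterior draws all land far in the prior’s tails (very negative log-densities), that is exactly the no-overlap pathology in the classic plot.

```python
import numpy as np

def prior_logdensity_summary(theta_draws, prior_mean, prior_sd):
    """Summarize the normal(prior_mean, prior_sd) log-density at posterior draws.

    theta_draws: 1-D array of posterior draws of theta
    Returns mean, min, and max of the prior log-density over the draws.
    """
    z = (theta_draws - prior_mean) / prior_sd
    # log of the normal density: -z^2/2 - log(sd) - log(2*pi)/2
    logp = -0.5 * z**2 - np.log(prior_sd) - 0.5 * np.log(2 * np.pi)
    return {"mean": logp.mean(), "min": logp.min(), "max": logp.max()}
```

A large gap between these values and the prior’s log-density at its own mean would signal prior–posterior conflict worth flagging.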