Aki prepared these slides which cover a series of topics, starting with notebooks, open code, and reproducibility of code in R and Stan; then simulation-based calibration of algorithms; then model averaging and prediction. Lots to think about here: there are many aspects to reproducible analysis and computation in statistics.

Could anyone give a brief explanation of how computer hardware feeds into the exact reproducibility of Stan analyses, and how and why there are differences in floating point arithmetic that also affect this?

I am reasonably experienced using Stan in practice, and had noticed that different computer systems gave different sample chains for the same Stan version and analysis scripts. However, my knowledge of computer science is fairly limited and I would like to better understand these factors.

Your question has a celebrated history. Here's one answer: https://www.floating-point-gui.de/

Thanks for the link. I have some awareness of the issues surrounding floating point arithmetic, but I don't have an in-depth understanding of the topic, so any reminders are useful. I had assumed that the representation of floating point numbers in C++ is standardised and would not be affected by the platform or hardware used, but it seems that this assumption is not correct.