A new idea for a science core course based entirely on computer simulation

I happened to come across this post from 2011 that I like so much that I thought I’d say it again:

Columbia College has for many years had a Core Curriculum, in which students read classics such as Plato (in translation) etc. A few years ago they created a Science core course. There was always some confusion about this idea: On the one hand, how much would college freshmen really learn about science by reading the classic writings of Galileo, Laplace, Darwin, Einstein, etc.? And they certainly wouldn’t get much out of puzzling over the latest issues of Nature, Cell, and Physical Review Letters. On the other hand, what’s the point of having them read Dawkins, Gould, or even Brian Greene? These sorts of popularizations give you a sense of modern science (even to the extent of conveying some of the debates in these fields), but reading them might not give the same intellectual engagement that you’d get from wrestling with the Bible or Shakespeare.

I have a different idea. What about structuring the entire course around computer programming and simulation? Start with a few weeks teaching the students some programming language that can do simulation and graphics. (R is a little clunky and Matlab is not open-source. Maybe Python?)

After the warm-up, students can program simulations each week:
– Physics: simulation of bouncing billiard balls, atomic decay, etc.
– Chemistry: simulation of chemical reactions, with graphs of the concentrations of the different chemicals over time as the reaction proceeds
– Biology: evolution and natural selection
And so forth.
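
To give a flavor of the level I have in mind, here is a minimal sketch of the atomic-decay example, in Python (one of the candidate languages): each atom independently decays with a fixed probability per time step, and the survivor count is compared with the exponential-decay law. The number of atoms and the decay probability are made up purely for illustration.

```python
import numpy as np

# Minimal sketch of the "atomic decay" simulation: n0 atoms, each with a
# fixed probability p of decaying in any given time step (values are
# arbitrary, chosen just for illustration).
rng = np.random.default_rng(1)
n0, p, steps = 10_000, 0.05, 60

alive = n0
surviving = [alive]
for t in range(steps):
    decays = rng.binomial(alive, p)   # how many of the remaining atoms decay this step
    alive -= decays
    surviving.append(alive)

# Compare the simulation with the deterministic law N(t) = N0 * (1 - p)^t
for t in range(0, steps + 1, 10):
    print(t, surviving[t], round(n0 * (1 - p) ** t))
```

Students would then plot the two curves, change p and n0, and see how the stochastic run fluctuates around the smooth law.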

There could be lecture material connecting these simulations with relevant scientific models. This could be great!

P.S. For a few years Columbia College had a science core course, which was described like this:

On Mondays throughout the semester, four [science faculty] present mini-series of three lectures each. During the rest of the week, senior faculty and Columbia post-doctoral science fellows lead seminars to discuss the lecture and its associated readings, plan and conduct experiments, and debate the implications of the most recent scientific discoveries.

Seems a bit bogus to me. What kind of experiments are these students really going to plan and conduct? And, sure, debate the implications of the most recent scientific discoveries all you want, but I don’t think you’ll learn so much about science that way.

I much prefer my idea of the computer simulation course. And, just to be clear, it’s not a computing course (although, sure, the students will learn a lot of computing), it’s a science course. On the computer they’d be simulating real scientific phenomena such as gravity, evolution, etc. That would be important. I’m not talking about a statistics or machine learning course, valuable as that would be on its own terms. The computing here is a means to getting a better understanding of science. I’d bet a student will learn a lot more about evolution, chemical reactions, entropy, etc., by simulating these processes than by running live experiments, reading old books and articles by famous scientists, or seeing lots of math.

83 thoughts on “A new idea for a science core course based entirely on computer simulation”

    • Octave is great if you have some Matlab code you need to run on a machine that doesn’t have Matlab installed.

      However, I think students would get more benefit from learning Python over Octave/Matlab.

      • My impression having tried all three is that with Python the language overhead gets in the way more. At least during the learning phase. And if the users aren’t programming savvy.

        Matlab / Octave slink more easily out of the way & let you focus on the math / science.

        Of course, for a large project, Python wins any day.

      • I have no experience with Python, but I concur that Matlab is great for letting you focus on the underlying math and science. I’ve never used Octave, but I have colleagues who do. Other than not having access to the Matlab toolboxes, it seems a straightforward substitute for Matlab.

        While I don’t do Python, I have a couple of reports who do. My impression is that if my time horizon were long term, then Python is the better investment, but if my goal is to get results quickly out of the box, then Matlab is probably the better choice.

  1. What I really WANT to do is recommend Julia, but I haven’t used it enough to know whether it’s really ready for this kind of thing. How is the plotting and ODE solving? (https://github.com/JuliaLang/Sundials.jl looks promising, as does Gadfly)

    In the choice between Python and R, for *science* and simulation either one would work well, but I’d have to say R is a little more oriented towards the kinds of things that you want to do in science (solving numerical things, generating plots, etc).

    I’d also recommend considering NetLogo to do agent based models of things like biological/ecological phenomena. And then, you could even connect the agent based (microstructural) approach to the ODE (statistical averaging) approach.
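
For a flavor of that micro-to-macro connection, here is a toy sketch (plain Python rather than NetLogo, with made-up birth and death rates): an agent-level birth-death process whose average behavior tracks the logistic ODE dN/dt = (b - d)*N - c*N^2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Agent-based birth-death process: each individual reproduces with probability
# b*dt and dies with probability (d + c*N)*dt per step (rates invented for
# illustration). The ensemble average tracks the logistic ODE
# dN/dt = (b - d)*N - c*N^2, with carrying capacity (b - d)/c = 400.
b, d, c, dt = 0.5, 0.1, 0.001, 0.1
N = 10
history = [N]
for step in range(500):
    births = rng.binomial(N, b * dt)
    deaths = rng.binomial(N, min(1.0, (d + c * N) * dt))
    N = max(N + births - deaths, 0)
    history.append(N)

print(history[::50])   # rises toward the logistic carrying capacity of 400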

    I’d honestly LOVE to develop and teach this kind of course.

    • But, I also think it would be hard, especially at the incoming-undergraduate level. Even if you assume fairly hard-core, science-oriented high school students, to do a great job of this course you’d need some mechanics, chemistry, ODE, probability, and computer science concepts that even the high-end high school student would have only a little of. The median incoming freshman would struggle heavily.

      If you could decouple it all from grades, so that the students could struggle, fail, and learn a lot without much risk, you might have a chance. But then, it would take up time that they need for other courses.

      Also, I think the lack of career credit for being a high-end educator (as opposed to a researcher bringing in grants and publishing papers, etc.) makes it implausible for anyone but a near-retirement or emeritus professor to do. But then, they are probably not that familiar with the modern computing technology that you’d want to use; more likely they’d know FORTRAN than Julia or R or Python. It might be more plausible as a career move at a community college, but then the students would be on average even less prepared.

      Any thoughts on how to overcome these obstacles?

    • To elaborate on some issues I’ve noticed, at USC as a grad student TA I taught their intro programming for Engineers lab (not the lecture). It used Matlab. As part of it I developed a few modules, one of which was to model the flight of a spherical ball through a fluid.

      We used an empirical curve fit to the drag coefficient data on a sphere that I found online, then numerically integrated the ODEs. Once we had a numerical integrator, we’d then do things like take only a few data points from the output of the ODE and fit a parabola through them using least squares and try to find the point in time where the ball reached its maximum height and what the maximum height was. We’d compare that fit to the detailed data (at closely spaced time points) and discuss the concept of a Taylor series. We’d also solve for the x distance where the ball reached zero height (the flight distance, or range). Then we’d do things like search through angles to get the maximum range (with drag it’s not 45 degrees like in the drag free parabolic arc).
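
      Roughly, in Python that exercise looks like the sketch below (the lab itself used Matlab’s ode45; scipy’s solve_ivp plays the same role here, and I’ve swapped the empirical Cd(Re) curve fit for a constant drag coefficient to keep the sketch short, so all the numbers are only illustrative).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Projectile with quadratic air drag. A constant drag coefficient stands in
# for the empirical Cd(Re) curve fit used in the original lab exercise.
rho_air, Cd, d, m, g = 1.2, 0.47, 0.067, 0.058, 9.81   # tennis-ball-ish values
A = np.pi * (d / 2) ** 2
k = 0.5 * rho_air * Cd * A / m

def rhs(t, s):
    x, y, vx, vy = s
    v = np.hypot(vx, vy)
    return [vx, vy, -k * v * vx, -g - k * v * vy]

def hit_ground(t, s):           # event: ball returns to ground level
    return s[1]
hit_ground.terminal = True
hit_ground.direction = -1

def flight_range(angle_deg, speed=30.0):
    th = np.radians(angle_deg)
    y0 = [0.0, 1.0, speed * np.cos(th), speed * np.sin(th)]   # launched from 1 m up
    sol = solve_ivp(rhs, [0, 30], y0, events=hit_ground, max_step=0.01)
    return sol.y[0, -1]         # x position when the ball reaches the ground

# Search launch angles for maximum range; with drag the optimum is below 45 degrees.
angles = np.arange(20, 61)
ranges = [flight_range(a) for a in angles]
print("best angle:", angles[int(np.argmax(ranges))], "deg,",
      "range:", round(max(ranges), 1), "m")
```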

      We’d also do things like compare different regimes of behavior: a BB shot at 50 m/s through air, then through water; a cannonball; a tennis ball; a smooth golf-ball-sized ball. We could have added things like a headwind, a tailwind, etc. I had questions about why Ptolemaic ideas of impetus might have arisen (the path of a fairly light ball through air is actually pretty well approximated by an initial straight line going up, a little curved peak at the top, and then a straight fall downwards; the Ptolemaic trajectory is more realistic than the Newtonian trajectory without air resistance).

      I thought it was a really cool experiment, and actually these freshmen could in fact code it up with a little help. It wasn’t more difficult than some of the other labs thanks to the ode45 function in Matlab, and other similar utility functions (linear regression to fit the parabola, etc). But, it had problems. The course didn’t build on this idea. Understanding this little piece really well didn’t seem “important” and then the rest of the course was all about things like indexing arrays, or writing loops, or doing numerical derivatives, in isolation from this kind of project. A sort of “purposeless” learning of computer techniques. The engineering students then didn’t use any of it again until their 4th year when they took a computer based dynamics of structures type course. By then they’d lost all those skills.

      We tried to integrate a little of it into the statics course and the dynamics course. I set up a few design problems, like, in dynamics, designing a crash-impact buffer to ensure that a structure was protected from a car crashing into it while at the same time minimizing the chance of injury to the driver. But because these were not core parts of the course, spending time on them took time away from solving the standard by-hand textbook problems that show up on midterms in this kind of course. It felt to the students like they were wasting their time because it might actually hurt their grade.

      So, one lesson in isolation of course doesn’t really help advance things. But I think even one *course* in isolation is problematic. Once you’ve done some computer-based mechanics simulation, and then you sit through a typical lecture course on freshman physics with balls rolling down ramps and sliding blocks and air-hockey pucks… the disconnect between the structures of the courses is a real problem.

      • Well, Matlab was already ruled out based on its license/expense. Octave isn’t a terrible idea, and in some ways it’s actually a better language than Matlab, as it has extensions so that, for example, you don’t need to put one function per file. But my impression is that Octave is maybe a bit more limited than R, especially in the graphics department.

        As I say, I’d like to recommend Julia, it certainly seems to be going in a good direction. But I haven’t personally used it enough.

  2. Another possibility is to structure a course around the process of scientific discovery: go through the history and discuss what led scientists to their current conclusions. A major problem I see with much science education is that the material taught can easily be perceived as just another body of dogma. I’d love to see a course that focused on the process of science and why scientists believe what they believe. Start with the evidence and reasoning that led the ancient Greeks to conclude that the Earth is round, let them calculate its diameter, even let them calculate the distance and size of the moon and determine that the sun is much larger than the Earth. Talk about the difficulties in settling the geocentric vs. heliocentric debate (the evidence was still ambiguous in Galileo’s time) and what finally clinched it. How we gradually came to realize that chemistry lies at the foundation of biology, e.g. finding that the same gas (CO2) produced in combustion is also exhaled by animals. And so on. Lots of great detective stories here.
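
    For instance, the Eratosthenes estimate is a few lines of arithmetic (a sketch with the usual rounded modern-unit figures, just to illustrate):

```python
import math

# Eratosthenes-style estimate: the Sun is overhead at Syene while casting a
# 7.2-degree shadow at Alexandria, roughly 800 km away (rounded modern figures).
shadow_angle_deg = 7.2
syene_to_alexandria_km = 800

circumference_km = syene_to_alexandria_km * 360 / shadow_angle_deg
diameter_km = circumference_km / math.pi
print(round(circumference_km), round(diameter_km))   # ~40000 km, ~12700 km
```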

      • Why must we go on forever teaching only about geniuses and their (apparent) incisive insights/discoveries?

        Geniuses so smart, they sound as if they sneeze well-formed solutions?

        Which one of us would rather solve a minor but new problem live, in front of our students, than teach them to regurgitate those worn-out and over-hyped past insights/discoveries of Copernicus, Newton, and Watson/Crick?

        http://www.minioneers.com/amadeus-movie-quotes-1984/

        • Who said anything about only teaching that? No-one.

          Faced with the problem of convincing students that statistics (and science) is a bit more complicated than they at first think – not least because of the many turn-the-handle textbook presentations out there – surely there’s a place for a few examples of creative thought. No?

    • +1000. How does one build a strategy for asking and then answering a question in light of current evidence and available (or nearly available) tools?

      but also +1 to Keith that the design should be careful to de-emphasize strokes of genius. Of course, a well-done course would be all about demystifying these as the result of careful observation and thinking.

  3. I think it’s a great idea.

    Suggest the word emulate rather than simulate, to focus on what’s importantly being done here – something being represented in silico, for some purpose, that needs to resemble that something well enough – successfully emulate it – so that something can be learned about the something being represented.

    That something can be from the physical world (start there).

    Other somethings can be abstract creations (diagrams, calculus, grammars, etc.) (second somethings to do).

    Harder somethings can be reasoning creations such as statistics (very last somethings to do).

    Note how things can be represented (possibilities), how representations are constrained (actualities of their reality), and how one should make sense of the somethings being represented given the apparent success of the emulation (interpretation/inference).

    Have fun!

  5. Strongly agree with Van Horn. What is important in a core course is to impart an approach to the world about us. A science/modelling core course should be of value to all comers. Just as the value in a core course in history is to learn a historical way of viewing issues and not names, dates, and places, so the value in a science/modelling core course should lie in learning to employ the scientific method.

    • I would agree if only the stories were realistic.

      Thanks to the link from Jonathon: “Studies of scientists building models show that the development of scientific models involves a great deal of subjectivity. However, science as experienced in school settings typically emphasizes an overly objective and rationalistic view. In this paper, we argue for focusing on the development of disciplined interpretation as an epistemic and representational practice that progressively deepens students’ computational modeling in science by valuing, rather than deemphasizing, the subjective nature of the experience of modeling. We report results from a study in which fourth grade children engaged in computational modeling throughout the academic year” https://docs.google.com/a/agentbasedphysics.net/viewer?a=v&pid=sites&srcid=bTNsYWIub3JnfHd3d3xneDo3MTFjNjhiNGYzZjU1Yzcz

        • Maybe you are thinking of the “Classical Mechanics: A Computational Approach” class by Gerald Jay Sussman and Jack Wisdom?
          There is also a book “Structure and Interpretation of Classical Mechanics.”
          (I sent a comment with links which is awaiting moderation, so I’m sending this link-less comment now.)

        • “Structure and Interpretation of Classical Mechanics” is rather awesome, but my take is that it’s pretty heavyweight stuff, and not really in the category I think Andrew is talking about:-)

          It *starts* with Lagrangian mechanics and – as the preface explains – the intent of the programming angle is partly about enforcing precision in place of all the implicit notational shortcuts in standard presentations.

          For anyone interested, the first edition is available as a free online pdf.

      • Also from the MIT AI group, there’s one by Abelson and diSessa that is worth looking at:

        Turtle Geometry: The Computer as a Medium for Exploring Mathematics
        https://mitpress.mit.edu/books/turtle-geometry

        It aims to teach geometry, not physics or other science. However, it has chapters that teach about curved spacetime and concepts from general relativity. It targets high school and college students. I’m citing it more as an example of simulation-based instruction than as a source of specific content for a simulation science course. In digging up the reference, I stumbled upon two interesting reviews:

        Fun and learning with a computer (Christian Science Monitor, 1981)
        http://www.csmonitor.com/1981/0513/051304.html

        “This way of coming at geometry emphasizes procedures rather than equations. And, since computer programs often produce surprises even for their programmers, studying geometry becomes a voyage of discovery.” And, quoting the authors, “The computer will have a profound impact on our educational system, but whether or not it will enrich the lives of students will depend upon our insight and our imagination.”

        AcaWiki summary
        http://acawiki.org/Turtle_geometry:_The_computer_as_a_medium_for_exploring

        “…it is focused on procedures rather than equations…”

        “It is a brilliant and clear example of the power of Logo and constructionism more generally. It shows — proves even — that the simple ideas at the heart of Logo can be scaled (quickly) to dizzying complex subjects. It is a template for what a constructionist guidebook should look like. Published in the early 1980’s, the book may very well have simply been ahead of its time.”

        Browse: Turtle Geometry at Google Books

        More in line with the proposed topic is Steven Strogatz’s book:

        Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering
        http://www.stevenstrogatz.com/books/nonlinear-dynamics-and-chaos-with-applications-to-physics-biology-chemistry-and-engineering

        Definitely more advanced than what we’re talking about here, yet it’s full of examples of complex phenomena that are easier to simulate than to study analytically. It’s been a while since I looked at it, but as I recall quite a few examples would not be hard for undergrads to simulate. I’m citing it for examples, not as an actual text for such a course.

  5. It is a great idea, and the description is not significantly different from that of courses we already have, e.g.,
    APAM E1601y Introduction to computational mathematics and physics
    APAM E3105x Programming methods for scientists and engineers
    APMA E4302x Methods in Computational Science
    APMA E4400y Introduction to Biophysical Modeling
    http://apam.columbia.edu/apam-courses

    • Jim:

      These are close but not quite there, for my taste:

      E1601 seems a bit too much of a methods course (it includes “curve-fitting and hypothesis testing”) and also seems a bit too physics-focused for a general course. But I guess it could be adapted to include other fields and be a good core course.

      E3105 uses Fortran 90. Also it’s all math and physics. It doesn’t seem focused on simulation at all.

      E4302 seems too much like a CS course: “Topics include but are not limited to basic knowledge of UNIX shells, version control systems, reproducibility, Open MP, MPI, and many-core technologies.” It’s not about simulating science.

      E4400 seems closer (although limited to biology) but it seems more about models than simulation.

      Also, 3 of the above courses are listed as not being offered next year!

      Anyway, I’m looking for a course that is laser-focused on simulation. Just enough programming to get them able to run simulations and graph the results; just enough science to get them to figure out what to be simulating. No data analysis, no hypothesis testing, none of that!

      You did, however, convince me that it might make sense for this course to be organized by the Applied Physics and Applied Mathematics Department.

  6. >>> they certainly wouldn’t get much out of puzzling over the latest issues of Nature, Cell, and Physical Review Letters. <<<

    I used to read the commentary in Nature / Science (not the actual articles) as an undergrad. I think that was fun & I learnt a lot.

  7. I would love to see a course on inference from data. It’s not “just” a stats course – actually, stats courses rarely if ever touch on these issues. Something along the lines of Royall’s Statistical Evidence: A Likelihood Paradigm or the more accessible Dienes’ Understanding Psychology as a Science.

  8. I don’t think language choice per se is the key issue, as much as providing wrappers that take care of overhead, like I/O, graphics, sliders/buttons, etc … so students can focus on experimenting with algorithms that relate to the science.
    As usual, use the highest level language that is feasible.

    About 45 years ago, I wrote IBM S/360 assembler macros for I/O so students didn’t have to learn that to get started.
    A better example was about 20 years ago: I sat in on a class in a first course in computing taught by Princeton civil engineering, which was very popular.
    It used SGI workstations, and all the programming assignments had graphical output, with all the machinery set up.

  9. I’d worry about simulation classes. One of the issues that I think is currently plaguing science is armchair modelers who believe their often-grossly-simplified simulations. I’ve gotten into arguments with people who believe that when they run several of their simulations with different initializations, the resulting spaghetti graph defines the variability of nature.

    Simulation is very useful as a part of statistics. Heck, it’s the foundation of current Bayesian statistics. But having undergraduates simulating nature and believing that their simulations have any value is a disaster waiting to happen. It’s a step backward to believing that brains work like neural networks or evolution works like Genetic Algorithms.

    • I agree with this. Simulating models of a problem is a very hard way to study them, and even people who professionally use the method frequently screw it up. I’d much rather the simulations were pre-existing, and the students were asked to design studies to measure the simulations. Alternative ways of understanding science include a statistics or philosophy of science course. A generalized research methods course might make sense as a core course.

      • David:

        Just to be clear: in this course of mine, I’m not suggesting that students design simulations (except possibly at the end of the course as some sort of capstone project); I’m suggesting that they do simulations. Kinda like in physics classes where you don’t expect students to come up with the laws of motion, you just expect them to apply these laws to solve problems.

        • The distinction you appear to be making between designing and doing is that when designing a simulation, a researcher doesn’t know which variables to manipulate or how to relate them to one another, whereas when doing a simulation, the student is being told which variables to manipulate, what values to set them to, and which equations to use. So I suppose I was wrong to make such a comparison between real-world studies and classroom activities.

          In the classroom the student is manipulating a simulation and seeing if the simulation matches up with the real world. I think that experimenting on the real world to see how a dependent variable changes when you manipulate the independent variable is on average a simpler process than modeling a problem and then comparing the model to real world observations. I would much rather students learn how to experiment than that they learn how to simulate and compare.

        • David:

          Lab science is great and I’m not trying to stop anyone from taking a chemistry class or whatever. But my suggested course is supposed to be a replacement for the core science class which was supposed to be covering various big ideas in science. I think simulation can be a good way to learn some of these big ideas. Better than math for most students, and certainly better than reading the writings of great scientists.

        • Andrew:

          What exactly is “doing” a simulation? Does that mean the students get a ready code where they modify parameters & execute for various scenarios?

          Take for example your “simulation of chemical reactions”. What’s “designing” vs “doing” in this context?

        • Rahul:

          I’m not quite sure—I don’t know much chemistry, actually. I’m just going with analogy to physics classes where students learn the laws of motion and then solve problems using these laws. For chemical reactions, I guess they’d be told about various rates of transitions and then they’d have to program up these formulas, run them with different input parameters, graph them, etc.

        • Rahul:

          Asking students to “design a simulation” would be to describe some phenomenon to them (for example, the dissolution of a pill in water, or a simple example of evolution) and ask them to model and program it.

          Asking students to “do a simulation” would be to describe the phenomenon but also give the equations of the model, and ask the students to program these equations, try them out, graph the results, figure out what initial conditions and parameter values cause the model to break, etc.
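
          Something like this, say (a toy sketch with a made-up reversible reaction A <-> B and invented rate constants; the point is just programming the given equations, varying the parameters, and graphing):

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# "Doing" a simulation: the rate equations are given (here a reversible
# reaction A <-> B with first-order rate constants kf and kr, values made up),
# and the student programs them, varies the parameters, and plots the result.
def rates(t, c, kf, kr):
    a, b = c
    return [-kf * a + kr * b, kf * a - kr * b]

t_eval = np.linspace(0, 10, 200)
for kf, kr in [(1.0, 0.5), (2.0, 0.5), (1.0, 2.0)]:
    sol = solve_ivp(rates, [0, 10], [1.0, 0.0], args=(kf, kr), t_eval=t_eval)
    plt.plot(sol.t, sol.y[0], label=f"[A], kf={kf}, kr={kr}")
    plt.plot(sol.t, sol.y[1], "--", label=f"[B], kf={kf}, kr={kr}")

plt.xlabel("time")
plt.ylabel("concentration")
plt.legend()
plt.show()
```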

    • I totally agree that this is a big issue with a course focused on simulation. It does seem outrageously easy to slip from “I modeled this tiny part of the problem” to “I understand how nature works,” for both beginners and seasoned veterans. I’m not sure it’s a total deal breaker, though. Just that a clear head about modeling is something to build carefully into the curriculum.

      I also agree with David that learning from simulations is a tough problem. I had a few simulation modules in some classes. Invariably I only learned much of anything when I already had a good feel for the real system, at which point one can actually come up with interesting predictions and questions. The sensitivity of most simulations to initial conditions really limits the ability to just tool around and learn, unless you already understand something about what the different conditions and variables really mean.

      Worse, without knowing the science already, it’s usually not clear whether results from the model are annoying artifacts or reflections of reality (see e.g. http://biology.stackexchange.com/q/410/72). The way around that, of course, is to 1) know the underlying science pretty well, 2) know the model, and 3) know the implementation. I completely agree that simulation is an awesome learning tool when you’ve got all three elements. My impression is that you are proposing (3) as the main objective of the course and then working backward to (2) and, eventually, (1). Practically, starting from ground zero with a freshman, how many times could you actually build back to the real science in the course of a semester?

  10. By coincidence, I’ve recently been contemplating a project somewhat along these lines – a molecular dynamics simulation of the greenhouse effect.

    This would be a qualitative simulation, with hard-sphere molecules bouncing off each other, perhaps with a spherical halo of attractive influence. Simulation would be event-driven and exact up to round-off error. The tricky part is putting in the radiation absorption and emission. I’d like to reproduce the blackbody spectrum and T^4 emission scaling without incorporating the full apparatus of quantum field theory. One would of course have to simulate gravity, but the potential can just rise linearly, without treating the Earth as spherical. A big uncertainty with regard to feasibility is how many molecules one needs to emulate basic phenomena at a qualitatively correct level.
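
    For what it’s worth, the event-driven core is just the time to the next hard-sphere contact; here is a minimal sketch in Python, leaving out the radiation and boundary machinery, which is the hard part:

```python
import numpy as np

def pair_collision_time(r1, v1, r2, v2, radius_sum):
    """Time until two hard spheres touch, or None if they never do.

    Solves |(r2 - r1) + (v2 - v1) t| = radius_sum for the smallest positive
    root. Uniform gravity cancels out of the relative motion, so this also
    works in the falling-column setting; collisions with the ground would
    need the parabolic free flight instead.
    """
    dr = np.asarray(r2, float) - np.asarray(r1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    b = dr @ dv
    if b >= 0:                       # not approaching each other
        return None
    a = dv @ dv
    c = dr @ dr - radius_sum**2
    disc = b * b - a * c
    if disc < 0:                     # closest approach misses
        return None
    return (-b - np.sqrt(disc)) / a
```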

    What’s the point? Aside from fun and education, I hope it would eliminate several extremes of the global warming debate, producing something more productive. There are some who claim that any degree of greenhouse gas warming is impossible because it violates thermodynamics. It doesn’t, but convincing someone of this by words may be hopeless. Others claim that it is all very simple, and skeptics are denying hundred-year-old science. This is also not correct. And finally, one gets to the crucial question of the magnitude of the effect, and of any positive or negative feedbacks that modify this magnitude. I’d hope to qualitatively simulate at least some of these feedbacks. It’s even conceivable that such a project, with wildly artificial molecules, might show effects that have not been previously investigated with respect to the real atmosphere.

    The final element to this probably quixotic project is that I’m thinking of writing it in R, which might seem totally crazy just from the performance viewpoint. We’ll see…. (maybe)

    • Radford:

      If there are really people who “claim that any degree of greenhouse gas warming is impossible because it violates thermodynamics”—and I’ll take your word that there are—I doubt any will be persuaded by a molecular dynamics simulation!

      • > persuaded by a molecular dynamics simulation!
        Agree; it’s easy to underestimate the percentage of folks who have difficulty understanding abstract concepts and who don’t benefit much from a physical model that represents/demonstrates the same concepts.
        (When David Brenner told me this many years ago, I dismissed it.)

      • > people who “claim that any degree of greenhouse gas warming is impossible because it violates thermodynamics”

        Personally, I would file those cases under “Never try to teach a pig to sing. It wastes your time and annoys the pig.”

    • Take a look at LAMMPS http://lammps.sandia.gov/

      I’m not sure molecular dynamics is the right way to think about this though. You’re going to have to model radiation as special molecules (photons) I’d think.

      Certainly if I were thinking of doing something like this and not going to use LAMMPS, I’d look hard at Julia which will compile to machine code!

      Rather than modeling the earth, I’d suggest modeling a column of air with periodic boundaries on the sides. Model day and night as a fluctuating radiation source at the top. But it’s a challenging problem!

      • Yes, I had in mind a column of air above a surface, with wrap-around at the sides. It needs to go high enough that the air has thinned out to the point of being essentially transparent to IR by the top.

        Modelling day and night wouldn’t be essential for a first cut. (I think the dynamics to reach steady-state in this simplistic model will be much faster than a “day” – though that’s a bit of a meaningless statement since with the artificial molecules there’s no direct connection to actual real time.) But it would be interesting to add.

        • What effect are you trying to capture? I imagine two types of molecules, those that are transparent to IR radiation, and those that absorb it. Begin with 1 million molecules of which only 2/1000 are absorbing. Add a hard ground layer that absorbs visible radiation and emits IR radiation. Bombard with visible radiation. Slowly create molecules that absorb IR near the ground surface, and plot the temperature of the air through time, and the temperature of the air vs fraction of molecules that are IR absorbing. I guess you also need to figure out re-emission. So you’re going to have all molecules emit IR photons at rate proportional to their kinetic energy^4 and with momentum and energy conserved.

          The emission and reabsorption part may not be possible with LAMMPS.

          If you’re going to do around say 1M molecules, you’ll probably need billions of photons to have it make any sense? That’s going to be hugely expensive computation wise.

          You might consider whether you can upscale the whole thing by having “molecules” represent averages over “packets” of air. Make absorption a probabilistic thing based on proportion of absorbing molecules in each packet, and imagine the speed of light being so fast relative to the velocity of air packets that you essentially do one time step of the air packets, then one “fast” timestep of absorption and reemission (don’t actually simulate the flight of photons, just iterate over all the pairs of packets and decide how much any two of them exchange energy.)

        • I was indeed thinking that one might get away with regarding the speed of light as infinite, so an emission event immediately produces either absorption by another molecule, or escape from the system. This also eliminates the need to adjust the momentum of molecules on emission/absorption, since in this limit the photons have zero momentum (even though they have non-zero energy). Of course, the photon energy has to get transferred to kinetic energy somehow. I have in mind that this happens when molecules with energy from absorption collide.

          I think representing packets rather than molecules would miss the point, since it would not be clear whether such a scheme was actually correct, and even if correct, it might be non-educational.

          I’m hoping to manage with less than a million molecules. Keep in mind that I’m not trying to reproduce reality quantitatively, only qualitatively.

    • >”without treating the Earth as spherical.”

      I would look into the sensitivity to that assumption. The assumption seems to be pervasive, yet little is published about the sensitivity. There was some discussion on this blog about it awhile back.

  11. Julia or Python both sound good. I just finished teaching a matrix algebra class using Julia and I liked it a lot. I used Jupyter on Sage Math Cloud (cloud.sagemath.com).

    Danny Caballero at Michigan State runs a simulation based physics course. If I remember correctly they use Python.

    • I agree that Python and Julia would be first choice. I think it would be a shame to train a whole school’s worth of students on MATLAB given the good alternatives and the weaknesses of MATLAB as a language and prevailing practice in the community.

      As a side note, I notice some implicit Fortran bashing. A year or so ago I ended up rewriting a neuron simulation from Python to Fortran, and discovered that the modern Fortran variants are really wonderful for expressing simulations. The explicit no-magic code style that Fortran encourages is well-suited to a class that is about learning how to simulate. Of course, the Fortran environment is a bad choice, but I’d try to import the coding style (explicit declaration of input/output variables, writing many small functions, arrays as the basic data structure) into whatever environment the class used.

      All that said, I agree with others that the language is probably less important than a lot of other questions about how to structure the course.

    • I think Radford’s point is to somehow make it more convincing by showing exactly how the molecules behave. But, as soon as you’re doing “hard shell” molecules and approximating radiation by some hand-waving, it’s not clear that the realism is still there. Specifically it’s not clear that it’s more realistic than a coarse grained method.

      One thing that might work well is actually to do a kind of lattice boltzmann method. In one time step, the particles at a lattice point stream around altering the statistical distributions at the points. Then, you do a radiation step where you radiate out photons along the lattices calculating absorption and re-emission based on the composition of the molecules at each point, then you repeat. Also since you can stream the photons out symmetrically in all directions, you can automatically take care of the momentum transport by photons issue.

      It’s still very much in tune with the thermodynamic issues, but it’s coarse grained enough to be tractable.
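
      For anyone wondering what “stream and collide” looks like in code, here is a bare-bones D2Q9 lattice Boltzmann sketch in Python (fluid part only, with arbitrary numbers; the radiation step described above would have to be bolted on separately):

```python
import numpy as np

# Bare-bones D2Q9 lattice Boltzmann (BGK collision + streaming), fluid only.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # lattice velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)            # lattice weights
nx, ny, tau = 64, 64, 0.8                               # grid size, relaxation time

def equilibrium(rho, ux, uy):
    cu = 3 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - 1.5 * (ux**2 + uy**2))

# Start at rest with a small density bump in the middle; a pressure wave spreads out.
x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
rho0 = 1.0 + 0.05 * np.exp(-((x - nx/2)**2 + (y - ny/2)**2) / 20.0)
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))

for step in range(200):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau           # collide: relax toward equilibrium
    for i in range(9):                                   # stream along lattice directions
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print("mass conserved:", np.isclose(f.sum(), rho0.sum()))
```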

      • Actually the bit about automatically taking care of the momentum transport is maybe a little optimistic. But the basic idea that Radford mentioned of treating the speed of light as infinite is essentially the same idea as streaming photons around the lattices, letting them be absorbed at various points, and then relaxing the newly excited distributions to convert the radiated energy into kinetic energy.

      • Isn’t it a straw-man to fight the position “any degree of greenhouse gas warming is impossible”?

        Do people really have that sort of extreme position? The reasonable debate seems about the specifics of our planetary state.

        And is there any way to map the specifics of our state space to Radford’s model?

        • I kind of imagine that Radford is more interested in understanding some of the nuances than actually attacking some specific denialist position. Kind of a “how simple can the model be and still show the greenhouse effect?” and maybe “are there other interesting effects we haven’t noticed yet?” sort of thing.

        • Daniel:

          I think you’re right. But Radford did write this: “I hope it would eliminate several extremes of the global warming debate, producing something more productive. There are some who claim that any degree of greenhouse gas warming is impossible because it violates thermodynamics. It doesn’t, but convincing someone of this by words may be hopeless.” Which seems to suggest that he thinks this sort of person would be convinced by a computer simulation. That seems doubtful to me.

          It would be like me saying I want to develop Stan so I can convince those pi-deniers out there that the ratio of the circumference of a circle to its diameter is not actually 3.

        • @Daniel

          From the point of view of understanding nuances as well, I’m not sure how an ad hoc MD model is relevant. Aren’t most of the nuances tied to the specifics of our boundary conditions, initial conditions, external forcing functions, chemical species material balances etc.?

          In other words, do you really need a freak combination of conditions to end up with some sort of greenhouse warming? Is it a rare point in parameter space? If not, then I suppose there should be a large set of parameters that should end up producing greenhouse warming like results in a simplistic MD model. But none of which have much of a bearing on our particular GW problem.

        • I think just showing that greenhouse effect of some kind is insensitive to specifics would be interesting. Showing that there’s a reliable relationship between concentration of absorbing molecules and temp would be useful too. At least from an educational POV.

          I do doubt the ease of accomplishing it though. CO2 is at a 2-3 per 1000 concentration. You’d want at least several hundred molecules to have a chance of capturing radiation. That puts you in the realm of 100k to 1M molecules.

          A lattice Boltzmann calc would be a better use of cycles I think. You might even be able to investigate clouds in such a model.

          Why? I guess it’s just as easy to ask why not?

        • While I think the simulation being discussed here is interesting from a purely educational standpoint of learning the mechanics of numerical simulations, I think it’s a pretty awful approach to trying to understand the greenhouse effect and the things which follow from it. I could be convinced otherwise, but a couple of problems immediately come to mind.

          > I do doubt the ease of accomplishing it though. CO2 is at a 2-3 per 1000 concentration. You’d want at least several hundred molecules to have a chance of capturing radiation.

          If you want to realistically illustrate the physical basis for the GH effect then your simulation needs to produce information which maps to optical thickness – you could do it directly or indirectly, but if you can’t pull out something which equates to optical thickness then you aren’t simulating a (the?) key element of the GH effect.

          Local thermodynamic equilibrium. How does your simulation capture that? Simulating heat diffusion on a lattice is interesting but it’s not particularly (at all?) relevant to the GH effect.

        • Taking this in a little different direction, how long would it take to build a really simple general circulation model for students to experiment with?

        • > Taking this in a little different direction, how long would it take to build a really simple general circulation model for students to experiment with?

          Answering my own question: Probably quite a while. Fortunately, it’s already been done.

          Educational Global Climate Modeling (EdGCM):
          “Computer-driven global climate models (GCMs) are the primary tools used today in climate change research. Until now, however, they have been little more than a “black box” to most people. As a practical matter, few educators have had access to GCMs, which typically required supercomputing facilities and skilled programmers to run. The resulting lack of familiarity with climate modeling techniques has often engendered public distrust of important scientific findings based on such methodologies. EdGCM changes all this by providing a research-grade GCM with a user-friendly interface that can be run on laptops or desktop computers. Students can now explore the subject of climate change in the same manner as research scientists. In the process of using EdGCM, students will become knowledgeable about a topic that will surely affect their lives, and they will be better prepared to grapple with a myriad of complex climate issues.”

          Link = http://edgcm.columbia.edu/software2/

        • @Chris:

          I think the EdGCM software you linked to is the right way to go.

          What we need is a fairly realistic yet simplified version of a weather / climate model that someone can plug in parameters and see the results. Then a user could run what-if scenarios etc. using input parameters that make sense to him from our terrestrial situation.

          We need some simplified variant of the same models that professionals use. But I see no advantage to dumping the macroscopic approach and going all micro.

          That’s like convincing someone who doesn’t believe in Newton’s laws by analogizing them qualitatively using a density functional theory simulation.

        • Obviously, I disagree that this simulation is a bad way to learn greenhouse effect, though I think your ideas are also good ones.

          First off, I think Chris G may have mistaken what was being proposed. The idea is not to figure out how climate works (advection-diffusion, cloud formation, radiation, ocean circulation), but to figure out how certain molecules being in the air causes trapping of energy, end of story. That is, more or less the “pure” greenhouse effect.

          Second, optical thickness comes directly from the prevalence of absorbing molecules in the gas. The larger the number of IR-absorbing molecules, the greater the optical density to IR. As the “ground” absorbs incoming radiation and re-radiates at a different wavelength, the re-radiated light interacts with absorbing molecules in the gas and is converted to kinetic energy.

          Third, the Lattice Boltzmann type simulation isn’t just “heat diffusion on a lattice” it’s actually a stat-mech based simulation of fluid mechanics. It doesn’t solve the heat equation, it solves the Boltzmann transport equations, which is mass, momentum, and energy transport due to collisions of molecules. Bolting on a radiative component would allow you to see the effect of optical density.

          https://en.wikipedia.org/wiki/Boltzmann_equation

          https://en.wikipedia.org/wiki/Lattice_Boltzmann_methods

        • Yes, Daniel is more-or-less right about what my proposal was. Though it does go a bit beyond the direct effects of molecules absorbing IR, to include the immediate responses of the temperature going up, and hence the profile of density with height changing, which feeds back on the IR-absorbing effect… If the system simulated is big enough, one might also see basic convection phenomena.

        • Radford: on convection phenomena, It’d be interesting to have the “ground” have say two different albedos, so there’s non-constant heating of the ground. You might see essentially “thermals” like the ones glider pilots and vultures seek out.

        • The problem with GCM is that it’s a gazillion degree of freedom nonlinear dynamical system… the validity of GCM based predictions is highly questionable as they depend sensitively on things we don’t know (like how many butterflies flap their wings, or how much cow flatulence there is in Wyoming)

          They’re useful, and you can get good qualitative results by running a lot of GCM simulations with different parameters, but it’s a nontrivial amount of computing. Like, to get anything really useful you’d probably start with EdGCM and a classroom full of laptops and then come back in a week after post-processing 10,000 runs or whatever.

        • But couldn’t that criticism be made of almost any fairly complex Bayesian model? A lot of priors could be filed under the “things we don’t know” bin, right?

          Besides, the sensitivity-to-initial-conditions feature (BUG?) exists in many non-linear models, e.g. weather, chemical simulations, etc., but we still seem to use their output pragmatically?

          Admittedly, GCM may be worse but that’s the peculiar nature of long term climate modelling. But are there any competing approaches for Global Warming modelling that are better? i.e. Insensitive to initial conditions etc?

        • Rahul: I just think that for educational purposes, you have to be careful how you use something like GCM. There is definitely a sense in which a young student could get the wrong impression about what can and what can’t be done.

        • > The problem with GCM is that it’s a gazillion degree of freedom nonlinear dynamical system…

          That’s a feature not a bug:-)

          Imagine a course which addresses simulations of increasing complexity. Start with a billiard ball problem – deterministic, 1-D, pretty easy to wrap your head around. (I’m thinking too about possible visuals to accompany numerical results.) From a deterministic, 1-D case you move on to something more complicated like an oscillating chemical reaction – a multi-D solution from a system of first-order (maybe nonlinear?) differential equations. In both of these cases you wouldn’t have to work too hard to find data to compare your simulation against, and creating interesting visuals to accompany it might be tractable on a timescale appropriate for a semester-long class.

          Thinking in terms of increasing complexity, maybe something like spin diffusion on a lattice? The more general question I think is: What are some tractable “toy” problems which would provide some insight into substantive real-world problems? (BTW Radford, if you can do a simulation which shows convection I think that would be spectacular.) (A little off-topic: When I was a postdoc the guys in the cube next to mine did simulations of turbulent reacting flows. Talk about difficult simulations…) Getting back to GCMs, I think that ending the semester by showing both the potential and the pitfalls of very complex simulations would be appropriate. Showing sensitivity to initial conditions, forcings, etc. for a single-run result, but that you get an accurate result with respect to data (for something simple like average surface temperature) if you average a ton of runs, would be instructive I think.

      • Let me restate my point. These are two different hypothesis:

        H1: No set of conditions can ever lead to a greenhouse effect

        H2: Our specific set of conditions do not lead to a greenhouse effect

        I think the Radford approach addresses H1 but does nothing relevant to H2. In reality, I doubt H1 is a position that many take. Even among AGW deniers. Ergo, addressing H1 is a straw-man (in my opinion).

        Nevertheless, even if we *did* want to refute H1, I question the need to have to go down to the molecular level. I’m sure H1 can be refuted by perfectly valid macroscopic equations quite easily.

        My point is that, one needs to have a very good reason to model a macroscopic observation on a molecular level. It isn’t an efficient way to do things. For most macroscopic cases abstraction works very well: e.g. well tested theories of transport, chemical kinetics, heat transfer, Navier Stokes etc.

        Only when the abstraction breaks down (e.g. non-continuum: transport in very-high-vacuum systems, virus pathogen models), or in cases where we don’t have good abstractions (e.g. predicting Arrhenius constants of arbitrary elementary reactions), do we need molecular modelling.

        And I’m straining to see how modelling GW (even H1 above) can justify that specific need to go down to the molecular scale.

        • > My point is that, one needs to have a very good reason to model a macroscopic observation on a molecular level. Only when the abstraction breaks down …, or cases where we don’t have good abstractions … do we need molecular modelling.

          I agree wholeheartedly. You can capture the GH effect (and get agreement with observational data to ~0.1% if I recall correctly) by assuming that the atmosphere is a stack of plane-parallel slabs, using Beer’s law to compute the transmission of each layer, and the Planck function with appropriate emissivity (emissivity = 1 – transmission) as the source term for each layer. The macroscopic treatment captures observation. A micro treatment of Beer’s law could be instructive in and of itself, i.e., get into the quantum mechanics underlying molecular absorption features, but not essential for simulating the GH effect.
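
          A toy gray-atmosphere version of that slab calculation is only a few lines; the layer temperatures and optical depths below are invented purely for illustration, and I’ve ignored the angular/diffusivity factor. Outgoing flux drops as total optical depth rises, which is the greenhouse effect in miniature:

```python
import numpy as np

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def toa_upward_flux(t_surface, layer_temps, layer_taus):
    """Upwelling flux at the top of a gray, non-scattering slab atmosphere.

    Each layer transmits exp(-tau) of what enters from below (Beer's law,
    ignoring the angular/diffusivity factor) and adds its own emission
    (1 - transmission) * sigma * T^4 (Kirchhoff: emissivity = 1 - transmission).
    """
    flux = SIGMA * t_surface**4                  # surface emits as a blackbody
    for temp, tau in zip(layer_temps, layer_taus):
        trans = np.exp(-tau)
        flux = trans * flux + (1 - trans) * SIGMA * temp**4
    return flux

# Toy 10-layer atmosphere: temperature falls with height, optical depth spread evenly.
layer_temps = np.linspace(280.0, 220.0, 10)
for total_tau in [0.5, 1.0, 2.0]:                # crudely, more greenhouse gas = more tau
    print(total_tau,
          round(toa_upward_flux(288.0, layer_temps, np.full(10, total_tau / 10)), 1))
```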

          A good atmospheric radiative transfer reference: R.M. Goody, “Atmospheric Radiation: Theoretical Basis”

          For what it’s worth, I’ve got a copy of Mark Jacobson’s “Fundamentals of Atmospheric Modeling” on my desk at home – https://web.stanford.edu/group/efmh/jacobson/FAMbook/FAMbook.html I’d envisioned it being recreational reading (seriously, I did) but life has not played out that way:-)

        • Of historical interest: G.C. Simpson, “Further Studies in Terrestrial Radiation,” Memoirs of the Royal Meteorological Society, vol. III, no. 21, p.1-26 (1928).

          Link = http://empslocal.ex.ac.uk/people/staff/gv219/classics.d/Simpson-further28.pdf

          (Once upon a time I had a particularly loathsome boss who questioned everything those who worked for him did. Everything. It was a power thing – relentless questioning to keep people off-balance and cultivate insecurity, but I digress. Anyhow, among other things he questioned the legitimacy of presuming that standard approaches to clear-air radiative transfer have an observational basis. Simpson’s paper came in handy in building my case for the affirmative.)

        • I agree that MD is probably not the way to go directly. But I also disagree that you should necessarily go fully macroscopic. Specifically, if you’re trying to show how the prevalence of a molecule affects transmission, absorption, reemission, you don’t want to go to a system which doesn’t have a concept of a molecule and instead uses a macroscopic property like “transmissivity” or something.

          I think the right level here is the lattice Boltzmann method. It models the distribution of molecules rather than the actual molecular trajectories, and therefore it can potentially have transmissivity and so forth be the emergent properties that they are.

          http://www.palabos.org/

          Might be a good place to start. But you might have to code up this simulation directly in order to get the radiative effects. I’m not sure if Palabos handles that.
