I wrote this for the book review section of the American Mathematical Monthly. I’ll discuss the other books reviewed in tomorrow’s post, but here I wanted to share what I wrote about Lakatos’s book. And, yeah, yeah, I know from the last time this came up that many of you disagree with me on the virtues of this book.

Anyway, here goes:

Reflections on Lakatos’s “Proofs and Refutations”

My advanced math classes in college followed a standard pattern: in the beginning of the semester were the definitions, then came the lemmas, then the theorems, culminating at the end of the semester with the big proofs and then, if there was time, maybe some applications along with the much-despised “heuristics.” And not a counterexample to be found. These were theorems, after all. A theorem is true and so has no counterexamples, right?

It was only in my senior year that I learned the proper order of mathematical reasoning: first the problem, then the theorem, then the proof, with the definition at the very end. The definitions come at the end because they represent the minimal conditions under which the theorem is true. The statement of the theorem itself changes as the proof and definitions develop. And, just as a country is defined by its borders, a theorem is bounded by its counterexamples, which are duals to its definitions.

I gained this perspective by reading Proofs and Refutations, a book taken from the Ph.D. thesis of the great philosopher Imre Lakatos. Lakatos’s later work was in the philosophy of science, where his synthesis of the ideas of Karl Popper and Thomas Kuhn led to a sophisticated falsificationist model of scientific practice which influenced generations of social scientists. Proofs and Refutations finds a similar empirical spirit within mathematics.

The physicist Eugene Wigner wrote about “the unreasonable effectiveness of mathematics in the natural sciences.” Flipping this around, Proofs and Refutations discusses the effectiveness of scientific inquiry in mathematical research, an idea which is commonplace in our modern era of computer experimentation but was controversial in 1964, when Proofs and Refutations was published, or even twenty years after that, when I was taking all those conventionally constructed math classes.

In his book, Lakatos develops his ideas through a case study of Euler’s formula relating the faces, edges, and vertices of polyhedra. It turns out this theorem has lots of counterexamples. One at a time, Lakatos brings these up in the setting of a fanciful conversation among hypothetical mathematics students, following the ideas of mathematicians who worked on this problem in the 1800s. The students manage to rescue the theorem from each counterexample, but at increasing cost, requiring elaborate strategies of “exception barring” and “monster barring.” As a mathematics student, I found the story dramatic, even gripping, and when I was done, I had a new view of mathematics, an understanding I wish I’d had before taking all those courses—although I might not have been ready for it then.
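The theorem at issue is Euler’s formula, V - E + F = 2. It’s easy to experiment with yourself; here’s a small sketch (my own illustration, not Lakatos’s) that checks the formula on a few ordinary solids and on a “picture frame” polyhedron with a hole through it, one of the monsters the dialogue confronts. The frame’s counts assume a square ring with square cross-section.

```python
# Checking Euler's formula V - E + F = 2 on a few solids, plus a
# "monster": a picture-frame polyhedron (a square ring with square
# cross-section), which has a hole and so violates the formula.
def euler_characteristic(v, e, f):
    return v - e + f

solids = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "icosahedron": (12, 30, 20),
    "picture frame": (16, 32, 16),  # the counterexample
}

for name, (v, e, f) in solids.items():
    chi = euler_characteristic(v, e, f)
    verdict = "ok" if chi == 2 else "COUNTEREXAMPLE"
    print(f"{name}: V - E + F = {chi} ({verdict})")
```

The frame comes out to 0, not 2, which is exactly the kind of refutation that forces the students to sharpen the definition of “polyhedron.”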

Mathematical definitions are, by their nature, as precise as they need to be. As the theory is expanded, the definitions become more specific. And I think the general point of the interplay between proofs and refutations is very relevant to modern mathematics. As an applied statistician, I’m more of a user than a developer of mathematics—in my career, I’ve proven only two results that I’d dignify with the title “theorem,” and one of them turned out to be false. But the same alternation of proof and counterexample arises for me when developing statistical methods and applying them to live problems.

**P.S.** I sent the above to Jordan Ellenberg, who wrote:

I completely agree! Well, maybe I incompletely agree. What I have found as a teacher is that there isn’t a correct or incorrect order, but rather that different students resonate better with different orders; so over the course of a whole semester I try to present different topics in different ways, so that everybody is pleased some of the time. But certainly I would consider a course without attention to counterexamples and apparent counterexamples to be woefully insufficient. In fact, just yesterday in real analysis I was talking about the ordering on Cauchy sequences and wrote

Definition (wrong). We say (a_1, …) > (b_1, …) if…

then explained what examples made that a bad definition, crossed it out, and wrote

Definition (right). We say (a_1, …) > (b_1, …) if…

I suppose I have a bit of a “pedagogy recapitulates phylogeny” attitude where I want the cognitive experience of being in class to mimic the cognitive experience of doing mathematics as closely as possible, since the latter is a cognitive experience most of the students need to be introduced to.

Theoretical economics research is just like Lakatos says, too. You can prove anything, if you get to choose all the assumptions and definitions. What you actually do is start with an idea, such as “The set of equilibrium prices is unique”. You set it up precisely, as a theorem defining “equilibrium”, “unique”, “set”. Then you try to prove it. You *always* find that your theorem is false. In this case, what came up was that you need to make an assumption called “gross substitutes”, I think (or “just two goods” might do the trick). You’re sad. While the idea isn’t dead, it’s much weaker than you thought, and it’s uglier because of the caveat. Sometimes, your idea does die—it has a fatal flaw at its heart, and you don’t think it’s worth fixing. If it doesn’t die, you keep finding problems. Eventually, though, with enough extra assumptions or caveats on what “equilibrium” or “set” means, you’ve got a true theorem. The problem is that very often when you send it to the journal, or tell it to a friend, the response is “That’s just a weird special case with all those assumptions and that funny definition that you should just toss it.”

So the enterprise in economic theory is to find not an idea that can be proved, but an idea that can be proved cleanly enough. My impression is that the same is true in math and stats.

I’m gonna start a slap fight with a cheap shot.

> The problem is that very often when you send it to the journal, or tell it to a friend, the response is “That’s just a weird special case with all those assumptions and that funny definition that you should just toss it.”

In my experience, the funny special cases with all the assumptions are assumed to be general and become bedrock models for steering American monetary policy. Locally non-satiated, continuous, quasi-concave expected utility over all commodities with free disposal and arbitrage-free complete financial markets certainly feels like a lot of assumptions and funny definitions to me.

“the funny special cases with all the assumptions are assumed to be general and become bedrock models for steering American monetary policy”

What’s the alternative?

What’s the alternative?

Humility.

^

To be a little more elaborate, I don’t have much problem with attempting to set interest rates based on some GE-founded IS-LM or whatever. Wrong models can and do produce illuminating results, there’s good reason to believe the typical results are at least directionally correct, and the Fed has to pick some amount of cash to inject even if it’s zero. Though I don’t trust these any more than heterodox approaches like agent-based modeling or chaos-theoretic views.

The issue is that when economists have a model they like, they, in the neoclassical tradition, prefer it to the empirical world. I wasn’t at the Fed when Ray Radford TheSchlong and Bush Jr were deregulating financial derivatives, but my understanding is that malfeasance in the sector by 2007 was plainly obvious to everyone on Wall St, and plenty of people were screaming about it, but it wasn’t even checked because it was supposed to be unprofitable. Likewise, Shiller and Co were screaming their heads off about a downturn whose magnitude was supposed to be impossible, with stories that in hindsight played out quite specifically. Meanwhile, at the Fed:

“On the national level, risks seem to have risen lately, but my sense is that prospects are still reasonably sound. Subprime mortgages, obviously, have dominated the financial news in recent weeks. Concerns about the welfare of families suffering foreclosures are quite natural, and anecdotes about outright fraud suggest some criminality. But my overall sense of what’s going on is that an industry of originators and investors simply misjudged subprime mortgage default frequencies. Realization of that risk seems to be playing out in a fairly orderly way so far.”

All they wanted to pay attention to was interest and inflation, because those were supposed to be the big levers. It seems that there’s an under-appreciation of the epistemic uncertainty inherent to the enterprise of mathematical modeling. Not the standard errors of coefficients, but the possibility that your model excludes altogether an important feature of reality. By all means, pay attention to the interest rate and inflation, but also investigate real stories with your eyes and words; pull on engaging narrative threads. I daresay confronting bespoke economic threads with economic knowledge is more important than fine-tuning interest rates using models that work when all is going well. To make an analogy: if you’re flying a plane, the gyroscope and the altimeter and the GPS are all great, and losing altitude at a certain airspeed velocity might be “physically impossible”—but don’t black out the windshield. Computations could always be wrong, your devices could always not be hooked up, there could be a computer glitch—look with your eyes. In fact, situations where the computer is broken, though you might never encounter one, are where your job is most important.

To take a less charged example of economists’ anti-empirical bent, let’s take the intellectually interesting but politically inconsequential story of where money comes from. The neoclassical story is that it came about to solve mutual coincidence of wants, providing liquidity to barter economies. It’s a good story, and makes a lot of sense from where we sit, meaning in a monetary economy. But it’s empirically falsified—anthropological evidence for said barter economies does not exist, and known historical examples of money creation have more to do with taxation by nation states to fund armies than liquidity.* Faced with this, economists have not budged an inch and continue to teach the mutual coincidence of wants origin. The story makes so much sense it’s more real to them than reality, I guess.

Another example is credit. They still teach undergrads that the Federal Reserve changes the reserve-ratio requirements and that causes the banks to lend out more money. But no bank checks for reserves before lending out money. They just create the loan—create an asset and a liability—and the money is made up on the spot. This isn’t a theory, either. It doesn’t require a theory; it’s a knowable fact about banking software.
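A toy double-entry sketch of that claim (entirely hypothetical; not any real bank’s software): the loan and the matching deposit are created in the same step, with no reserve check anywhere, and the balance sheet still balances.

```python
# Toy double-entry bookkeeping sketch (hypothetical, for illustration):
# making a loan creates an asset (the loan) and a liability (the
# borrower's new deposit) simultaneously -- no reserves are checked.
class Bank:
    def __init__(self):
        self.assets = {"loans": 0}
        self.liabilities = {"deposits": 0}

    def make_loan(self, amount):
        # The loan is an asset to the bank; the borrower's deposit is
        # a liability. The money is created on the spot.
        self.assets["loans"] += amount
        self.liabilities["deposits"] += amount

bank = Bank()
bank.make_loan(100_000)
print(bank.assets, bank.liabilities)
```

Both sides grow by the same amount, which is the whole point: the constraint on lending is not a pre-existing pot of reserves.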

This has gotten off topic into general beating up on economists, but to try to circle back to my point: the alternative is to know that models are models. If a model says something is impossible or has a one-in-a-quintillion chance of happening, pay attention to it anyway.

*Argument cribbed from David Graeber

somebody said,

“It seems that there’s an under-appreciation of the epistemic uncertainty inherent to the enterprise of mathematical modeling. Not the standard errors of coefficients, but the possibility that your model excludes altogether an important feature of reality. “

+1

“I’m gonna start a slap fight with a cheap shot.”

+1 – This phrase deserves an exalted position in this comment section alongside historical greats like “beef patties” and “tl;dr”.

PS – sorry Andrew, know the beef pattie withdrawal must be tough, but we’re here for you. And hey, you can put Matouk’s on other stuff too.

Jrc:

I deserve no credit for tl;dr. It’s all over the web, nothing special about this blog.

The interesting question is: can you make a case that any of the assumptions is *critically* violated in the real situation of interest, meaning that it affects the conclusions?

I’d say that the model is not only, and sometimes not mainly, good for guiding our actions and for being taken as “true”; often it’s rather the opposite: the theory tells us which conditions are crucial, and a major way of learning from the model can be to figure out where they are critically violated. Only if we’re lucky are they not, and we aren’t always lucky.

+1. I learned about the upper hemicontinuity of the excess demand function, required so that the Kakutani fixed point theorem can be applied to make existence proofs of economic equilibria, at exactly the time I was reading Proofs and Refutations. While choice sets and preference orderings struck me as reasonable abstractions to the economic equilibrium problem, the mathematical machinery that had to be airlifted in to make the proofs airtight was always troubling… much as pure exchangeability assumptions trouble me in statistics — in both cases I soldier on with a small nagging mathematical doubt that my practical side ignores when convenient.

Criticism is easy, but the more interesting question is “is there a better way?” It’s not as if economists *like* having to do this kind of thing. It’s more that there are no tractable alternatives.

These issues illustrate why the vast majority of economists don’t do any theory work.

The psychological theory of role models explains why economics went off the rails in its effort to prove that there are Pareto-optimum outcomes with stable equilibrium. The role models for these economists are their calculus professors and the accomplishments of the physicists. Studying law and politics reveals that there is no Pareto model in actual social relations, so we invent social institutions, beyond enforcing contracts, to solve complex problems with no equilibrium solutions.

This criticism is at least 30 years out of date. I’ve been an economist for 15 years, and never once have I attempted to “prove that there are Pareto-optimum outcomes with stable equilibrium,” nor have any of my role models.

Mj:

I don’t know, but lots of old obsolete ideas still get taught.

Decades ago I studied engineering and law with two semesters of accounting (to satisfy requirements for a two-year business degree). The DSGE models still apply the concept of general equilibrium, whereas a realistic agent-based model would be forced to break the assumptions that give “equilibrium” solutions. Instead one would have dynamic instability and the possibility of homeostasis. The meaning of “efficiency” in economics reduces to the moral problems of law, including, but not limited to, contract law:

https://surface.syr.edu/cgi/viewcontent.cgi?article=1066&context=lawpub

The Bible says, “Between buying and selling sin is wedged in.” I take sin in this context to mean errors in human judgment when taking action and errors forecasting outcomes as perceived by self and others in society. Then it is clear that we cause homeostasis or a complex outcome in our financial systems with private property and economic transactions. We do this by customs of law that resolve the “sins”.

Back in the day, a lot of papers in resource economics, particularly fisheries economics, that were concerned with optimal control given a deterministic model would define some really complicated and general dynamic equations, and then, in order to get a solution, say “assume an interior solution.” If you know anything about optimization, you know that simple phrase just subsumed many (hundreds?) of unstated constraints on the form of the equations, quite possibly requiring forms of the equations that were totally unrealistic. But it was quite the cottage industry for a while, and a number of people became fairly well-known milking this.
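A made-up one-variable example of what that phrase quietly assumes: the first-order condition f'(x) = 0 characterizes the optimum only when the optimum is interior to the feasible set. With a binding constraint, the first-order condition points at the wrong answer.

```python
# Made-up example of what "assume an interior solution" buys you:
# the first-order condition f'(x) = 0 locates the optimum only when
# the optimum is interior to the feasible set.
def f(x):                  # hypothetical harvest-value function
    return 4 * x - x ** 2  # unconstrained peak at x = 2

interior_candidate = 2.0   # solves f'(x) = 4 - 2x = 0

# But if feasible effort is capped at 1, the true optimum is the
# corner x = 1, where f'(1) = 2 != 0 and the interior FOC is silent.
grid = [x / 100 for x in range(101)]     # feasible set [0, 1]
feasible_opt = max(grid, key=f)
print(interior_candidate, feasible_opt)  # 2.0 1.0
```

Assuming interiority away amounts to assuming the constraint never binds, which is exactly the kind of unstated restriction on the model’s functional forms described above.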

I believe that just that kind of attitude in Canadian resource economics contributed to the collapse of the Atlantic cod stocks — based on policies directly contradictory to the experience and advice of inshore fishers.

The same is true for the rock fish fishery off California; I remember a California Fish and Game biologist explaining to me and a fisherman friend why they shouldn’t be banning gill nets for rock fish, back in the 1970s when I was a graduate student. I knew what he was talking about, and had my doubts, but didn’t know enough yet to articulate what was wrong.

I teach the cod collapse, and what happened was that the scholars’ models were indeed too optimistic, but the fishermen were crying out to stop paying them any attention and to keep generous fishing limits, so the politicians, figuring the scholars were being too conservative, didn’t take the action they needed to.

And it wasn’t the economists who were too optimistic, it was the scientists who were estimating the biology of the fishery population.

I would love for a post-mortem analysis of this whole affair, including all the sordid technical and mathematical details. Do you know where I could find one?

Here is a long essay: https://environment.probeinternational.org/2000/01/18/unnatural-disaster-how-politics-destroyed-canadas-atlantic-groundfisheries/. It focuses on how the cod fishermen and politicians ignored the modellers, but if you read carefully you see that the modellers were also too optimistic, probably to please the politicians, just not *as* overoptimistic.

And the inside story is that the analyses that were less optimistic, even dire, were suppressed. It is quite simple to do. Just require an internal review process, and then have the process say the analyses were not adequate or proper. It is a very efficient way to get desired results. I have been in government for many years, and while it has happened some in all administrations, some were notorious for this. You are given three guesses as to the worst offenders.

PS – How do you say “Sharpiegate”

These three posts just increase my skepticism of economic theories.

> effectiveness of scientific inquiry in mathematical research, an idea which is commonplace in our modern era of computer experimentation

But also to Peirce.

To CS Peirce, mathematics is the manipulation of diagrams or symbols taken beyond doubt to be true – experiments performed on abstract objects rather than chemicals – diagrammatical reasoning.

Some excerpts from “Peirce’s Notion of Diagram Experiment. Corollarial and Theorematical Reasoning With Diagrams”, in R. Heinrich, E. Nemeth, W. Pichler and D. Wagner (eds.) Image and Imaging in Philosophy, Science, and the Arts. Proceedings of the 33rd International Ludwig Wittgenstein Symposium in Kirchberg, 2010, Frankfurt: Ontos Verlag 2011, 305-40.

“The central aspect of Peirce’s doctrine of diagrammatical reasoning is the idea of using diagrams as tools for making deductions by performing rule-bound experiments on the diagram. Famously, Peirce distinguished between two classes of diagram proofs, “corollarial” and “theorematic”, respectively – a distinction he himself saw as his first major discovery. As opposed to the simpler corollarial reasoning with diagrams, theorematic reasoning concerns diagram experimentation involving the introduction of new material.” Simulation seems corollarial?

“To Peirce, deduction and mathematical reasoning are one and the same. Mathematics is defined by two things, … mathematics is the science that draws necessary conclusions. Peirce’s own addition to this doctrine pertains to status of the subject matter of those necessities: the object of mathematics is hypotheses concerning the forms of relations. All mathematical knowledge thus has a hypothetical structure: if such and such entities and structures are supposed to exist, then this and that follows.”

“To Peirce, deduction and mathematical reasoning are one and the same.” Or as I like to say, math is thinking, thinking is math. If I have three errands to do in three different places and take a minute to decide what order to do them in (if I get the ice cream first, will it melt while I pick up the dry cleaning?), I am doing math. (Maybe poorly.)
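The errand example really is a (tiny) optimization problem. A brute-force sketch, with made-up coordinates for the three stops, tries every order and picks the shortest round trip from home:

```python
# Brute-force errand ordering: try all 3! = 6 orders of three
# hypothetical stops and pick the shortest round trip from home.
# Coordinates are invented purely for illustration.
from itertools import permutations
from math import dist

home = (0, 0)
stops = {"ice cream": (2, 1), "dry cleaner": (1, 3), "pharmacy": (4, 0)}

def route_length(order):
    points = [home] + [stops[s] for s in order] + [home]
    return sum(dist(a, b) for a, b in zip(points, points[1:]))

best = min(permutations(stops), key=route_length)
print(best, round(route_length(best), 2))
```

(Whether the ice cream melts is, of course, a constraint this sketch ignores.)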

I also agree that it is at least partly if not largely empirically based (i.e., you can play around with equations and discover things), and involves trial and error, although I got into trouble stating that on Dr. Peter Woit’s blog. (He claimed in a presentation slide that math is different from science because it is not empirical.)

Jim:

1. The argument has been made that for something to be “math,” it has to be subtle in some way. From Littlewood’s Miscellany:

2. Can you point us to the discussion from Woit? I did a quick search and found this from his blog:

This seems to show that Woit does see math as empirical.

Wow, going back through years of posts trying to find the one I commented on turns out to be a big job. I haven’t found the right search phrase for it yet, or else skipped over it, but I think I did find another post that referenced the same set of slides:

–copy/paste begins–

https://www.math.columbia.edu/~woit/wordpress/?p=8288

Slides from my talk at Rutgers are now available here. (link) …

Comments:

Jake says:

February 5, 2016 at 1:36 pm

Reading thru now, got to page 15 “There is one science that does not rely on empirical testing to make progress: mathematics.” and aren’t those fighting words at an institution that employs Doron Zeilberger?

…

Low Math, Meekly Interacting says:

February 7, 2016 at 2:43 pm

Is calling mathematics a “non-empirical science” also intentionally provocative? There are quite a few learned people who would consider maths a separate field and feel mathematicians have a different métier from that of scientists. Furthermore, they would say maths and the natural sciences are all the better for this differentiation, while acknowledging the former’s “unreasonable effectiveness”. I’m on the fence about this distinction myself, and just wonder why you favor looking at it as a “science” vs….well, something else.

Peter Woit says:

February 7, 2016 at 2:52 pm

LMMI,

Yes, that’s intentionally a bit provocative, I’m well aware that many if not most people have a different view of mathematics. But from the “radical Platonist” point of view, mathematics is not different than other sciences, it is the science of the study of certain kinds of objects and their relations, objects with a deep connection to the physical world. The only thing different is the role of empirical experiment (arguably there are also “experiments” in math, e.g. numerical calculations checking examples of a number theory conjecture, but these are “non-empirical” in the sense of not measuring something about the physical world).

–copy/paste ends–

In the post I commented on, I objected to the same presentation statement, and gave several examples I had collected of mathematicians observing some pattern (such as the conjecture about characteristic numbers of elliptical functions) and later finding proofs about it. Dr. Woit’s reply was that mathematics was not an experimental science and my comment was off-topic. (Fair enough.) However the next commenter added the following which I copied to add to my list of examples:

From a comment at Peter Woit’s “Not Even Wrong” blog: (wish I had copied the blog link too)

“Mathematical ideas originate in empirics. But, once they are conceived, the subject begins to live a peculiar life of its own and is … governed by almost entirely aesthetical motivations. In other words, at a great distance from its empirical source, or after much “abstract” inbreeding, a mathematical subject is in danger of degeneration. Whenever this stage is reached the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas.”

—John von Neumann

(Hope that is enough to establish I wasn’t lying. If not, I will continue my search.)

P.S. I respectfully disagree with Dr. Littlewood. Loved his “Littlewood’s Law of Miracles” though. There is no question that higher mathematics is on a different plane, as a concert pianist’s music is to my guitar fumblings, but we’re both making music (arguably); and even monkeys can add and subtract numbers up to a total of about eight. Would Dr. Littlewood say that a jump of two inches is not a jump even if it is all one can do? Seems a bit harsh.

P.P.S. I experimentally found that one can search for posts by certain commenters, such as “Low Math,” but my username generates no hits. I now must assume Dr. Woit deleted all my comments, and I will never find them. It was just an opinion.

Jim:

The Neumann quote is interesting, but I think Neumann’s missing something here by contrasting “empirics” with “aesthetical motivations.” Ultimately, aesthetics are empirical too, in a sense. There is no abstract “aesthetics”: aesthetic considerations come from some combination of human biology and history. We can see this if we consider operationalizing aesthetics by, for example, doing an opinion poll of leading mathematicians. If a mathematical idea wins in such a poll, it could be said to be aesthetically more pleasing. But that’s an empirical judgment!

Woit is right – math is not empirical but rather only about abstract representations as for instance indicated in a diagram.

However that does not preclude experimenting on the abstractions to discover surprises about _them_.

“Math” is deductive, “metamath” is empirical?

Gödel’s theorem shows us that there are always true facts that we can’t prove in our system. Which things we decide to axiomatize is actually both empirical and political. We “need delta functions” to make QM work… so we axiomatize some stuff that makes delta functions a new valid type of thing. That sort of stuff.

My favorite example is “we need infinitesimal numbers to make calculus into algebra” so we get at least 3 major constructions of infinitesimal numbers: Abraham Robinson’s ultrafilters, Edward Nelson’s Internal Set Theory, and Smooth Infinitesimal Analysis. Each of which drives forward a lot of new mathematical “technology”.
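In the same “calculus as algebra” spirit, here is a minimal sketch using dual numbers, a + b·eps with eps**2 = 0. This is the device behind forward-mode automatic differentiation; it is simpler than, and distinct from, the three constructions named above, but it shows how adding an infinitesimal-like element turns differentiation into plain algebra:

```python
# Dual numbers: a + b*eps with eps**2 = 0. Multiplying duals and
# reading off the eps-coefficient computes exact derivatives
# algebraically (forward-mode automatic differentiation).
class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b  # represents a + b*eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def derivative(f, x):
    # Seed the infinitesimal part with 1; the eps-coefficient of the
    # result is f'(x), with no limits taken anywhere.
    return f(Dual(x, 1.0)).b

print(derivative(lambda x: x * x * x + 2 * x, 3.0))  # 3*3**2 + 2 = 29.0
```

No limits, no epsilon-delta arguments: the derivative falls out of the multiplication rule, which is the sense in which the infinitesimal constructions make calculus into algebra.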

Or does the “metamath” just enable experiments on the abstraction that otherwise the abstraction doesn’t allow – such as the axiom of choice allowing choice by “saying” one can choose an element from a set that is not enabled by the abstract definition of the set itself.

Empirical to me is about connecting to the existent reality beyond our direct access…

I would say that most of what is good and interesting about math came about directly or indirectly out of a desire for some description of some real-world thing. This is fairly obvious for things like counting numbers, rationals, real numbers, calculus, differential equations, Fourier analysis, etc., but it’s also true of things like Turing’s halting problem and Gödel’s incompleteness theorem and the lambda calculus and type theory and coding theory and probability and combinatorics and such.

Sometimes math makes progress by building abstractions of more concrete problems but usually unless the abstraction goes on to open up some new application, the field stagnates. Applications drive math forward because they provide the justification for people to spend time and effort on what would otherwise seem “worthless” to the outsider and hence no money would be available to pay for the required quantity of coffee.

Daniel:

I’m not a big fan of number theory but lots of people think it’s really cool, and things like Fermat’s last theorem and Goldbach’s conjecture seem pretty far from any realm of application. I mean, sure, everything’s connected to everything, and there’s codebreaking, so, yeah it can be applied, but the applications aren’t really the motivation in this case. For number theory, it seems more that the motivation is like climbing Mount Everest or going to the moon: “because it’s there.”

Andrew said,

“For number theory, it seems more that the motivation is like climbing Mount Everest or going to the moon: “because it’s there.””

Yup, that’s pretty much a good description of a lot that goes on in doing math—you think something might be true, and you can’t pull yourself away from trying to prove it and/or trying to find a counterexample. But then, there is sometimes another twist: you might get a proof (or read someone else’s) and think, “There’s got to be a nicer proof,” so you have a new obsession.

I think progress in science, math, and econ all come mainly from what people think is interesting, not what they think is useful. To be sure, what makes something interesting is often that it would be useful, but the motivation is that it’s interesting. That’s why we in economics don’t spend as much time as would be socially useful in collecting data.

Though, by the way, some authors who just wrote a working paper on whether Hindu judges are biased against Muslims in India, and vice versa (it turns out they are not), have just this past week said they’re posting their dataset of 80 million or so cases. A good move!

“I suppose I have a bit of a “pedagogy recapitulates phylogeny” attitude where I want the cognitive experience of being in class to mimic the cognitive experience of doing mathematics as closely as possible, since the latter is a cognitive experience most of the students need to be introduced to.”

“Pedagogy recapitulates phylogeny”

I like that. My father was a geneticist, so I know about “Ontogeny recapitulates phylogeny,” which has an even nicer ring to it, but “Pedagogy recapitulates phylogeny” might be more useful. Is it original with you, Jordan Ellenberg?

I just made it up that second while writing that email, but yes, it’s intentionally a tweaked version of the ontogeny phrase.

We actually discussed a related topic on this blog a few years ago!

Since there seems to be some interest in quasi-concavity and optimization here, may I advertise a paper of mine as an example of the Lakatos process that bridges econ and math?

I’d long had a conjecture that a function being quasiconcave was the same as saying you could take it and blow it up with a monotonic transformation to be concave. (The ultimate real idea is that if it’s quasiconcave, it has a single peak, but we take the unique maximand as being the conclusion of the theorem and make quasiconcavity the assumption, though we could run it the opposite way too.)
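A numerical sketch of the conjecture (my own toy example, not from the paper): f(x) = exp(-x^2) is quasiconcave, with a single peak, but it is not concave because of its convex tails; yet the monotone transform log f(x) = -x^2 is concave everywhere.

```python
# Toy check of "quasiconcave can be monotonically transformed to
# concave": f(x) = exp(-x**2) is single-peaked but not concave;
# its log, -x**2, is concave. Second differences stand in for f''.
import math

def f(x):            # single-peaked, but convex in the tails
    return math.exp(-x * x)

def logf(x):         # monotone transform of f
    return -x * x

def second_diff(g, x, h=0.01):
    # discrete proxy for g''(x) * h**2
    return g(x + h) - 2 * g(x) + g(x - h)

xs = [i / 100 for i in range(-300, 301)]

# f fails concavity: its second differences turn positive in the tails...
assert any(second_diff(f, x) > 0 for x in xs)
# ...while log(f) passes everywhere, so the monotone transform log
# "blows f up" into a concave function.
assert all(second_diff(logf, x) <= 1e-12 for x in xs)
print("f is quasiconcave but not concave; log(f) is concave")
```

Of course, this is only one friendly example; the paper’s point is exactly that making the general claim airtight requires ruling out plateaus, too-steep and too-flat cases, and so on.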

I told a geometry professor, Chris Connell, my idea at coffee after church one day, and he suggested we work on it together. Immediately, we realized that we had to exclude the case of a plateau at the single peak. The other conditions we needed were basically that unless the function is monotonic, so the peak is at the min or max x-value, it can’t be “too flat” or “too steep.” Chris says that the most interesting thing mathematically is a more subtle condition: the derivative can’t change too quickly—there’s a bounded-variation condition. (See the “magnifying glass” style Figure 8 at http://rasmusen.org/papers/quasi-connell-rasmusen.pdf, my favorite diagram in any of my papers—economists never get to use sine waves.) We would never have thought of that except for having to do the proof.

The main points come up in one dimension, but we generalize to geodesic metric spaces—infinite dimensions, fractals, graphs, etc. And we do the special case of differentiable manifolds, though we then discovered it had been essentially done already by somebody else. We also found that it was too mathy for econ journals, so we cut out almost all the examples, cut out the less elegant extensions, shifted from Lakatos style to laconic math style, and published in a math journal instead. The final, more correct but less readable version, http://rasmusen.org/papers/quasi-short-connell-rasmusen.pdf, is in the Journal of Convex Analysis.

So it’s a great illustration of Lakatos’s points.

Eric:

Out of curiosity, what year is that paper? Many years ago I had some papers on stochastic optimization of consumption and savings models (or harvesting models, if you will) that relied more heavily on concavity than we would have preferred. At one point I tried extending to quasi-concavity, but the math was beyond me (or perhaps the results couldn’t be extended). I don’t know if I still have the math to understand your paper, nor to try to revisit my old results, but a quick skim suggests it could have been useful, if only I had had a time machine.

It was published in 2017, but we probably started it in 2010. The long version is not super-hard. I liked the paper because I could sort of explain it to my teenage children, even though getting it all rigorous required the skill of my mathematician co-author and I was barely able to hang on as he explained it to me.

The big bugaboo in getting results in dynamic stochastic optimization is that you have a one-period reward function with certain properties, and you have a transition probability function with certain properties, and the problem is getting the properties preserved (or known) as you compose the functions through time. Even with a linear return function, if the transition function was quasi-concave I couldn’t prove that the properties of the expected return were preserved. (If this doesn’t make sense: if the reward function is linear, say, and the transition function is concave and non-decreasing, then the expected reward function is concave through time.)

My most remembered exposure to this was an algebraic topology final in 1965: five short answers and five statements to prove or give a counterexample to. On one, I went through at least four cycles of proposed proof, then counterexample to the proof. I wonder if the professor had seen Proofs and Refutations and it triggered this. Perhaps not. There are lots of counterexamples in topology. I don’t remember too much of it now. I quit topology when a professor in a seminar started drawing pictures in 5-dimensional space and acting like he could see it.