An obvious fact about constrained systems.

[Diagram: the tilting tree anchored by ropes to two neighboring trees; north is diagonally to the left.]

This post is not by Andrew. This post is by Phil.

This post is prompted by Andrew’s recent post about the book “Everything is obvious once you know the answer,” together with a recent discussion I’ve been involved in. I’m going to say something obvious.

True story: earlier this year I was walking around in my backyard and I noticed a big hump in the ground next to a tree. “This hump wasn’t here before”, I thought. I looked up and saw that the tree, which had always been tilting slightly, was now tilting a lot more than slightly. It was now tilting very substantially, straight north towards our neighbor’s house! The hump in the ground was the roots on the other side of the tree being pulled up from the ground.

It was a Sunday but I immediately called our tree guy and left a message on his emergency line. (Did you know tree guys have emergency lines? They do. They’re like plumbers: a significant fraction of their calls are urgent.) Then I called our neighbors.

The tree guy came out and said something I already knew: this tree is well on its way to falling down. He immediately had his crew come out with heavy ropes to anchor the tree to the trunks of two other trees as a temporary measure. See the diagram above (north is diagonally to the left). A week or so later, his crew came out and cut down the tree piece by piece. They started from the top ;)

We don’t know for sure that the ropes were needed to prevent the tree from falling, but let’s assume they were. In that case, in the absence of the constraining ropes the equilibrium state of the system is easy to see: the tree would have ended up lying on the ground with its top facing north.

Does that mean that if we relaxed the constraints, that’s where the system would end up?

Suppose we slackened the rope on the left side of the diagram? (It’s sort of nice that we are talking about ropes, so we can literally ‘relax’ the constraints). I hope you can see that the tree would fall angled to the right (east) because of the pull of the remaining rope. The tree would end up on the ground with its top pointing northeast, not north as would have occurred if the constraining ropes weren’t there at all. And once it’s on the ground, we could slacken the other rope and the tree wouldn’t move. We would have relaxed both constraints, but we would not have reached the same equilibrium condition that we would have reached if the system had never been constrained.

And of course the opposite occurs if we slacken the rope on the right: now the tree ends up with its top to the northwest.

Depending on how we relax the constraints we could end up with the top of the tree facing northwest or northeast or anywhere in between.

The point is that you can know the equilibrium state the system would be in if it weren’t constrained in the first place, but that doesn’t tell you very much about the state the system will be in if you remove the constraints. If you want to predict the ultimate equilibrium state of the system, you need to know the path to equilibrium as you relax the constraints. Most real-world systems are like this, even very simple ones (and not just physical systems). Indeed, real-world systems of interest don’t come much simpler than the one described here.
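The two-rope geometry can be sketched numerically. Below is a minimal toy model in Python, with every force direction and magnitude invented for illustration (real rope tensions depend on the geometry and the load): the tree’s lean and each rope’s pull are treated as fixed 2-D vectors, and the compass bearing of the net force tells us which way the tree goes over.

```python
import math

# Toy force balance for the leaning tree. North is +y, east is +x.
# All magnitudes are made up for illustration.
LEAN = (0.0, 1.0)                             # the tree "wants" to fall north
ROPE_W = (-math.sqrt(0.5), -math.sqrt(0.5))   # west rope: pulls the top to the SW
ROPE_E = ( math.sqrt(0.5), -math.sqrt(0.5))   # east rope: pulls the top to the SE

def fall_azimuth(forces):
    """Compass bearing (degrees clockwise from north) of the net force."""
    fx = sum(f[0] for f in forces)
    fy = sum(f[1] for f in forces)
    return math.degrees(math.atan2(fx, fy)) % 360

print(fall_azimuth([LEAN]))          # no ropes at all: 0.0, due north
print(fall_azimuth([LEAN, ROPE_E]))  # west rope slackened: 67.5, northeast-ish
print(fall_azimuth([LEAN, ROPE_W]))  # east rope slackened: 292.5, northwest-ish
```

Cutting both ropes at once gives due north (0°) in this toy, while slackening one rope first steers the fall to 67.5° or 292.5°: relaxing the constraints in different orders reaches different endpoints, which is the path-dependence point above.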

Calling something a ‘constraint’ reflects the way we are thinking about the system, it’s not something inherent in the system. The ‘constraining’ forces exerted by the ropes that are holding the tree in place are just forces like any other, whether we think of them as constraints or not. In this example there’s also the gravitational force that is pulling down on it and the tensile force of the roots that is stopping the bottom of the tree from sliding. When you ‘relax the constraints’ you are just changing the forces on the tree. The forces that act on the tree determine where it moves.

Hey, I warned you I was going to say something obvious.

91 thoughts on “An obvious fact about constrained systems.”

  1. It seems of interest to distinguish between state functions, which by definition do not depend on the path between equilibrium states, and process quantities, which do.

    • I was originally going to use a thermodynamics example, and bring in entropy and conservation of energy, but I decided to stick to a simpler point.

      There’s a ton that could be said about all of this stuff. When I was an undergrad I was fascinated by the connection between Newtonian, Lagrangian, and Hamiltonian ways of looking at the same system. Actually I still am, although I don’t think about it much anymore. I still think the “principle of least action” is totally cool.

  2. well, your point is not obvious and the labored tree/rope metaphor unhelpful IMO

    how one defines/limits the temporal “system” under study seems relevant; a natural backyard tree is one system — a backyard tree with ropes is a different system

    • How about this: if you had 37 ropes, and had to relax these constraints by running around untying one at a time in a particular random order, you couldn’t figure out where the tree would be without also knowing something about the sequence in which you untied the ropes. There are many, many final states compatible with being the final state of this process, even though there is only one state the tree would go to if you could simultaneously relax all the constraints (chop all the ropes at the same time?)

      In a real world highly constrained system, it’s not in general possible to use the “unconstrained system” as an approximation to what would happen as you marginally unconstrain it.
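The 37-rope thought experiment can be sketched the same way. Here is a cartoon model (every number in it is invented): each rope exerts a small fixed pull toward an anchor on the southern side, the lean pulls north with unit force, and the tree falls along the net force as soon as that force exceeds a threshold. Untying the eastern ropes first versus the western ropes first sends the tree down on opposite sides of north.

```python
import math

N = 37
# Anchor bearings spread over the southern side (compass degrees, east = 90).
BEARINGS = [95 + 170 * j / (N - 1) for j in range(N)]
_raw = [(math.sin(math.radians(b)), math.cos(math.radians(b))) for b in BEARINGS]
SCALE = 1.0 / sum(-y for _, y in _raw)   # scale the tensions to balance the lean
ROPES = [(SCALE * x, SCALE * y) for x, y in _raw]
LEAN = (0.0, 1.0)                        # unit northward pull on the leaning tree

def fall_bearing(untie_order, threshold=0.5):
    """Untie ropes one at a time; report the compass bearing of the net
    force at the moment it first exceeds `threshold` (when the tree falls)."""
    active = set(range(N))
    for i in untie_order:
        active.remove(i)
        fx = LEAN[0] + sum(ROPES[j][0] for j in active)
        fy = LEAN[1] + sum(ROPES[j][1] for j in active)
        if math.hypot(fx, fy) > threshold:
            return math.degrees(math.atan2(fx, fy)) % 360

east_first = sorted(range(N), key=lambda j: BEARINGS[j])
west_first = sorted(range(N), key=lambda j: -BEARINGS[j])
print(fall_bearing(east_first))   # falls somewhere west of north (270-360)
print(fall_bearing(west_first))   # falls somewhere east of north (0-90)
```

Different untie orders give different fall directions; chopping all 37 ropes at once would give due north. A random untie order lands anywhere in between, which is the point of the comment above.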

    • I think my point is so obvious that you didn’t even realize it’s my point! I’m saying the obvious thing that you just said, which is that a system of trees and ropes is not the same as a system of trees without ropes, and that, more generally, if you want to predict the behavior of a system you need a model of that system, you can’t model a different system!

      This is indeed totally obvious. And yet I am not 100% sure it is recognized by people who haven’t thought about it. Once you think about it it’s obvious.

    • I’ve always been a little bit unhappy with this saying, in spite of sometimes finding the message very apt. The thing is, you can push a rope. Give me a rope and I’ll show you. What you can’t do is push something with a rope. But that doesn’t make a very good saying.

      I’m also unhappy with “head over heels.” Head over heels is my normal orientation!

      And “you can’t have your cake and eat it too.” Of course you can. Indeed, the only time you can eat your cake is when you have it. (I know, I know, it really should be ‘you can’t eat your cake and have it too’, and the point is that once you’ve eaten your cake you can’t have it anymore. It still bugs me a little, even though I say it.)

      I recall when I was in third grade I was puzzled by a sticker next to the light switch in my classroom, “Waste not, want not.” I didn’t know that quasi-archaic meaning of the word ‘want’, and with the conventional current meaning the saying makes no sense.

  3. Phil:

    To connect with an earlier thread:
    – If the yimbys had their way, you’d have no problem with that tree: it would’ve long ago been uprooted and replaced by an apartment building. Or maybe it would’ve been strengthened and made into a three-bedroom treehouse.
    – If the nimbys had their way, you’d also have no problem with that tree, as your house would never have been built. If a tree falls in the backyard, you’re aware of it. But if a tree falls in the woods, …

  4. This was the dystopia Ronald Reagan warned us about.

    > Calling something a ‘constraint’ reflects the way we are thinking about the system, it’s not something inherent in the system. The ‘constraining’ forces exerted by the ropes that are holding the tree in place are just forces like any other, whether we think of them as constraints or not.

    This is a good point, and one we should especially keep in mind when discussing social systems where the “unconstrained” system is still usually full of rules and institutions that come from the same process as “constraints.” It also seems to suggest the question of “what happens to the tree when we remove all constraints” is not well-defined. Removing “all constraints” is meaningless. More generally I think what we’re interested in is “what happens when we remove this specific constraint (or set of constraints)” or “what happens when we change this specific parameter (or set of parameters) in a certain way.”

    • Sam, I agree it doesn’t mean anything to talk about removing ‘all constraints.’

      But my point — which I know you got — is that even if we are careful to define which ‘constraint’ we mean, and what we mean when we talk about removing it, we can’t just look at where the system would be if that constraint had never existed. In the case of the tree it is perfectly fair to say “if not for the ropes, the tree would be lying on the ground facing north, therefore if we remove the ropes that’s where it will end up.”

      Someone is now tempted to say that if I simultaneously remove both ropes in this example the tree will in fact end up in that state. That’s true in this static system but dynamic systems don’t work that way.

      • > …even if we are careful to define which ‘constraint’ we mean, and what we mean when we talk about removing it, we can’t just look at where the system would be if that constraint had never existed.

        In general, yes. In fact, even if comparative statics is often a reasonable approximation, urban systems may be a counterexample. There has been some recent work in which researchers look at how cities change after some plausibly exogenous shock (the division of Berlin, in one example), and it does look like there are multiple equilibria: Berlin post-Wall isn’t identical to Berlin pre-Wall. I should add that with more temporary shocks, it looks like cities do more or less return to their previous state, so sometimes, even in cities, the assumption that there’s a locally unique and stable equilibrium seems fine.

        I’m confused by your story of San Francisco housing because it has many moving parts while at the same time leaving a lot out. For example, increased demand for service workers makes service workers somehow appear in the city, but there’s no discussion of the wages of those workers. New high-end housing attracts rich people, and it seems the assumption is that “low-end” housing would not attract them? (And it’s not clear how one could make brand-new apartments in one of the most popular cities in the US “low-end.”) It’s also not really clear whether you’re describing new housing in the current environment (in the tree analogy, I guess this would be the pull of gravity on the tree even with the ropes in place) or the removal of one of the constraints (say, what happens when we drop rent control, or make the environmental impact studies faster and cheaper).

        Saying “we shouldn’t be surprised if we see results different than what overly simplistic models predict, because the actual world is complicated” is uncontroversial. Saying “we shouldn’t be surprised if we see *this specific result* which overly simplistic models don’t predict, because the actual world is complicated” is less so.

        • Sam: I agree with you. Phil needs a more detailed model of what he thinks is going to happen. And he needs to consider which moving parts he will model and which he won’t and why he won’t model them. Or maybe it’s all just too much, he has other things he needs to do.

          I’ve given a fairly detailed account of how I think adding liquidity and turnover will ratchet up prices on units. I don’t think it’s that controversial: rent-controlled houses are by definition below market, so if we add probability that they’ll turn over, they will reset to market… I mean, how controversial is that? The more interesting question is “so what?” Maybe people are generally better off after the prices ratchet up, because at least they can squeeze 4 people into a 2-bed whereas before some rich guy had one person in that 2-bed… or whatever. Even the concept that we could unambiguously measure a global “goodness” function is super suspect. One of the reasons we like markets is they let individuals optimize their own preferences.

          I just don’t think the ACS really has enough detail to determine the distribution of crowding in SF. Even 8000 observations over 5 very dynamic years… aren’t quite enough to get a good idea of how many rent-controlled 3-bedroom houses with 1 person in them there are, and so forth. But it’s a plausible place to start, at least.

          The other thing I should add is that in reality, dollars are almost never the right unit to measure things in; dollars/income removes inflation and accounts for the fact that expenditures and wages sometimes move together. All real-world questions should be made dimensionless before analyzing.

        • I agree with both of you! I’m not going to talk about my conceptual model anymore. If I ever come up with a complete model, even a simple one, I will let you know.

          Of course the point I make in the present post stands on its own. But to connect it to my previous posts on urban development: it seems possible to me that, as with the tree, if you change the constraints in different ways maybe you get different final states. Perhaps removing housing restrictions in one specific order leads to a state in which only really rich people live in SF and all the lower-income people live in the East Bay, and perhaps there is another path in which there is a lot less difference in the income distributions.

          Or perhaps the housing market is one of the exceptionally rare systems in which it doesn’t matter in what order or what manner you change things, the different areas have exactly the same relative income levels and standard deviations of income (and higher moments too). That would really be something, wouldn’t it? But hey, could be, what do I know.

          > Of course the point I make in the present post stands on its own. But to connect it to my previous posts on urban development: it seems possible to me that, as with the tree, if you change the constraints in different ways maybe you get different final states. Perhaps removing housing restrictions in one specific order leads to a state in which only really rich people live in SF and all the lower-income people live in the East Bay, and perhaps there is another path in which there is a lot less difference in the income distributions.

          Maybe! Lots of stuff could happen. I agree the present post does stand on its own, as a message about prediction more generally.

          > Or perhaps the housing market is one of the exceptionally rare systems in which it doesn’t matter in what order or what manner you change things, the different areas have exactly the same relative income levels and standard deviations of income (and higher moments too). That would really be something, wouldn’t it? But hey, could be, what do I know.

          I’ll refrain from the obvious, snarky response to “what do I know,” a phrase which I think is to me what “push a rope” or “head over heels” is to you! (end apophasis)

          As for the rest of the paragraph, there is a big gap between “this simple model is too simple to really predict what will happen…”, which may or may not be true, and “…and therefore, we should expect the opposite of what it predicts.” I hope you do try to work out a complete model, or at least work out some of the ambiguities in the current one. I’d be interested to see it. (Same goes for Daniel, if he comes up with something.)

      • Whoops, I somehow messed something up, above. I was trying to say:
        It is perfectly fair to say “if not for the ropes, the tree would be lying on the ground facing north, but wrong to say that therefore if we remove the ropes that’s where it will end up.”

  5. Important point you didn’t stress is that the system (with no constraints) has multiple equilibria. If it didn’t, removing constraints would be a question of safety or efficiency, not of the final state.

    • Yes, good point! I did consider making it, when I was going to use a thermodynamics example. I was thinking of talking about taking an insulated box of water into a freezer, and removing insulation from one wall or another. It doesn’t matter what order you remove the insulation, you end up with a uniform block of ice.

      But if it is salt water rather than fresh water, then the salt gets concentrated in the areas that freeze last. If you remove insulation from the left wall first then you end up with a pocket of brine on the right side, and vice versa.

      Anyway, yes, you’re right, there are systems with a global minimum and no local minima in which the end state is path-independent.

  6. Phil, because the analogy is so apt, and because Yay Physics, I’m going to post this Thermo idea here, which maybe is the one you were thinking of:

    We have a Piston with volume V, N molecules of ideal gas, and temperature T.

    The piston has a valve which is closed, fixing N, a latch that holds the plunger fixing V, and a bunch of insulation fixing NkT. Under these conditions:

    P = NkT/V

    Now, simultaneously someone places the thing in a high temperature thermal bath of additional ideal gas molecules, undoes the latch, opens the valve and removes the insulation. What happens to P?

    You can discuss the analogy to the rental market if you like, or you can just marvel at how such a simple system can give pretty interesting dynamics that depend a lot on very specific things like the molecular density of the thermal bath, the viscosity of the thermal bath, the size of the opening of the valve, the conductivity of the walls of the piston, the friction on the piston slider…
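For concreteness, the pre-relaxation state of the piston can be put into numbers. The values below are made up, chosen only so the answer lands near atmospheric pressure; the point is just that with N, V, and T all pinned by the valve, latch, and insulation, P is fully determined:

```python
k = 1.380649e-23   # Boltzmann constant, J/K

N = 2.4e22         # molecules of ideal gas (made-up value)
T = 300.0          # temperature, K (made-up value)
V = 1.0e-3         # volume, m^3, i.e. one liter (made-up value)

P = N * k * T / V  # ideal gas law, P = NkT/V
print(P)           # roughly 1e5 Pa, about one atmosphere
```

Once the valve, latch, and insulation are removed, N, V, and T are all free to change, and this one relation no longer pins down where the system ends up.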

    • specifically, we need to talk about what happens to all of P, V, N, T, and we should note that there isn’t a single equilibrium, because once everything thermally equilibrates, if we slide the piston, we just suck in some additional molecules and the new position is also in equilibrium. So equilibrium theory won’t tell us where we end up.

      • Not the example I was thinking of, but a perfectly fine one. Certainly more similar to the rental market than what I was considering. I deliberately decided not to try to make up a physical system to match the market, figuring then people would get caught up in looking for the differences and thereby miss the main point.

        But yeah, what you said.

        You are of course free to make whatever comments you want on this thread but I’m going to try to stick to the more general point here. I will get back to the housing market someday, when I have something concrete to contribute.

  7. > When you ‘relax the constraints’ you are just changing the forces on the tree. The forces that act on the tree determine where it moves.

    Loosening the rope surely will result in the tree going down, and tightening the constraint will pull the tree up. Relaxing the housing supply constraint will bring prices down.

    • I have a cup with a lid. Boiling water is pouring on the lid. Relaxing the lid constraint will increase the temperature of the water in the cup. Relaxing the building constraint lets more money into SF. Why couldn’t this lead to higher rents? I’m not saying it does, but it is possible; there are conceivable states of the world where that happens. Suppose Bill Gates rents an apartment and proceeds to hand out money door to door… People become willing to pay a lot to be behind one of those doors. Just to take an extreme example.

      • Before I decided to deliberately avoid a system that tried to look like the housing market, I toyed with something sort of like your boiling water example. I was thinking of something with a Maxwell’s Demon. But the simplest systems were nothing like the housing market, and the more complicated ones were way too complicated and too strained.

        If I were looking for a really simple housing example, your Bill Gates example might be a good one. If Bill Gates chooses….eh, you know what, I’m going to stick with my decision not to get into that stuff on this comment string. It is a thought-provoking example.

      • Gates handing money out looks like rising wages, something absent from Phil’s model that some of us commented could be a mechanism putting upward pressure on prices (but it is hard to see why it would completely reverse the first-order effect).

        I think a better analogy is that YIMBYs think naively that loosening the strings will let the tree incline further, but they are ignoring that in the new position it will get more sunlight and get warmer, and the string will get warmer, and it’s made from a polymer that contracts with temperature, and the tree will go up.

        • This is why I tried to deliberately choose an analogy that couldn’t possibly be directly analogous to the housing market… or so I thought.

          The path to equilibrium affects the final state of just about any system. That’s the broader point I’m trying to make.

        • We agree on that, and we agree that it is obvious. I don’t know how this has any relation to the housing discussion. The controversial point is what would make the tree recover the vertical orientation. It seems some additional force, not a direct consequence of the constraint relaxation, would be required.

        • Carlos,

          What you’re talking about is moving the tree back to vertical and thus reestablishing the old equilibrium, before it was destabilized and we stopped it from establishing a new equilibrium. If we insist on a direct analogy with the Bay Area housing market, that would be like forcing all of the high tech companies out of the area and somehow going back to a point when the reason people weren’t building new housing units was because they couldn’t make a profit at it rather than because they weren’t allowed to do it.

          The constraints (the ropes) aren’t preventing the tree from reaching its old equilibrium of standing vertically. The tree isn’t “trying” to stand back up. But the constraints have prevented the tree from reaching the equilibrium state it would have reached if the ropes weren’t there. If we now relax the constraints, it will reach some other equilibrium that could be quite different from that one.

          The constraints on the housing markets aren’t preventing the housing market from reaching its old (notional) equilibrium. But constraints — restrictions on building — have prevented the market from reaching a new equilibrium in which new housing units were built in particular places and at particular quality levels etc.

          If (as you seem to desire) we insist on a direct analogy with the tree: no matter what order we relax the ropes, the tree will get lower; but whether the tree ends up pointing northeast, northwest, or somewhere in between depends on the order. “Similarly” (a word I use with reluctance), I think that relaxing restrictions on building in the Bay Area will make prices lower, but that the order of the relaxation — relax in SF first or in the rest of the Bay Area first, for example — might affect the spatial distribution of housing prices that we end up with.

        • As you say in your last paragraph, the point of my analogy is that if you relax the constraints that prevented the tree from falling down it will fall down. You may say there will be a cascade of effects that will cause the tree to stand up, but you need some kind of fundamental change in the system. You may say that building more will result in structural changes that lead to a new equilibrium where prices are higher. But unless you can provide a detailed mechanism of how that would work this is just wild speculation.

          I also don’t know (and others have raised this point already) why your conclusion “building more in SF will make prices in the whole Bay Area go down, but prices in SF will increase” doesn’t apply to neighbourhoods in the city “building more in this neighbourhood will make prices in the whole of SF go down, but prices in this neighbourhood will increase” or even blocks “building more in this block will make prices in the neighbourhood go down, but prices in this block will increase”.

        • Carlos, you say “You may say that building more will result in structural changes that lead to a new equilibrium were prices are higher. But unless you can provide a detailed mechanism of how that would work this is just wild speculation.”

          For crying out loud. I am not saying that if you build more housing the price of housing will be higher! I have said this again and again, the first time being paragraph 5 of my original post. If you build more housing, prices will go down on average.

          I think housing prices will go down on average, but that the spatial distribution of more expensive vs less expensive housing will also change, so as to get more expensive where you are adding expensive housing.

          Perhaps I would indeed predict this to happen in individual neighborhoods if there is something like a discontinuity in travel times between neighborhoods, or some other reason that there would be a discrete premium associated with the neighborhood, as is the case in SF vs the rest of the Bay Area.

          I understand that nobody agrees with me, and I acknowledge that I might be wrong. What I can’t understand is how everyone else can be so damn sure I am wrong, when we are talking about relaxing constraints on a constrained system for which nobody seems to have a model.

        • Phil:

          I agree that it doesn’t make sense for people to be damn sure you’re wrong. It’s really hard to know. But I think there is a weaker but still powerful argument that these people are making, which is that there should be the presumption that you’re wrong.

          I’m just flying by the seat of my pants on this one, but I think the general argument goes like this: in a simple system, you’d expect building more would lower the price; yes, different things can happen in different locations, but we go with the first-order effect as a starting point. Given that you’re not offering a strong data-based argument, the presumption would be to stay with the first-order result until you can demonstrate otherwise.

          Again, this doesn’t mean that you’re wrong; it’s more that, if you’re right, it’s kind of a bank shot, and there’s no particular reason for an outsider to be convinced by your argument without more evidence.

          Or, to put it another way, “Econ 101” is not correct, but it’s the default.

          Does that make sense?

        • Andrew,
          I guess my issue is that I am not sure that this is really an Econ 101 system…or even an Econ 301 system. If it is, then sure, OK, “this is a lot like systems people have studied before and we know second-order effects tend to be small or at least not large, so we assume that’s true here too unless there’s strong evidence otherwise.” Yes, that makes sense.

          But the systems we studied in my Econ 101 class were never this far from the free-market equilibrium. Sure, we looked at the effect of taxes and so on — a little bit of friction in the system — but nothing like this. Does that matter? I _think_ it matters. Maybe it doesn’t matter. If economists have looked at systems like this and found that they behave pretty much just like equilibrium systems do — or else that it doesn’t matter how you release the constraints that have kept them far from the free-market equilibrium, they just head straight towards that equilibrium in some sense — then OK, I buy it. Otherwise, not to make too much of the tree analogy, but if someone says “Unless you have convincing evidence that the ropes affect the tree, the default should be that they don’t”, well, no, I think that’s sort of ridiculous.

          Aha, but I can see where I am leading myself. I have a conceptual model, but not a formal one, that makes me think the system should respond in a particular way. Economists have a conceptual model that makes them think it should respond in a different way. Why should anyone (including me) believe my model rather than theirs? They’re the experts, after all!

          So, OK, believe the economists if you have to believe anyone. If you have to place a bet, bet that way. But until someone has an actual model that is appropriate to the system, I am going to remain very skeptical of what anybody tells me on this. If economists were really able to make accurate predictions about this kind of thing, they wouldn’t need two hands.

        • Phil, it’s clear that you say that building more in SF will result in lower prices in the Bay Area and higher prices in SF. What is not clear is why you think so (and what you mean precisely by higher/lower prices, in fact). You offered just a very broad idea without much support in the form of theory or data.

          You say that there is a decline in prices but it is not local, without explaining how the first order decline can be dominated by other effects. And without explaining why the remote decline in prices happens at the city scale (prices go up in SF, but decline in Oakland) and not at a shorter or longer scale.

          The same arguments that you’re giving could also support the alternative claim that building in SF will result in lower prices in California but higher prices in the Bay Area because so many rich people will move into the Bay Area, etc, etc.

        • Andrew, I think it’s more than that. There’s an aspect of poor communication, but there are also inappropriate model assumptions that are making Phil’s opposition inappropriately certain that he’s wrong.

          What’s clear is that rental prices have been going up continuously for something like 5 years, thanks to a tech bubble. Here is SF’s Case-Shiller housing price index:

          http://us.spindices.com/indices/real-estate/sp-corelogic-case-shiller-san-francisco-home-price-nsa-index

          Sale prices and rents are obviously related but not perfectly. Still, the sale prices have nearly doubled since 2012. I’ll put up something on my blog for rental prices from the ACS.

          Phil explicitly states that he doesn’t know when the tech bubble will pop, but because of that he’s assuming it will continue past the time period he’s interested in, and his opposition has accepted that thought-experiment assumption for the most part.

          Under this external Tech-bubble we should be expecting rents to continue to ratchet up! That needs to be the initial presumption.

          Now, we have the idea of expanding the building of market rate apartments, but we’re not talking about doubling the number of apartments in SF, we’re talking about a factor of 1.04 or something like that, maybe 12000 apartments instead of 3000 in a typical year. Of course, maybe YIMBY politics would like 65,000 apartments or 250,000 apartments, but we have to be serious, they’re not going to get that in the next 2 or 3 or 5 years.

          Now, we get the assertion that this will make prices come down in the low-end quality range. Not that it will make them go up less than they would have, because I think Phil has already stated that this is conceivable but not what he’s heard as the rhetoric. The assertion is nominal dollar prices for rents in not very fancy places will decline.

          It seems to me that the first order effect simply can’t explain that. It’s a big stretch to imagine that building ~12k extra high-end apartments will reverse the external driving trend and make prices go down, because the external driving trend is a doubling in 5 years, a 15% per year growth rate! How is a 4 percent per year increase in supply at the high end going to reverse a 15% per year externally driven increase in prices?

          So if you’re going to talk about the default assumption that the first-order effect will be something, I think we need to talk about magnitude, not just direction. Yes, the first-order effect would be that the rate of change decreases. But decreases enough that it goes below zero? That seems hard to believe given the high external forcing.

          I think Phil’s next point is that high external forcing doesn’t act just on Tech employees. The influx of Tech employees produces a high external forcing for job opportunities for lower-skilled service workers too. So we’re ramping up this forcing function by bringing in additional Tech workers. The first-order effect of bringing in external Tech workers on the service-worker market is to ratchet up demand for housing for service workers. (Note: this occurs because evidently the service workers were making less somewhere else, but it doesn’t even need to be that SF service-worker wages go up; if we just make more service-worker jobs at constant wages, and those wages are higher than you’d get in Emeryville, then that ratchets up demand too.)

          From the perspective of jobs, it’s clear that the number of job opportunities at constant wages will increase, and so either people will come into the area to take those jobs, or wages will rise until they do. Whether this drives service workers into or out of rental apartments is the more complex thing to figure out, but to first order we’re talking about more demand for service-worker apartments at constant prices. That demand puts upward pressure on prices at constant supply. And we are talking about fairly constant supply: the person moving to the fancy apartment is certainly not vacating an apartment affordable to a service worker, and a cascade has to go down the chain pretty far before it hits an apartment for service workers. Once the cascade does free up an apartment “for service workers,” that will ratchet up prices due to the rent-control unlocking and reset, so depending on what statistic we care about, this rent-control unlocking ratchet is really important too.

          So to first order, in the service-worker subset we’re ratcheting up demand, in the same way that to first order, in the Tech-worker subset, we’re ratcheting up demand. We can view these populations as reasonably separate, because their incomes are vastly disparate (Tech ~$150k and Service maybe ~$45k) and they don’t compete for the same quality, neighborhoods, etc. So we can split the pool and see a coupled system in which demand from Tech drives demand for Service.

          Phil’s point is that he’s got a prior on the relative magnitude of these effects, and since he lives and works in the area, he’s got an informed prior.

          But what I’m going to call the Naive Economist viewpoint (we’ve had some non-naive economists here, and their viewpoint is typically different and more nuanced) is what I want to call non-dynamic thinking. They are not viewing this as a dynamic process, such as an ordinary differential equation with external forcing and feedback; they’re viewing it as an algebraic equation for the crossing point of two functions whose shapes no one knows. And early on they were viewing it as a homogeneous market for exchangeable apartments, also a poor modeling choice.

          So the way I see it, the idea that Phil should be presumed wrong is based on false premises: it’s based on not modeling the dynamics, and for some, on not being in the area and not having informed priors on the magnitudes of the components of the model.

          I don’t think Phil is definitely correct, but I think it’s a mistake to be pretty sure he’s wrong, or even to think that there’s a requirement to presume he’s wrong because his claim requires some “extraordinary” assumptions. The initial assumption of something like 15% per year growth was built in and accepted at the beginning. (Note: I think this is probably the weakest assumption; the bubble will pop!)

          So, dynamic thinking with order-of-magnitude estimates produces one model that predicts the kind of thing Phil is suggesting, and static equilibrium thinking, ignoring external forcing functions and so on, produces another prediction. I know which one I think is more appropriate in a market increasing in price by a factor of 1.15 each year. It bothers me that a notional Naive Economist (say, the average over the supply-and-demand-type comments) takes the static viewpoint automatically, and gets to what I think is over-certainty about what seems likely to be the wrong answer.
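          To make the contrast concrete, here is a deliberately crude sketch of the dynamic view: price adjusts toward clearing the current demand/supply imbalance while demand is externally forced upward at ~15%/yr. Every functional form and coefficient below is invented for illustration; this is a caricature of dynamic thinking, not an actual housing model:

```python
def simulate(years=5.0, dt=0.01, build_rate=0.01):
    """Toy forced dynamics: price adjusts toward clearing the current imbalance."""
    price, demand, supply = 1.0, 1.0, 1.0
    t = 0.0
    while t < years:
        demand *= 1 + 0.15 * dt        # external tech-bubble forcing, ~15%/yr
        supply *= 1 + build_rate * dt  # building rate: the policy lever
        price *= 1 + (demand - supply) / supply * dt  # imbalance moves price
        t += dt
    return price

baseline = simulate(build_rate=0.01)  # roughly the 3k-units/yr status quo
boosted = simulate(build_rate=0.04)   # roughly the 12k-units/yr proposal
print(baseline, boosted)
```

          With these made-up numbers, quadrupling the building rate lowers the price trajectory, but nowhere near below its starting value: the rate of increase falls yet stays well above zero, which is exactly the magnitude point.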

          Please note: since this is a really long comment, I’ve tried to use some formatting to help you skim it… if it seemed heavy-handed, sorry. I wish there were PREVIEWS for comments, but there aren’t.

        • AG said: “Given that you’re not offering a strong data-based argument, the presumption would be to stay with the first-order result until you can demonstrate otherwise….
          Or, to put it another way, “Econ 101” is not correct, but it’s the default.

          Phil replied: “… So, OK, believe the economists if you have to believe anyone. If you have to place a bet, bet that way. But until someone has an actual model that is appropriate to the system, I am going to remain very skeptical of what anybody tells me on this. If economists were really able to make accurate predictions about this kind of thing, they wouldn’t need two hands.”

          Andrew has a point, but I’m slightly more inclined toward Phil’s reasoning. My questions: How good are the economists’ models at making accurate predictions? Are there any economic models that take rent control into account? Perhaps what is really needed is to pit two (or more) models against each other — make some predictions and then gather evidence to see which does better (or for how long, or under what circumstances).

        • I said I would do something at my blog about the growth rate of rents, but the ACS data is top-coded, and this top-coding is a big deal in the SF data: from 2012 onwards, more than 4.5% of all observations are top-coded. So it’s nontrivial to get decent growth rates that aren’t dramatically biased downwards without running a big Stan model to impute rents based on some observed decay rate in the right tail of the non-top-coded data points, or something like that.
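          For what it’s worth, here is a minimal sketch of that tail-based imputation idea on fake data. The exponential tail, the $4,000 top code, and all other numbers are invented, and the real exercise would be a proper censored model in Stan rather than this closed-form MLE:

```python
import random

random.seed(1)

# Fake rents with an exponential upper tail, then top-coded at $4,000 the way
# ACS censors its highest reported rents.
true_rents = [1500 + random.expovariate(1 / 800) for _ in range(10_000)]
TOP_CODE = 4000
observed = [min(r, TOP_CODE) for r in true_rents]

# Estimate the tail decay rate above a lower threshold with the standard
# censored-exponential MLE: uncensored count / total observed exposure.
LOWER = 3000
exposure = sum(r - LOWER for r in observed if r >= LOWER)
uncensored = sum(1 for r in observed if LOWER <= r < TOP_CODE)
rate = uncensored / exposure  # should recover roughly 1/800

# Impute each top-coded rent as TOP_CODE plus a draw from the fitted tail;
# the memorylessness of the exponential justifies restarting the clock.
imputed = [r if r < TOP_CODE else TOP_CODE + random.expovariate(rate)
           for r in observed]

def mean(xs):
    return sum(xs) / len(xs)

print(mean(observed), mean(imputed), mean(true_rents))
```

          The top-coded mean is biased downward relative to the truth; the imputed mean recovers most of the gap.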

        • Daniel:

          Your story about the SF housing prices reminded me that in 1994, I think it was, my girlfriend at the time and I were thinking of buying a house in Berkeley. My income was $50,000 at the time, and she wasn’t working, and we had $50,000 in savings, so the real estate agent told us that we could afford a $250,000 house, with 20% down and a $200,000 mortgage. She assured us that the bank wouldn’t lend us anything more than that. We looked at some houses but they were in blah neighborhoods and were neither nicer nor larger than the apartment we were living in, so we didn’t bother. I do remember looking at a nice house with a pool (!) in a pleasant neighborhood—not too far from where Phil lives now, in fact!—but it was $400,000, so we couldn’t consider it. Who’d pay $400,000 for a house???

        • Andrew, there are literally two 22-year-olds with undergrad degrees in Economics and a couple of extra courses in Python programming working for Uber, earning as much as you do, bidding on that same house right now at $2 million.

          ;-)

        • “The path to equilibrium affects the final state of just about any system.”

          GS: Well…except for all locations in the state-space that are “in” the basin of attraction of some stable attractor…no? From the basin, all roads lead to the same stable-state…no?

        • With respect to the state variables, yes; but with respect to non-state variables, no. For example, in a piston, P, V, N, and T are the state variables, and color is not. But if there are some dye packets of various colors that could be popped by the piston handle, then the *path* will affect the color.
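          The tree itself makes a nice toy of this path dependence. The sketch below (entirely made up; bearings in compass degrees, north = 0) just encodes two rules: a standing tree falls along its net pull, and a fallen tree stays put:

```python
import math

def fall_bearing(removal_steps):
    """Final bearing of the treetop after removing ropes in the given steps.

    removal_steps is a list of sets of rope names removed together, e.g.
    [{"west"}, {"east"}]. Returns None if the tree is still standing.
    """
    attached = {"east", "west"}
    bearing = None  # set once the tree hits the ground
    for step in removal_steps:
        attached -= set(step)
        if bearing is not None:
            continue  # already on the ground; slackening more ropes does nothing
        # Net pull on the treetop: unit northward lean plus any remaining rope.
        east = ("east" in attached) - ("west" in attached)
        bearing = math.degrees(math.atan2(east, 1.0)) % 360
    return bearing

print(fall_bearing([{"west"}, {"east"}]))  # east rope still pulls: northeast
print(fall_bearing([{"east"}, {"west"}]))  # west rope still pulls: northwest
print(fall_bearing([{"east", "west"}]))    # both at once: straight north
```

          Relaxing both constraints in either order, or both at once, gives three different resting orientations, which is the point of the post.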

  8. What about variables that constrain the way system variables can interact but do not, themselves, interact dynamically with the system variables? For example, taking the kind of system in which I am interested (and so should anyone interested in behavior – really!), I can program a computer to arrange it so that a pigeon has access to grain for a few seconds if it pecks a Plexiglas “key” more than 120 s, on average, after a previous peck has produced grain (i.e., I can arrange a variable-interval 120 s schedule of grain presentation). The system consists of dynamically interacting variables. Two that should come to one’s alleged mind immediately are rate of key-pecking and rate of food-delivery. These variables are related dynamically – and, indeed, if they were not, the pigeon would not peck the key at all. But if I had programmed the computer such that the pigeon gained access to grain every 120 pecks, on average (i.e., a variable-ratio 120 schedule), the relationship between rate of key-pecking and rate of grain access would be different than in the variable-interval schedule. In both cases, though, the equilibrium state reached would depend, presumably, on the actual quantities during their interaction. So…the arrangement of the schedule is an independent-variable (it will, roughly speaking, be a “causal factor” in what transpires) but it is not a variable that interacts dynamically with the system variables (and there are, incidentally, others besides rate of response and rate of reinforcement – number of responses per reinforcer…call it “unit price”  and average reinforced inter-response time etc.). Instead, the schedule merely constrains the way the variables that do interact dynamically may interact in “arriving” at the equilibrium states.

    • As a general rule if the system behaves one way with the constraints and another way without them, then the order and manner in which the constraints are relaxed will affect the final state. I’m not sure I completely understand your example but I would expect that even if you always end up at a point where 2 minutes of pecking leads to 30 seconds of grain access, the amount the pigeon pecks and eats will still be somewhat dependent on the path through the parameter space during training. No pigeon will starve and perhaps none will eat to obesity and I’m tempted to say this might be a case where the path doesn’t matter much — won’t every bird just eat until it’s full? — but in fact I think the path will matter some. How much? No idea.

      I’m not sure if that’s what you mean.

      • I guess I was responding to: “Calling something a ‘constraint’ reflects the way we are thinking about the system, it’s not something inherent in the system. The ‘constraining’ forces exerted by the ropes that are holding the tree in place are just forces like any other, whether we think of them as constraints or not.”

        In your example, you are pointing to variables that interact dynamically with other variables (even though the problem is one of “statics”) and, thus, are just variables like the other variables. But in the examples I gave (and I’m sure that the issue pertains to arrangements other than schedules of reinforcement) the “constraining variables” are just that – they only serve to constrain the possible values taken by potentially important variables that interact dynamically with the other variables that constitute the dynamic system. For example, say I have a pigeon whose key-pecking has stabilized under a variable-interval (VI) schedule (pecks produce a few s access to grain when they occur some amount of time after the last food-delivery, and the time varies from food delivery to food delivery). I can measure certain variables like rate of response, rate of reinforcement, number of pecks/food-delivery, average reinforced inter-response time etc. and they will not vary systematically (i.e., the system is in equilibrium). Now, I change the schedule to a particular kind of variable-ratio (VR) schedule (a response produces grain access only if it is preceded by a number of other responses since the last food-delivery, and this number varies from food-delivery to food-delivery) in which the number of responses required per food-delivery is equivalent to the average under the VI schedule (or the last equilibrium session under VI determines how many pecks are required etc.). [Sorry…you actually have to think carefully about this to get what I’m saying.] Anyway…if you do this, what happens over sessions is that the rate of response (key-pecking rate) increases and the system eventually assumes a new state with its own values of the aforementioned variables.
So…changing the schedule from VI to VR changed only the way the critical variables may interact (after all, the bird’s behavior could hardly change just because I reprogrammed the computer controlling the bird’s “work space”) – the bird’s behavior changes (and a new stable-state emerges) only after the bird is put in the reprogrammed space (operant chamber) and the critical variables undergo interaction. So…you could certainly say that “schedule-type” is a variable that exerts control, but the schedule type does not interact dynamically with any relevant variables. It simply constrains what can happen. And that is not merely a matter of the “way we are thinking about the system.”

        • But of course, it *is* a matter of “the way we are thinking about the system” because the constraint variable doesn’t actually constrain the other variables, it actually gets compiled down to some machine code that alters the flow of electrons in the computer processor and memory, and these alter the flow of electrons to the solenoid which opens and closes the hatch…

          so, whether you think about the system as “pigeon and computer program” vs “enormous collection of biomolecules and micro-electronic machinery” changes how you describe the system.

          To a physicist that is very relevant, because the rope isn’t “a constraint”; it’s a couple of moles of Nylon molecules acting together.

        • Glen,
          Thanks for explaining the “alleged mind” comment, and thanks to Martha (Smith) for prompting the explanation. I had passed it over as just a sort of routine deprecatory comment that suggests that some people are too dumb to have a mind, without having recognized the point that nobody has “a mind”!

          I may be misunderstanding something (indeed I probably am) but the distinction you’re making between different kinds of constraints seems artificial. You say “the schedule does not interact dynamically with any relevant variables. It simply constrains what can happen.” But if it constrains what can happen with some of the relevant variables, then it does interact with them, obviously! Maybe not ‘dynamically’ but ‘dynamically’ doesn’t matter: if the system behaves differently under one set of ‘constraints’ than under another, then where you end up could depend on how you relax the constraints.

          Perhaps I’m still talking past you, in which case I apologize.

        • Phil: Glen,
          Thanks for explaining the “alleged mind” comment, and thanks to Martha (Smith) for prompting the explanation. I had passed it over as just a sort of routine deprecatory comment that suggests that some people are too dumb to have a mind, without having recognized the point that nobody has “a mind”!

          GS: Yeah…that’s the joke…it’s a double-entendre.

          Phil: I may be misunderstanding something (indeed I probably am) but the distinction you’re making between different kinds of constraints seems artificial. You say “the schedule does not interact dynamically with any relevant variables. It simply constrains what can happen.” But if it constrains what can happen with some of the relevant variables, then it does interact with them, obviously!

          GS: No…it “acts” on the system…as I said – but it doesn’t “interact” with it. The difference is important in terms of scientific strategy. This will be sort of hard to understand because the dynamic system (the behavior of animals interacting with “consequent events” – most people are too involved thinking about the animal as what is under study rather than its actions upon the world which is what enters into the dynamics) in question is not one people think too much about…especially in the context of the study of schedule-controlled behavior in the laboratory (oh…except when it can be sold as “neuroeconomics” or game-theory is applied). Let me mention briefly the topic of my long-ago dissertation – the title was “Schedule spaces: an empirical and conceptual analysis.” What I did was a generalization of something Skinner (as in “B. F.”) did, though it took me a little while to realize it mathematically contained his stuff. I created a 3-parameter schedule that cast previously disparate schedule classifications on a continuum. The advantage is that if you do this, then there is a description of the behavior of animals (including us), under the impact of different schedules, that can be couched in terms of variables that can be directly manipulated to obtain meaningful functions. An attempt to do the same in terms of the dynamic variables requires the testing of mathematical hypotheses rather than a strictly “phenomenological approach.” Both approaches have their place. So…I mentioned VI and VR schedules. The former schedule arranges the response-dependent occurrence of some event (say food-delivery) based on the passage of time, whereas the latter arranges the occurrence of some event on the basis of the emission of some number of responses. 
So…it makes perfect sense to say, “well, this is the function that relates rate of response [a really important dependent-variable when it comes to the study of behavior if you think about it] to VI parameter.” Or, “this is the function that relates rate of response to VR parameter.” To the extent that the empirical functions for all subjects examined have certain properties in common, one has a fact in the bag. But how do the two functions relate to each other? There is no basis for comparison at this direct phenomenological level. Hell – the parameter value for the VI schedule is in temporal units and that of the VR in terms of responses/reinforcer. So…you could obtain individual parametric functions for every schedule (and this wouldn’t involve hypotheses – simply manipulate the parameter in several subjects, find what similarities emerge for a schedule (like the shape of the function), and call it a fact-in-the-bag!) But – again – what about some kind of statement about all schedules?

          Well…there are variables common to all schedules that are probably functional parts of the dynamic system. Rate of response, rate of reinforcement, average (or median etc.) reinforced inter-response time, temporal distance between the food-producing response and the delivery of the food (in schedules that allow some temporal distance). Number of responses/reinforcer etc. You could also include some sort of measure of the discrepancy between all inter-response times (IRTs) and reinforced IRTs. All these variables interact to determine the levels of each other. That is, they comprise the dynamic system. So…now we know some likely important variables that are relevant to all schedules (even schedules where the variable is a constant…maybe 0). So…let’s just characterize this system. But how to do that? The system cannot be simply characterized by direct manipulation of the individual dynamic variables because when you change one of these, you change all or most of the others. If I alter rate of reinforcement under an interval schedule, response rate changes, number of responses/reinforcer changes. The average duration of the reinforced IRT could change etc. So…that’s the complexity of the situation. And we already know how to attack the problem – produce theories concerning the way these variables interact quantitatively (as in a system of differential or difference equations) and then test them. Yes…that is one way to do it. And behavior analysis is starting to do some of that – I have my own view on this – but behavior analysts have been raised to prefer a direct empirical approach – why guess about (theorize about, hypothesize about) how many teeth are in a horse’s mouth when you can just count them? In any event…both are viable approaches to producing laws of schedule-controlled behavior relevant across schedules. But I haven’t really described the approach to schedules that doesn’t involve dynamically-interrelated variables…

          So…back quickly to VI and VR schedules…the difference in rates of response maintained under equilibrium under these two schedules has been seen as theoretically important. In general, VR schedules maintain much higher rates of response than VI schedules. Why? The schedules are fairly similar – the animal responds and occasionally – unpredictably – food shows up contingent on a response. If I maintain responding under a VI schedule, and count the number of responses “paid” per reinforcer (either on average, or during any given equilibrium session etc.) and “play those numbers back” as a VR schedule, the response rate will increase. Similarly, if I maintain responding under a VR schedule, and record the times in between reinforcers, I can “play these back” as a VI schedule, and the response rate will decrease. There are theories concerning this effect involving dynamically-interrelated variables. That brings up mathematical hypothesis testing – as I pointed out. BUT…there is a continuum that encompasses both VI and VR schedules – that is, these different schedules can be seen as the same kind of schedule, and the function relating schedule parameter(s) and response rate can be directly obtained. This can be accomplished by looking at a two-parameter arrangement; one specifies the time that must pass before a response is reinforced (as in a VI schedule) and the other specifies how much time is subtracted from the timer controlling this (thus making it so responses “count towards reinforcement” as in a VR schedule). These two parameters produce a schedule space that has VI and VR schedules as limits. That is, aspects of behavior under schedules within this space MUST be a function of two known variables, and this function is obtainable simply by manipulating the variables and noting the regularities in dependent-variables.
This directly-empirical character is what is not possible with dynamically-interrelated variables as one cannot experimentally alter just one variable. But, of course, whatever theory one has concerning dynamic variables must explain the characteristics of the empirically-derived function. Sorry this was so long-winded.

          Phil: Maybe not ‘dynamically’ but ‘dynamically’ doesn’t matter: if the system behaves differently under one set of ‘constraints’ than under another, then where you end up could depend on how you relax the constraints.

          GS: This is a different issue than the one I brought up.

          Phil: Perhaps I’m still talking past you, in which case I apologize.

          GS: I think I get what you are saying. Anyway…I could have made the argument in the abstract, but I figured I’d use the example of schedule-controlled behavior – a field that is, I think, enormously important. After all, response-consequence relations are the core of complex human behavior (or so some claim), and all response-consequence relations imply scheduling. And, for example, think about what schedules are relevant to the “credit assignment problem” in AI. In a stream of behavioral events and other events, how is it that, speaking colloquially, the AI (or real animal) “knows which events are being caused by its behavior”? By comparing the effects of schedules that have, for example, various levels of response-dependency one gets this important aspect of behavior. What role does a programmed dependency between responding and events play? What about the temporal conjunctions that prevail – animals might not “detect dependencies,” so to speak, but may simply be “coincidence detectors.” Issues like that, you know. OK…I’m done.
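          The VI/VR “playback” contrast above can be illustrated by simulating just the schedule mechanics, with the bird reduced to a steady Poisson pecker. This says nothing about learning, only about how each schedule pays off a given response rate; the VI 2-min and VR 60 parameters are arbitrary choices of mine:

```python
import random

random.seed(0)

def reinforcers_per_min(pecks_per_min, schedule, minutes=600):
    """Reinforcement rate earned by a Poisson responder under VI 2 min or VR 60."""
    t, earned = 0.0, 0
    if schedule == "VI":
        armed_at = random.expovariate(1 / 2.0)   # mean 2 min must elapse
        while t < minutes:
            t += random.expovariate(pecks_per_min)  # wait for the next peck
            if t >= armed_at:                       # first peck after arming pays
                earned += 1
                armed_at = t + random.expovariate(1 / 2.0)
    elif schedule == "VR":
        need = random.expovariate(1 / 60)        # mean 60 pecks per food delivery
        pecks = 0
        while t < minutes:
            t += random.expovariate(pecks_per_min)
            pecks += 1
            if pecks >= need:
                earned += 1
                pecks, need = 0, random.expovariate(1 / 60)
    return earned / minutes

for rate in (10, 60):  # slow vs fast pecking, in pecks per minute
    print(rate, reinforcers_per_min(rate, "VI"), reinforcers_per_min(rate, "VR"))
```

          Pecking six times faster barely changes the VI payoff, since the interval timer is the bottleneck, but multiplies the VR payoff nearly sixfold: one mechanical reason ratio schedules can sustain much higher response rates.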

      • “Two that should come to one’s alleged mind”

        Sounds worthy of comment — but I’m at a loss for how to comment on it.

        GS: A sort of behaviorist’s joke…since the “mind” is just an explanatory fiction, there’s no such thing, despite the way we use ordinary language or how mainstream psychologists talk. But behaviorists use ordinary language as well as an esoteric technical language and so might say, “keep in mind that…” even though, technically speaking, they don’t think that the “mind” is a place where something can be kept. Hence, “alleged mind.” A behaviorist must use ordinary language sometimes, but it conflicts with the assumptions of their philosophy of science in general and their philosophy of a science of behavior in particular.

        • GS: I’d assumed what you said in your response; I was thinking of something more humorous when I said “comment”

  9. The conclusion is both confusing and confused.

    Take this,

    “The point is that you can know the equilibrium state the system would be in if it weren’t constrained in the first place” …

    True. In your own words, “the tree would have ended up lying on the ground with its top facing north”.

    … “but that doesn’t tell you very much about the state the system will be in if you remove the constraints.”

    Absolutely false. It tells us much if one removed the rope on the left side. In your own words, “The tree would end up on the ground with its top pointing northeast, not north as would have occurred if the constraining ropes weren’t there at all”.

    It remains false if one removes the other rope. I quote again, “of course the opposite occurs if we slacken the rope on the right: now the tree ends up with its top to the northwest.”

    In each case, you have a different system; each system is predictable. You yourself predicted their outcomes!

    The systems are (1) the tree alone, without any rope, (2) the tree with only the right rope, (3) the tree with only the left rope.

    “Calling something a ‘constraint’ reflects the way we are thinking about the system, it’s not something inherent in the system.”

    Sorry, but the combination of ropes is inherent in the system. Ask any physicist.

    • Some Anonymous Guy,

      You’re the second person (thus far) to understand my point without realizing it is my point.

      I am 100% in agreement with you because you are making exactly the point I am making. If you want to know how a system of a tree with two ropes behaves, you need a model of a tree with two ropes. You can’t say “I know how the tree with no ropes behaves, therefore I know what will happen if I get rid of these ropes.”

      I told you it’s obvious.

      • Phil,

        “You’re the second person (thus far) to understand my point without realizing it is my point.”

        Great! But which of your two points did I understand?

        1. “The point is that you can know the equilibrium state the system would be in if it weren’t constrained in the first place”, which is true. or

        2. “but that doesn’t tell you very much about the state the system will be in if you remove the constraints.”, which isn’t?

        Or maybe what I understood was your third point

        “Calling something a ‘constraint’ reflects the way we are thinking about the system, it’s not something inherent in the system.”

        Which isn’t true, either. Like I said, confusing and confused. :-)

        • Uh, now I’m confused.

          In the case of the tree, I do have a model of how the tree behaves if I relax the constraints. That’s the only reason I can predict what will happen if the constraints are relaxed.

          If I did not have that complete model, but instead relied on a model that did not include the constraints, I would get the wrong answer.

          If you agree that, in order to predict the behavior of the system if the constraints are relaxed, you need a model of the system that includes the constraints, then you and I are in agreement.

          In that case I don’t know what to make of your points 2 and 3, but perhaps it is not worth bickering about the semantics.

    I like the analogy very much, and I do think non-equilibrium models that take dynamics seriously don’t have the role in economics they deserve.

    Perhaps the Economists and Physicists can write down a complete San Francisco housing model and then, in the interest of setting up an epic social-vs-natural-sciences battle, policy makers in SF relax the constraint and we see what happens.

      • Dave, I had totally missed that! Very interesting idea. I like it a lot.

        I have been looking at an exactly opposite approach: a toy system with a finite (small) number of units and a small number of households, specifying an explicit demand function for each household for each apartment. This is not as hard as it seems, because the demand is a step function: above a certain price you don’t want the unit; below it, you do.

        So you have gone fully continuous and I have gone fully discrete.

        I think your approach is better in general — more generalizable, and we could even dream about some combination of functional forms that makes it analytically solvable.

        Anyway thanks for pointing that out here, since I had indeed missed it previously.
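        About the smallest version of that discrete toy I can write down: each household’s demand is a step function of price, each landlord’s supply likewise, and we scan the step locations for the lowest price at which the market clears. All dollar figures are invented:

```python
# Reservation prices: the step location of each household's demand function
# (max rent they'll pay) and each landlord's supply function (min rent
# they'll accept). Numbers are invented for illustration.
households = [3200, 2800, 2500, 2100, 1900, 1500]
units = [1600, 1800, 2000, 2600]

def clearing_price(households, units):
    """Lowest candidate price at which supply covers demand."""
    demand = lambda p: sum(h >= p for h in households)  # steps still "on" at p
    supply = lambda p: sum(u <= p for u in units)
    for p in sorted(set(households) | set(units)):  # rates only jump at steps
        if supply(p) >= demand(p):
            return p
    return None  # demand exceeds supply at every step: the market never clears

print(clearing_price(households, units))           # clears at 2500
print(clearing_price(households, units + [1000]))  # one cheap unit added: 2000
```

        Even this tiny example behaves the way the discussion suggests: adding a single unit moves the clearing step, by an amount that depends on where it lands relative to everyone else’s steps.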

        • Ultimately, if we plan to get a grant and do this professionally ;-) we should investigate the discrete version with the step-function demand/supply curves, the continuous version, and the agent-based version. But yeah, it’s a full-time job.

        • In some sense, the discrete case, when statistically aggregated across groups of similar-income people and similar quality/location houses, should wind up being the continuous version. So you could imagine specifying a bunch of discrete step functions that are not perfectly aligned at the exact same step location, and when you aggregate them, you’d wind up with a hinge like in Andrew’s recent post! For demand: above some price, no demand from the group (say a group of “service workers” with similar incomes); then a smooth transition zone as price descends; and then, below the transition zone, a very steep, maybe near-vertical line.

          Similarly for a group of houses: at low prices, landlords offer zero liquid supply; then there’s a transition zone up to 100% of the liquid capacity as prices rise; and beyond that transition zone, a flat maximum supply, because nothing is left; it’s all been offered.

          Of course, these shapes change through time, but using hinges and logistic functions as the functional forms could be a good limitation that makes things more tractable.
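          That aggregation is easy to see numerically: sum a few thousand individual step functions whose step locations are jittered around a common cutoff, and the group demand curve comes out as a smooth sigmoid (here a Gaussian CDF; a logistic would look nearly identical). The $1,800 mean and $200 spread are invented:

```python
import random

random.seed(42)

# Each household's demand is a step at its personal cutoff; cutoffs scatter
# around $1,800 with a $200 spread (all numbers invented).
cutoffs = [random.gauss(1800, 200) for _ in range(5000)]

def aggregate_demand(price):
    """Fraction of the group still demanding a unit at this price."""
    return sum(c >= price for c in cutoffs) / len(cutoffs)

for p in (1200, 1600, 1800, 2000, 2400):
    print(p, aggregate_demand(p))  # steps average into a smooth declining curve
```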

      • Yeah, I have personal knowledge of plenty of physicists on Wall Street doing economic dynamics. I suspect that the pull of Wall Street prevents much of it from leaking into academia. I do know there *are* economists interested in dynamics, but I also have the impression it’s a smallish niche within academic economics, though I don’t claim to have extensive knowledge.

        • I honestly hate this idea that all the best scientists end up on Wall Street and their superior scientific insights remain behind closed walls.

          I hope it’s not true.

        • Anon:

          It’s hard to imagine that all the best scientists end up on Wall Street, given that lots of scientists aren’t motivated by getting rich, don’t want to live on the east coast, think that stock trading is boring, etc.

        • Note: all my experience recounted below occurred on the West Coast (SF and Berkeley!), and none of those people traded stocks; they all worked on heavy-duty problems in portfolio optimization in the presence of dynamic movements of assets and things like that. I know “Wall Street” is actually a street in NYC, but a lot of finance occurs elsewhere, and it’s common to refer to all of it, colloquially, as Wall Street.

        • At this quantitative financial modeling level, there are plenty of really interesting dynamics issues, lots of interesting statistics problems, data-quality and data-analysis issues in the presence of incomplete information, plenty of massively parallel Monte Carlo problems to program, and often lots of lower-level support (you can sometimes call up a data processing department and just ask them to extract and clean a particular data set for you). So it’s actually a really intellectually stimulating environment that is anything but “boring”, at least *when you’re near the top of the pile*. It’s really boring for a PhD physicist to be in that data processing department doing SQL queries all day, but that’s not where the physicists end up.

          That being said, there is a kind of tiered hierarchy: the middle-aged ex-professors get all the tasty intellectual morsels, exploring things like time-variable liquidity altering the market impact of large trades, or the synchronization between market movements in oil companies and market movements in transportation companies, or whatever; the smart and eager Bachelor’s- or Master’s-degree guys do all the more boring heavy lifting, running last year’s model on updated daily data or adding a new filtration routine to detect transcription errors in the stock ticker; and the guys with IT degrees keep clusters of computers running, change out broken hard drives, and build ever bigger SQL databases…

        • I wouldn’t say “all” but I will say that it’s a common place to go after the incredibly small supply of academic jobs leaves you in an academic dead end. Most of the people I knew who wound up in finance out of high quality science paths came from astrophysics, solid state physics, pure math, and other areas where there are tens or hundreds of qualified people per new academic job offering.

          I interviewed at Barclays Global Investors in about 1997 and it was one high-end ex assistant professor after another, almost all out of physics or pure math. BGI, the people who created iShares, was sold in 2009 to BlackRock for $13 billion, back slightly before you could sell a completely “does nothing” company like WhatsApp for $22 billion to Facebook. At another finance company I interviewed at around 2001, the head of the company was an ex Principal Investigator at Los Alamos or some such thing (Lawrence Livermore Natl Lab? I can’t remember). They were used to working with multi-thousand-core computing systems to calculate the dynamics of individual subatomic particles in the nanoseconds after the initiation of nuclear explosions or inertial confinement fusion or whatever.

          One thing that is definitely the case about all of these people is that *they knew a lot about modeling dynamics*.

        • Also, I have some knowledge of recruitment to hedge funds out of places like Harvey Mudd, a high-end, exclusive hard-sciences undergraduate school down here in the LA area that several of my friends went to. It would surprise me if it wasn’t happening at Caltech too.

          All of this was before the 2008 implosion of the finance industry, so I bet there was a big slow-down between 2008 and say 2013, with now a lot of competition for this talent from places like Google, Facebook, and so forth, with everyone wanting to do “AI” or “Machine Learning” or whatever on network effects identifying individuals who heavily influence the buying patterns of other people (You want a new gaming PC / commuter motorcycle / jogging stroller / high-end mattress / complete gourmet cookware set? Call Joe/Jane, he/she knows all about that stuff… so advertisers who identify “Joe/Jane” and target him/her get a lot of leverage across many people), or dynamic pricing of Uber rides, or whatnot.

          I’ve heard it from people in pharma as well, basically saying that, moving out of academia into high-end companies, they wound up in environments where they were continuously collaborating with other intelligent people, where their intellectual abilities were valued instead of snarked at, where they spent less time worrying about money, and where they had access to all the tools they needed… I definitely think this has an effect on the science quality within academia.

  11. The econ jargon for this concept is path-dependence and multiple equilibria. Econ equilibria aren’t as well defined as physics equilibria; the falling tree could be in an equilibrium in the economics sense in that it is following a path towards the ground that is determined by a system of equations.

    • I think this is a fundamental misunderstanding by many in the field of Econ. Evidence on these threads suggests that grad students in Econ, and even professional economists, have a tendency towards a position you might call “everything described by equations is equilibrium”.

      In any area of dynamics, you can write down some dynamic equation dFoo/dt = A+B+C+D…, then ask yourself “what happens if Foo isn’t changing”, set A+B+C+D… = 0, and get yourself a simpler equation constraining A, B, C, D together.

      Econ has essentially taken this special case and placed it on a pedestal. Here’s the very dumb linearized version of economic dynamics:

      dPrice/dt = Liquidity * (Demand(Price) – Supply(Price))

      assume equilibrium: dPrice/dt = 0 implies either Liquidity = 0 or Demand(Price) = Supply(Price)

      assume transactions are occurring, and solve for Price such that Demand(Price)=Supply(Price)

      This is more or less the Econ 101 take on things. It’s *not* wrong, it’s just a very very special case.
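
      To make the special-case nature of this concrete, here is a minimal sketch of the toy dynamic above (hypothetical linear Demand and Supply curves, forward-Euler integration, all numbers invented). With positive Liquidity the price relaxes to the Econ-101 clearing price; with Liquidity = 0 the same curves produce no price motion at all:

```python
# Toy price dynamics: dPrice/dt = Liquidity * (Demand(P) - Supply(P)).
# Hypothetical linear curves; the clearing price is where they cross.

def demand(p):
    return 100.0 - 2.0 * p   # demand falls with price

def supply(p):
    return 10.0 + 1.0 * p    # supply rises with price

def simulate(p0, liquidity, dt=0.01, steps=5000):
    # Forward-Euler integration of the price ODE from initial price p0.
    p = p0
    for _ in range(steps):
        p += dt * liquidity * (demand(p) - supply(p))
    return p

# Analytic clearing price: 100 - 2p = 10 + p  ->  p = 30.
print(simulate(p0=5.0, liquidity=0.5))  # relaxes toward 30
print(simulate(p0=5.0, liquidity=0.0))  # frozen market: stays at 5
```

The equilibrium answer (Price = 30 here) is exactly the long-run limit of the dynamic model, which is the sense in which it is a special case rather than a wrong answer.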

      Looking at a nearby University of California school: http://registrar.ucr.edu/docs/16-17-ucr_general_catalog.pdf

      I see that you can get an undergraduate Econ major, and a Master’s degree, AND a PhD without ever taking more than 2nd-semester calculus from the math department, much less both an introduction to Ordinary Differential Equations and an upper-division course specifically in nonlinear dynamics with computing, or partial differential equations.

      On the other hand, in Physics or Engineering you simply cannot graduate from an undergraduate degree without maybe 10 semester-long courses involving ordinary and partial differential equations.

      Search the same catalog for the undergrad requirements of Mechanical Engineering: in the first 3 undergrad years you need 6 math courses, 3 physics courses, and several engineering courses that are specialized physics courses involving Newtonian dynamics; in the 3rd and 4th years you’d be taking thermodynamics, fluid dynamics, structural dynamics, structural vibrations, heat transfer, combustion, etc. All of them involve predicting the time-evolution of various things through models.

      I should say, to make it clear, that I am not here to get into a pissing contest about whose courses are better or who knows more. There are LOTS of important concepts that need to be visited in Econ which an ME would NEVER get around to learning about. But I do think that looking at the catalog gives a good idea of how little importance dynamics is given in Econ: apparently the Econ professors who set up course requirements in catalogs simply don’t think anyone, all the way through the PhD level, needs to take a course in ODEs or PDEs or numerical solutions of ODEs or whatever. And so it’s not surprising to me that when a physicist says “the market isn’t in equilibrium” to some economists, they have a hard time communicating with each other.

      All that being said, I am with Phil in that I totally understand the irritation that Economists have with “Physics arrogance” where a Physicist comes along and tries to school everyone in how it ought to be done. I don’t think that’s the way to go, but neither do I think Econ benefits from a virtually total lack of educating its students in basic concepts of dynamics.

      To give an example of something one might teach to econ undergrads, that I think Phil would appreciate. Suppose that you had smart meters on every household and smart thermostats. The utilities update their prices minute by minute and feed it back to thermostats through the smart meters. Suppose those thermostats made tradeoffs between comfort and price using very simple pre-programmed rules laid out in the lab assignment. Suppose we hypothesize a distribution of electrical generators with different efficiencies, and that the electrical utilities are required to sell their electricity at the average marginal cost of production due to regulations. Calculate a supply curve from those assumptions. Hypothesize a few other things of importance so that the problem becomes well posed (it’s a laboratory problem for undergrads, so we’ll have to spoon feed important bits to them…). If you change the rules on the thermostats from rule set A to rule set B what is the difference in time evolution of the production quantity and the price of electricity throughout the day on high heat days, on low heat days? If a fairly large mid-efficiency power-plant goes offline suddenly, what happens to the price and the production quantity under rule set A, under B? What happens if you assume a random collection of 3 or 4 different smart thermostat rule sets?

      I strongly suspect the average masters student in Econ simply hasn’t had educational materials such that they could formulate this problem into a set of coupled ODEs and simulate them in Matlab or R. If this is true, I think it has important implications for how effective Economists can be at helping with real world policy issues in constantly changing dynamic environments. I’m pretty confident that by 3rd year undergrad, an ME student could read that lab with all the simplifications laid out, and do that project as a final project for some course.
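
      For what it’s worth, here is a deliberately stripped-down sketch of the thermostat lab above (every number, rule, and function name is invented for illustration, not a real utility model): N thermostats turn the instantaneous price into a demand, the utility nudges the price toward the average marginal cost at the current load, and we integrate both with forward Euler over one hot day, comparing two thermostat rule sets:

```python
import math
import random

# Hypothetical population of 200 smart thermostats: 'comfort' is the
# demand (kW) each household would draw at a price of zero.
random.seed(0)
comfort = [random.uniform(2.0, 4.0) for _ in range(200)]

def total_demand(price, sensitivity):
    # Thermostat rule set: back off linearly with price, never below zero.
    return sum(max(0.0, c - sensitivity * price) for c in comfort)

def marginal_cost(load):
    # Hypothetical generator stack: average marginal cost rises with load.
    return 0.02 + 0.001 * load

def run_day(sensitivity, dt=0.01, steps=2400):
    # Forward-Euler integration over one 24-"hour" day: the posted price
    # relaxes toward the marginal cost at the current total load.
    price, peak_price = 0.05, 0.0
    for k in range(steps):
        heat = 1.0 + 0.5 * math.sin(2 * math.pi * k / steps)  # midday peak
        load = heat * total_demand(price, sensitivity)
        price += dt * (marginal_cost(load) - price)
        peak_price = max(peak_price, price)
    return peak_price

peak_a = run_day(sensitivity=1.0)  # rule set A: mildly price-sensitive
peak_b = run_day(sensitivity=5.0)  # rule set B: aggressively price-sensitive
print(peak_a, peak_b)
```

Swapping rule sets, knocking out a power plant (a sudden jump in `marginal_cost`), or mixing several rule sets are all one-line changes, which is exactly the sort of exercise the lab would assign.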

      We could argue why it is that things are this way, or whether they should be some other way, but I think the fact that they are this way seems to be fairly plain to see.

      • Keynes’s famous quote: “The long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is past the ocean is flat again.”

        suggests exactly that he also thought that economic dynamics were an important thing to be studied, and yet his quote is misunderstood all the time: http://www.simontaylorsblog.com/2013/05/05/the-true-meaning-of-in-the-long-run-we-are-all-dead/

        I think it’d be a great thing if every single Econ undergrad could, by the time they graduate, have done at least 2 semesters of working with models of economic dynamics such as the sketch I laid out above, so that they had the toolbox to understand the very basic known ways of describing dynamics. I don’t see how that could be a bad thing for society.

      • If you would like to learn about how economists think about “equilibrium”, a great place to start would be A Fine Theorem’s discussion of the contribution of the amazing Kenneth Arrow to what economists refer to as “general equilibrium”:

        https://afinetheorem.wordpress.com/2017/02/27/kenneth-arrow-part-ii-the-theory-of-general-equilibrium/

        (A Fine Theorem was one of the early commenters on Phil’s first post on YIMBYs. A wonderful blog; his stuff on economic theory is just great.)

        NB: differential equations are in the core toolbox taught to MSc/PhD students. Used more in macro than in micro. Not usually part of an undergrad curriculum unless it has a heavy math emphasis. But if you want to get a sense of where the debate is on what to teach undergrads – and the debate has been pretty heated since the 2008 crisis, with good reason – the CORE project is a good place to start:

        http://www.core-econ.org

        • Thanks Mark. You’re right, that blog post is great. I have heard of general equilibrium and had a cartoon version of what it means to economists. Reading your blog suggestion doesn’t disabuse me of that cartoon, though it points out that lots of good stuff has been done. The “general” part of “general equilibrium” seems to me to be about the fact that in truth the economy of the US is, say, 320 million people times some similar number of goods, and therefore requires vast quantities of coupled differential equations (10^15 or so, maybe?); and because what happens in the market influences different people’s desires for things and so forth, through time the supplies, demands, etc. can change, and we might want to get to a point of “equilibrium” without a division between “the small system I care about” and “everything else”, with “all else equal” assumed for the “everything else”. That’s a pretty good thing to have figured out back in the 1950’s.

          Similarly, a bar of steel is 10^23 molecules or 10^15 crystalline domains. Surprisingly there are useful ways to discuss the behavior of a bar of steel that don’t involve 10^23 ODEs. And I say surprisingly absolutely without irony. It’s kind of amazing that we get anywhere. Nevertheless, we do, and we can talk about things like the equilibrium shape of this bar when loaded according to some static loads. But beyond that, we could also imagine explicit changes in those loads, and stop talking about the equilibrium shape, and start talking about the dynamic shape as a function of time.

          Deep into that very nice blog article I find the following paragraph: “Worse yet is stability, as Arrow and his collaborators (1958, Ecta; 1959, Ecta) would help discover. Even if we have a unique equilibrium, Herbert Scarf (IER, 1960) showed, via many simple examples, how Walrasian tatonnement can lead to cycles which never converge. Despite a great deal of the intellectual effort in the 1960s and 1970s, we do not have a good model of price adjustment even now. I should think we are unlikely to ever have such a theory: as many theorists have pointed out, if we are in a period of price adjustment and not in an equilibrium, then the zero profit condition ought not apply, ergo why should there be “one” price rather than ten or a hundred or a thousand?”

          Which basically reiterates in Econ terms what I am trying to say here. Why SHOULD there be one price? Why should economists even think that “figuring out the one price” is the goal? Do engineers ask “what is the one position of the bar of steel?” or do they ask “what is the position at time t of the molecules of the steel bar that started near point x?”

          Pretty obviously there isn’t “one” price for a 1 bed flat in SF. Why SHOULD there be one temperature of the atmosphere in all of Los Angeles County? It’s more useful to model the weather as a dynamic process than to prove “there exists a simultaneous temperature vector (a simultaneous clearing price in the market) at each point in the atmosphere”

          Throwing up your hands and saying “there is no ONE price” just shows that Economics hasn’t gotten as far as Wall Street has. Wall Street is very happy that there isn’t “one” price, because they get to arbitrage between the different prices and make money. It’s no surprise they hire a lot of dynamicists.

          I think that CORE site seems great! Their chapter 9 bullet points are like everything I’m saying here:

          How prices change, and how markets for labour and financial assets work:

          • People take advantage of rent-seeking opportunities when competitive markets are not in equilibrium, often eventually equating supply to demand
          • Excess supply—unemployment—is a feature of labour markets even in equilibrium
          • Prices are determined in financial markets by trading mechanisms and can change from minute to minute in response to information and beliefs
          • Price bubbles can occur, for example in markets for financial assets
          • Governments and firms sometimes set prices and adopt other policies so that markets do not clear
          • Economic rents help explain how markets work

        • Then there’s stuff like this: http://www.econ.yale.edu/~dirkb/teach/pdf/j/jensen/2007-thedigitalprovide-slides.pdf

          I love this.

          Clearly, there wasn’t *one* price for fish, until cell phones hit. And then the super noisy transaction data became way less noisy and clustered around a single price with tiny fluctuations.

          Dynamically speaking, the dissemination of information becomes much faster, so the convergence to a single price occurs much more rapidly. Like taking a flexible musical instrument string at very low tension, so that it wobbles all over the place and tightening the crap out of it until it barely deflects at all for the same pluck.
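
          The string-tension analogy can be sketched numerically (a hypothetical toy model, not the Kerala fish data): several local market prices receive independent shocks each step, while information flow pulls each one toward the cross-market mean at rate k. Raising k, as cell phones did, shrinks the steady-state dispersion:

```python
import random

def price_spread(k, markets=8, steps=4000, dt=0.05, seed=1):
    """Std. dev. of local prices after integrating shocks + mean-reversion.

    k is the information-flow rate pulling each local price toward the
    cross-market mean; all the numbers here are hypothetical.
    """
    rng = random.Random(seed)
    prices = [10.0 + rng.uniform(-2.0, 2.0) for _ in range(markets)]
    for _ in range(steps):
        mean = sum(prices) / markets
        prices = [p + dt * k * (mean - p) + rng.gauss(0.0, 0.05)
                  for p in prices]
    mean = sum(prices) / markets
    return (sum((p - mean) ** 2 for p in prices) / markets) ** 0.5

slow = price_spread(k=0.1)  # word-of-mouth: loose string, big wobble
fast = price_spread(k=5.0)  # cell phones: tight string, tiny wobble
print(slow, fast)
```

Under the same shocks, the fast-information case clusters tightly around a single price, just as the post-cell-phone transaction data did.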

        • Or like this: https://blogs.harvard.edu/michaellaw/2013/11/11/optimal-control-theory-embracing-it-in-monetary-policy/

          Indicating that economics has finally discovered things that engineering and physics have been doing since the 1950’s.

          If only unemployment and standard measures of inflation were actually as meaningful as implied by making them the only thing the Fed should care about.

          How much better would outcomes have been for individuals if Yellen had used, say, (GDP per capita) × (fraction of GDP going to wages) × (labor force participation rate) / (Consumer-Expenditure-Survey-based cost of living per person per year) as her human-welfare control variable? Something like this:

          https://fred.stlouisfed.org/graph/fredgraph.png?g=dPZw

          And used CPI + Stock Market Capitalization / GDP as her inflation variable?

          https://fred.stlouisfed.org/graph/fredgraph.png?g=dQ0Q

          By which two measures we’ve had about 40% inflation since 2010, and a 10% improvement in wages.

          But that’s just complaining about policy. As far as methodology goes, the dynamic control approach is the right way to do control (if control is something we’re going to do), we just need to get on board with optimizing and controlling the right quantities.

        • These ideas are quite old. There’s a fascinating summary of differences in dynamic optimization methods and how they possibly led to differences in policy prescriptions at Beatrice Cherrier’s blog, https://beatricecherrier.wordpress.com/2014/03/24/economics-as-engineering-iii-carnegie-stories/

          A common, though certainly not universal, graduate textbook is “Recursive Methods in Economic Dynamics,” which is largely an introduction to discrete time control theory.

        • Sam: I love that article. I want to say more about it. Will read a little more and post something tomorrow. It reminds me of an issue I see in Engineering that you might call “the problem of the too-successful approximation”.

        • Glad you liked that blog post and found it helpful, and also that you like the look of the CORE project. I should have said this above, but I have a peripheral connection to the CORE project. Full disclosure and all that.

      • “All that being said, I am with Phil in that I totally understand the irritation that Economists have with “Physics arrogance” where a Physicist comes along and tries to school everyone in how it ought to be done. I don’t think that’s the way to go, but neither do I think Econ benefits from a virtually total lack of educating its students in basic concepts of dynamics.”

        This reminds me of something I recall hearing from a biologist a number of years ago, when the physics job market was poor and some physicists turned to biology. The biologist said that it typically took the physicists N years (I forget what N was) before they caught on to what biology was and just how it was different from physics.

  12. @Sam Baily from http://statmodeling.stat.columbia.edu/2017/05/21/obvious-fact-constrained-systems/#comment-493462

    Thanks for that fascinating article on the history of Econ by Beatrice Cherrier. I myself am extremely skeptical of the idea of having the Fed be any kind of “control system”. Even if we can imagine a world where effective control is possible and a good causal model of the macroeconomic variables existed, it seems like a huge opening for regulatory capture, rent-seeking, and gaming of the system by the finance industry and so forth.

    But I seem to be mostly swimming upstream on that, and the fact is the Fed will continue to try to “engineer” the economy by controlling various variables. So the question is: what would be a good way to do that, and how would dynamics and engineering ideas contribute to the choice of method?

    It’s interesting, however, to see in Cherrier’s history the role of two different styles of research in Econ / Operations Research / Systems Engineering. On the one hand you have the very “practical” approach of dynamic programming and the simplex method and whatnot; on the other, a more theoretically grounded concept of optimal control; and the difference between these may have driven fights about what should or shouldn’t be done. No question that optimal control would have been hard to compute, with uncertainty and so forth, up until very recently, so in some sense simplified versions would be appealing.

    Now, this all reminded me of the role of Linear Elasticity and the Finite Element Method in Engineering. On the one hand, we have Newton’s laws, which are for Engineers pretty much exact (let’s ignore near light speed travel and quantum issues). And on the other hand, you have systems with a gazillion molecules and essentially no information about the state (say a pound of quartz sand in a testing machine). So, we developed things like Elasticity which solved a tremendous number of Engineering problems, and then to solve Elasticity problems we developed the Finite Element Method… and then decades later students learn Linear Elasticity and FEM as if they were exact. In other words, like Newton’s laws. The equations are known, and the only thing we don’t know is what values to put into the equations, and our job is “constitutive modeling” and all this other baloney.

    Well, that’s just WRONG. Sure, Linear Elasticity and FEM are hugely, hugely useful, and you can extend them with some constitutive models to make them even more useful. But they’re explicit approximations, and constitutive models are pretty much hacks on top to tweak the approximation. The problem is, because many students see them as if they were exact, true equations of physics, the output of some of these models is taken to be predictive when it shouldn’t be. The whole thing is made more difficult by the fact that it’s hard to learn, so any shortcut a student can take is usually taken, and if that means memorizing the equations and doing it all by rote, some of them will.

    So, that’s what I call the problem of the “too good approximation” when an approximation becomes good enough that down the road it is eventually learned as *god’s truth* and then the mechanism by which the approximation came about and the thing that’s being approximated can be sort of lost to esoterica.

    In some sense, I think Quantum Mechanics is the same thing. So tremendously good at predicting that we forget that it is in some sense an approximation to something deeper (see David Bohm and Basil Hiley’s “The Undivided Universe”, in which they argue convincingly that QM is a theory with an artificial separation between the macro classical and micro quantum, and that any “real” theory has to be deeper, with some kind of universal wave function for everything). Students these days learn QM in terms of a set of axioms!

    I think in Engineering we have our Linear Elasticity and FEM, and in Economics we have Equilibrium. Equilibrium is of course just the simplified equations of dynamics for the special case of d/dt = 0, but I don’t think that’s what a senior undergraduate Econ major would say. They wouldn’t describe the idea of “fast” dynamics and “slow” dynamics, or the method of multiple scales, or talk about how your observational time-scale determines whether you can consider arbitrage “near instantaneous”, so that Supply(Price) = Demand(Price) is a good approximation for all time, or whether arbitrage and information propagation “actually take quite a while” relative to your observation timescale, so that you need to actually model the changes in Price over that duration in terms of how quickly information flows between actors in the economy. No, they’d probably just tell you the Guido Sarducci version:

    https://www.youtube.com/watch?v=kO8x8eoU3L4
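
    The fast/slow point is easy to demonstrate with a toy model (all quantities hypothetical): let the price relax at some rate toward a clearing price that itself drifts slowly. When relaxation is much faster than the drift, the quasi-static Supply = Demand answer tracks the true price closely; when the two timescales are comparable, it doesn’t:

```python
import math

def max_gap(relax_rate, steps=10000, dt=0.01):
    """Worst gap between the dynamic price and the quasi-static answer.

    The clearing price p_star drifts slowly (a hypothetical sinusoid);
    the actual price chases it at relax_rate (the 'arbitrage speed').
    """
    price, worst = 0.0, 0.0
    for k in range(steps):
        t = k * dt
        p_star = math.sin(0.5 * t)  # slowly drifting Supply=Demand price
        price += dt * relax_rate * (p_star - price)
        worst = max(worst, abs(price - p_star))
    return worst

print(max_gap(relax_rate=50.0))  # fast arbitrage: equilibrium is a fine summary
print(max_gap(relax_rate=0.5))   # comparable timescales: it is misleading
```

This separation-of-timescales judgment is exactly what the method of multiple scales formalizes: the equilibrium answer is the limit of fast relaxation, not a free-standing fact.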

    • Daniel,

      You need to be a bit careful about generalising from the (very nice) blog post by Cherrier. It’s about the history of macroeconomics, not about economics in general. Macro is just one field of many.

      If you want to get a sense of where economics is in general, besides “A Fine Theorem” I recommend reading Noah Smith (trained as a physicist, then got an econ PhD, now a Bloomberg View commentator), either at his blog, sample here where he discusses NIMBYism and Phil’s posts:

      http://noahpinionblog.blogspot.com/2017/05/the-nimby-challenge.html

      or in his Bloomberg View columns, sample here:

      https://www.bloomberg.com/view/articles/2014-12-31/heres-what-economics-gets-right

      (also with an SF link, namely McFadden’s ex ante analysis of the likely impact of BART)

      • Mark: I agree with you that the history of Macro is only a history of Macro, but we were discussing my point about Yellen turning to engineering ideas from the 1950’s to decide how to go about doing the job she’s been tasked with by the government, namely controlling CPI and unemployment (and for the moment, let’s set aside my complaint that this is probably a bad idea, along with my specific example ideas about alternative variables to control).

        Noah Smith’s example articles are interesting, but I think he misreads Phil. First, I think labeling Phil a NIMBY is unwarranted. I haven’t anywhere heard Phil say that he opposes building, only that he doesn’t think the YIMBY arguments for how it will reduce prices for service workers are valid.

        Second, I don’t think Noah does a good job of characterizing Phil when he says

        The first thing to note is that NIMBYs think that a house’s price is defined when it’s built – almost as if the price is built into the walls. Price writes:

        [N]ew high-rise apartments are going in that have hundreds of apartments each, typically with a rent of $4000 – $8000 per month. If you let a developer build “market rate” apartments, that’s what they’ll build.

        Phil is just quoting the spot price, the initial price he expects these to go for on the market. The price of the house isn’t built into the walls, but the initial price is whatever the initial clearing price is, and once it’s observed, it’s a fact of life that on day X the transaction price was P; and I think guessing that P will be in the $4–7k range is probably right. Top-coding in the ACS is slightly below $4k, and a full 5% or so of ACS households rent at the top-coded value.

        Next Noah says:

        Later, Price repeats the fixed-price idea when he writes:

        Sorry, no. If the ‘market rate’ for newly developed apartments is substantially higher than the median rent of existing apartments, then building more market-rate apartments will make median rents go up, not down.

        That sounds like simple math. And if the price of an apartment was somehow built into its walls and floors, it would just be simple math. In fact, though, it’s wrong. Here’s why. …

        And then proceeds to describe what seems likely to be a multi-month to multi-year relaxation period in which the distribution of prices re-adjusts *while all else remains equal*. Most apartments rent on a one-year lease, so even non-rent-controlled apartments don’t equilibrate to new conditions in less than 6 months to 2 years. But I suspect that renting out an additional 500 luxury apartments will take only a few months; there’s plenty of demand! Also, Phil has been explicit about being interested in the distribution of observed rents, not the distribution of spot prices for the marginal turnover.

        So, it’s perfectly compatible that both are right. Initially, simply mechanically by increasing the number of apartments available, and renting them at high rents, the rent distribution shifts up. Now, through time some stuff is going to happen, but on the time scale of 6 months to 2 years, we have a lot of dynamics going on, including changes in jobs, wages, and demand for apartments. So *holding all those other dynamics constant* the equilibrium price *would* shift downward, except *all those other things aren’t constant*. From the perspective of the equilibrium price model, we shift the supply of high quality apartments at $4000/mo price upwards, then redraw all the supply curves for all the other qualities, then redraw all the demand curves for all the other qualities, then calculate a whole bunch of points, one for each quality level, and with all the moving parts involved, I think Noah’s assessment is inappropriately simplistic.

        Later he says:

        If you think that demolishing luxury apartments would have this latter result, then you should also think that building more luxury apartments would do the opposite

        Which is also I think inappropriately simplistic. Why can’t the price of apartments in SF just continue to go up no matter what you do? Especially thanks to The Fed + Finance industry + Tech?

        Evaluating a dynamic market *as if* it were a tiny perturbation to a static market is appropriate *when it is a tiny perturbation to a static market*. So for example if no tech boom had happened, and in 2011 they had built 500 more luxury apartments, I suspect Phil would have a different prediction for the dynamics in the 3 years following the opening of those apartments, and that would be appropriate because in fact the dynamics would have been a lot different.

        So I don’t think your examples are disabusing me of the notion that Equilibrium is the “too good approximation”. It’s used even when it’s inappropriate, because it is taken as a bit more fundamental than it really is. The fundamental things are bids, asks, clearing prices, and volume, each responding in time to external factors like tech bubbles, growth in job offerings, and so forth.

        So,

        1) Economics is *really important*. We can absolutely make millions of people miserable by doing a bad job of economic policy, or do the opposite with improved policy.

        2) Lots of Economists are plenty intelligent enough to do a good job at their specific research. I am not impugning individuals in any way.

        3) There *are* people talking about dynamics, I know that’s true. And elsewhere we’ve even seen suggestions to change the undergrad curriculum to discuss dynamics.

        4) I wouldn’t be bothering with all of this if I didn’t have a lot of respect for the topic, but I don’t claim to have extensive knowledge of the state of the profession, and I really appreciate the links.

        5) But, I still think explicit models of dynamics are the only ones that will tell you what occurs in a highly dynamic environment, and they seem to be an outside-the-box idea even to Physicists turned Economists like Noah.

        • Noah’s thought experiment addresses ∂²P(1)/(∂Q(3) ∂t), the rate of change with respect to the supply at quality 3 of the time rate of change of the price at quality 1. This can be negative while dP(1)/dt remains positive (i.e., the prices at quality 1 still go up in time).

        • I think you should post this at Noah’s blog. He’s pretty responsive, and after all you are commenting directly on his post.

          I disagree with you about #5 as a general point (you make a strong claim when you use the word “only”). Modelling fixed points and what will move them around can [sic] be very informative even if the dynamics that justify their local stability aren’t clearly spelled out or explicitly modelled. And in contrast to what you suggest, modelling dynamics explicitly is not a “thinking-out-of-the-box” approach in modern econ. (Just because a mechanic uses a specialised tool only from time to time doesn’t mean s/he is unaware of its presence in the toolbox or is unwilling to use it when s/he thinks the situation requires it.)

          But if you repost at Noah’s blog (a) we can take it up there, and (b) you will probably get more, and better, comments there than you would from me anyway.

  13. I don’t have anything to say about the metaphorical point of this post, but I did want to mention how many fewer trees there are now in the San Fernando Valley than when I was a kid. Some of that is due to apartments replacing houses, but mostly it seems to be due to homeowners deciding their trees are too much of a hassle due to roots getting in pipes and dangers from falling over, and thus either uprooting them or not replacing them when they die.
