He says it again, but more vividly.

We’ve discussed Clarke’s third law (“Any sufficiently crappy research is indistinguishable from fraud”) and that, to do good science, honesty and transparency are not enough.

James Heathers says it again, vividly.

I don’t know if Heathers has ever written anything about the notorious study in which participants were invited to stick 51 pins into a voodoo doll representing their spouses—no, I’m not kidding about this one, it was just done by some bigshot professor who calls himself a “myth buster,” has over 199 peer-reviewed articles (he said “over 200” but it turned out that at least one was copied), and has testified before Congress, so no big deal. That voodoo study is hilarious, worth a good rant or two if Heathers has it in him. . . .

45 thoughts on “He says it again, but more vividly.”

  1. Excellent statement by Mr. Heathers. Identifying what happened is easy. Identifying why it happened is mostly impossible.

    In structural geology (faults, folds, and such), we think of three basic levels of analysis: geometry (shape), kinematics (motion), and dynamics (forces). I love this. It can be roughly generalized as: what it is, how it got that way, and why it got that way. Each successive level of analysis carries more uncertainty. Dynamics, the why, is by far the most uncertain and often can’t be pinned down at all.

    I wonder, though, what Mr. Heathers might say about the soup bowl experiment – or lack thereof, as the case may be. I suppose, rather than imbuing Wansink with motives, he would say there’s little evidence that the experiment actually occurred.

  2. My only complaint about that article was the implication, so widespread among nonacademics, that “department chair” is some kind of honor rather than a terrible misfortune that afflicts those who cannot sufficiently disguise their competence: “He could make department chair in two years if he keeps his productivity up.”

    • + 1/2 — on the one hand, there are a lot of people (maybe most academics) who think that being department chair is “a terrible misfortune that afflicts those who cannot sufficiently disguise their competence”, but there are also some who think it is a position of prestige or power or something else they crave.

    • If I had the opportunity to grab the reins of a research organization I’d jump at the chance. It’s a chance to go beyond the contribution of individual papers and help build the future of an institution and a discipline. I think a lot of people would jump at that.

      • I don’t think department chairs do have the reins of research organizations. The PIs themselves control their own research, and to the extent anyone controls them it’s the granting agencies and the tenure committee. What the department chair can do is affect decisions like hiring, bridging funds between grants, who teaches what, how to spend some small amount of money on RAs and TAs… sometimes there are department-specific donations… But it’s very second-order stuff… The department chair wants to increase the emphasis on topic X… so they can maybe affect a couple of hires over a 5 year period, and a couple of tenure cases… but they certainly don’t entirely *determine* those…

        In some departments the chair is something that rotates among the faculty. Each person has to take a turn taking out the trash and washing the dishes essentially.

        • The DH doesn’t have absolute control but can be very important in dept focus and – critically – in outside fundraising. In disciplines w/ industry support, the potential for external funds is substantial. We’re talking major research funding and endowed chairs. So, yes, the DH can be very important.

  3. We have to expect that society and science will benefit far more from further investment in the career of one of his characters than in the other’s. So we remain obliged to assess questions of research quality and fraud.

    • That’s right, everyone is likely to put some kind of bug or data coding error or whatever into something they do at some point. When these are randomly distributed around without being actively hidden they will get discovered eventually and corrected.

      But someone who purposely fakes data, or who keeps using bad research practices because they don’t know better, were taught they were correct, or don’t really care… that person needs to be removed from science because they are actively polluting. It’s the difference between a boater who spills a couple drops of fuel overboard while fiddling with the fuel cap and the captain of the Exxon Valdez.

  4. Her latest study is a model of good scientific practice and prudence.

    [..]

    The only problem is: at a crucial point in an analysis, she used the common logarithm rather than the natural logarithm. The numbers for the control group are therefore a lot lower than for the intervention group.

    The NHST-motivated studies focused on comparing treatment to control aren’t good science to begin with… she should be measuring how the phenomenon changes over time under various conditions and modelling that process. The different conditions should be resulting in different model parameters with theoretical meaning.

    And what procedure is messed up by using the wrong base for a logarithm? Finally, that is something that should be discovered by reviewers when they do a sanity check of the result. Its persistence indicates a problem with the publication process (no requirement to share code, lazy reviewers, etc.).
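
    For concreteness, a minimal sketch (with made-up numbers, nothing from the article) of why mixing up the two bases produces a systematic gap that a reviewer sanity check should catch:

    ```python
    import numpy as np

    # Hypothetical fold-changes for one group (not from the article).
    values = np.array([2.1, 1.9, 2.3, 2.0])

    mean_ln = np.log(values).mean()       # natural log
    mean_log10 = np.log10(values).mean()  # common log

    # log10(x) = ln(x) / ln(10), so every common-log number is smaller than
    # the corresponding natural-log number by a constant factor of ~2.303.
    # If one group is summarized in one base and the other group in the other,
    # the first group looks systematically "lower" by exactly this factor.
    print(mean_ln / mean_log10)  # ~2.303, regardless of the data
    ```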

    • Do you get tired writing this same comment on every blog post? It’s not constructive in any way, and the vague methods you describe are not suited to all settings.

      In some contexts, what you describe simply isn’t useful. This is what people try to do in structural economics and, quite frankly, we learn almost nothing of empirical value from those studies. NHST-motivated studies done well can provide insight, that’s been proven time and again in economics. Are there bad studies? Of course. But social psychology is low-hanging fruit, and I really don’t know why it’s brought up so often on this blog; it’s boring and terrible work.

      Whether structural modelling can provide useful insights outside the domains of the hard sciences (that do not have many of the issues inherent to social science problems), that remains to be seen. I actually think that adding theory to empirical economics papers incentivizes bad empirical work in practice (in theory, no pun intended, it should help, I would definitely concede that). Once a theoretical framework is presented, the reader tends to only think about the data presented within that framework. Further, p-hacking happens just as often when you have a theoretical framework as when you don’t. I have RA’d for professors who were tied to a theoretical framework (that they had used successfully in past work, for example), and we just kept massaging the data until the parameter estimates agreed with the model’s theoretical predictions. That’s not NHST but it is terrible work just the same. A final point, kind of just specific to publishing, is that if you have a nice theoretical framework and can *just* get the data to agree with the model, you can have a nice publication. Conversely, if your analysis is mostly empirical, the scrutiny of the empirical work is much more intense. The result, I think, is that you have these papers with pretty complicated and interesting theoretical models coupled with very mediocre (or worse) empirical work getting published in top empirical economics journals.

      I think some readers on this blog who have backgrounds in physics or biology may be severely overestimating the usefulness of complex models in domains like economics. In economics, we can’t explain jack****; that is, we are dealing with models that have R²s of like 1-5%. It’s not about fit the way it is in some other domains. Choosing the parameters that make the model fit the data the best is pretty useless, as odd as that may sound.

      • Do you get tired writing this same comment on every blog post?

        Yes, it is extremely annoying seeing people repeat the same wrong thing over and over.

        NHST-motivated studies done well can provide insight, that’s been proven time and again in economics.

        Please give a single example. I assure you any insight is totally incidental to the stated goals of the study.

        • The arrogance is a bit unbecoming.

          Let’s keep it simple. What is your issue with Card’s study of the Mariel Boatlift? If you aren’t familiar (but I’m sure you are), he looked at what happened to low-skilled wages in Miami in comparison to a few other reference cities in the months following the large influx of Cuban immigrants in 1980 (about 120,000 immigrants). Since the original study, some statistical issues with the analysis have been raised. But let’s focus on your theoretical qualms with using the “NHST” paradigm (not sure what this even means, by the way; is it any study that uses a p-value?) in this context.

          And please don’t just say “where’s the model?”. These are useful reduced form estimates that can be used to inform the theoretical literature. In fact, a very simple model of the labour market will map these estimates directly to a critical parameter – wage elasticity of labour demand.

          link: http://davidcard.berkeley.edu/papers/mariel-impact.pdf
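
          For concreteness, here’s the kind of back-of-envelope mapping I have in mind, as a sketch assuming a constant-elasticity demand curve (the numbers below are placeholders, not estimates from the paper):

          ```python
          # Along a constant-elasticity labour demand curve L = A * w**eta,
          # d(ln L) = eta * d(ln w), so eta = d(ln L) / d(ln w).
          # Plug in a reduced-form wage estimate and the size of the supply shock.
          labour_force_increase = 0.07    # log-change from the influx (placeholder)
          estimated_wage_change = -0.005  # reduced-form d(ln w) estimate (placeholder)

          eta = labour_force_increase / estimated_wage_change
          print(f"implied wage elasticity of labour demand: {eta:.0f}")
          # A wage change near zero for a sizeable supply shock implies a very
          # elastic (large magnitude, negative) demand for low-skilled labour.
          ```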

        • What is your issue with Card’s study of the Mariel Boatlift? If you aren’t familiar (but I’m sure you are), he looked at what happened to low-skilled wages in Miami in comparison to a few other reference cities in the months following the large influx of Cuban immigrants in 1980 (about 120,000 immigrants).

          Never heard of it.

          1) What is the insight people believe has been gained from this study?
          2) I just skimmed it but this study looks descriptive to me.
          3) This type of data doesn’t look conducive to learning much about unemployment/wages as a function of immigration. It just looks too messy and there is too little of it (why only 1979-1985 for a datasource around since the 1940s and paper published in 1990?).

          But in general any model that assumes the economic system of the US behaves anything like a “free market” is going to have a lot of problems matching reality. Most of your time should be spent modelling whatever the government and central banks are doing to affect the data rather than “supply and demand”, etc.

          But let’s focus on your theoretical qualms with using the “NHST” paradigm (not sure what this even means, by the way; is it any study that uses a p-value?) in this context.

          This is any study that tests a default hypothesis of “no difference” or “no correlation”. This hypothesis is always false. When sufficient funding (~sample size) is attained to reject the hypothesis at the chosen significance level, some conclusion is drawn about the researcher’s real hypothesis (i.e., an affirming-the-consequent error is made).

          In general I say go ahead and use p-values, but use them to check the predictions of your actual research hypothesis, not a default null of zero difference.
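
          To make the point concrete, a quick simulation (all numbers made up): any nonzero difference, however trivial, becomes “significant” once the sample is large enough.

          ```python
          import numpy as np
          from scipy import stats

          rng = np.random.default_rng(0)

          # Two groups whose true means differ by a scientifically trivial amount.
          true_diff = 0.01  # 1% of a standard deviation

          for n in [1_000, 100_000, 10_000_000]:
              a = rng.normal(0.0, 1.0, n)
              b = rng.normal(true_diff, 1.0, n)
              t, p = stats.ttest_ind(a, b)
              print(f"n per group = {n:>10,}   p = {p:.3g}")

          # The null of exactly zero difference is false by construction, so a
          # big enough sample will always reject it; the rejection by itself says
          # nothing about whatever substantive hypothesis motivated the study.
          ```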

        • I looked at this paper, it appears to be about the effect of a particular influx of low skilled Cuban immigrants on the wages of low skilled immigrants in Florida.

          And yet, nowhere is there a graph of wages through time from say 3 years before to 3 years after the influx.

          Nowhere is there a graph of estimated wages if the influx had not happened.

          Nowhere is there a graph of supply of low skilled labor through time.

          Nowhere is there a graph of regional or US wide trends in low skilled labor

          Nowhere is there a graph of migration patterns among low skilled laborers

          It’s like they didn’t have the *first clue* how to do this research.

          But of course they *did* have the first clue, it’s just that economics as a field has its head in the sand and actively discourages good research practices because of some kind of macho thing about graphs not being rigorous or such baloney. My impression is that more recently things are better in econ, but it’s still got a crazy bias against good practices.

        • Table 3 in this paper seems to describe logarithm of wages through time broken down by ethnicity, where as far as I can tell (a table is about the *worst* way to present this data) the changes in wages are basically nil. The changes in the reference cities are also basically nil… so there’s that.

          This *should* be presented as several line graphs of dollar wages as a fraction of the same-year price of a basket of goods (real wages), normalized to the initial value in 1979. Instead it’s logarithms of real wages not normalized to 1979 but raw.

          They also present the unemployment rate in tabular form. This seems to show a pronounced increase in unemployment vs the reference cities. The primary quantity of interest here would be something like an equilibration time scale (how many years does it take for unemployment to return to levels similar to the reference cities?). Do they estimate that anywhere?

          Later they present horrible tables of actual vs predicted log wages… but the predictions don’t come from any theory at all, they come from linear regressions on the reference cities… That’s ok, the theory there is “but for the influx, things would have happened like they did in the reference cities”, but it’s a pretty weak theory and we aren’t shown any goodness of fit of the linear predictions to the actuals in the reference cities or to the Miami data in the pre-influx period. The predictions are pretty terrible on the scale of variation they observe in the pre-period and continue to be terrible in the post period…

          What would good “structural” theory even look like here? I doubt the economists involved would have thought along these lines, though they might acknowledge it to be a reasonable idea having heard it:

          We model the labor pool as a stock, with several kinds of flows in and out of it: migration within the US to other states, migration within the US within Florida, migration from Cuba…

          We use a differential equation to describe offered wages varying through time with a forcing function described by demand and supply. Supply is modeled in terms of wage differences in the various locations (within Florida, outside Florida, etc.) and the “transaction cost” of moving from one place to another.

          We set up the system and infer various factors so that we reproduce the pre-period dynamics; we should probably have *monthly* data and at least several years before the boatlift.

          We integrate the system to the boatlift period, and then provide an influx of Cuban immigrants to the system over a short period….

          We then integrate the system for the next few years and see how well it reproduces the dynamics of re-equilibration.

          If you can show me someone who *even thought to attack this question in an ODE format* I’d be reasonably shocked. Has this happened in the Econ lit? Please let me know, I want to give out kudos.
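
          A minimal sketch of the kind of setup I mean, with made-up parameters and a toy demand curve (nothing here is calibrated to Miami; it’s just to show the structure):

          ```python
          import numpy as np
          from scipy.integrate import solve_ivp

          # Toy stock-and-flow model: L = low-skilled labor pool, w = wage.
          # Wages relax toward a market-clearing level that falls as the pool grows;
          # workers migrate in or out in response to the gap between the local wage
          # and the outside wage. Every parameter here is made up.
          L_eq, w_out = 1.0, 1.0   # baseline labor pool and outside wage
          k_w, k_m = 2.0, 0.3      # wage-adjustment and migration rates (per year)
          slope = -0.5             # toy inverse-demand slope (log-log)

          def influx(t):
              # Boatlift: a short pulse of arrivals around t = 0, totaling ~7% of L_eq.
              width = 0.1
              return 0.07 / (width * np.sqrt(np.pi)) * np.exp(-(t / width) ** 2)

          def rhs(t, y):
              L, w = y
              w_clear = w_out * (L / L_eq) ** slope    # demand side
              dL = influx(t) + k_m * (w - w_out) * L   # arrivals + migration response
              dw = k_w * (w_clear - w)                 # sticky wage adjustment
              return [dL, dw]

          sol = solve_ivp(rhs, t_span=(-3, 5), y0=[L_eq, w_out],
                          max_step=0.05, dense_output=True)
          t = np.linspace(-3, 5, 400)
          L, w = sol.sol(t)
          print(f"peak labor pool: {L.max():.3f}, lowest wage: {w.min():.3f}")
          # Calibrate the pre-period, then compare the post-influx trajectory (and
          # the implied equilibration time) to data, rather than eyeballing tables.
          ```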

        • lol. Yes.. this was the surprising result of the paper: the arrival of 125K low-skilled workers appeared to have no effect on low-skill wages in Miami relative to the reference cities. Basic theory would predict a decline in wages in response to a supply shock.

        • Sure, the fact that there was little change in wages as measured by the CPS is interesting, but there is little here that can identify why, and there are LOTS of plausible reasons we should be investigating, yet they aren’t investigated (at least not here).

          For example, a big one that Andrew harps on a lot: how good are these measurements? Does the CPS capture anything like the real wages of low-income households? These are households that often get much of their income from cash jobs… They are also people who are not necessarily that reachable by the CPS. Also, I don’t know much about it, but does the CPS follow individuals, or is it looking at cross sections? If it follows individuals for a couple years it might have a multi-year lag before it shows the effect… Or even if it’s cross sections, if there’s significant migration flux in low skilled labor, it isn’t measuring a consistent thing through time.

          They show unemployment data, but did they look at labor force participation rate? Perhaps what happened is that those who were employed remained employed with little change to their wages, but a whole bunch of people were not working, and slowly spread around the country to find jobs…

          I harp a lot on econ because I think it’s a really important area that could be doing a lot better. Most studies I see are like scratches at the surface, a lot of “in this specific instance, it looks like x happened” not “x, y, or z each predict different things to occur during unusual event q, the actual data suggests that z is a more plausible mechanism for how q worked, and it is consistent across multiple cases of q occurring in multiple places…”

        • For example, a big one that Andrew harps on a lot: how good are these measurements? Does the CPS capture anything like the real wages of low-income households? These are households that often get much of their income from cash jobs… They are also people who are not necessarily that reachable by the CPS.

          Yeah, I would worry more about problems with the data quality at this point. It reminds me of astronomy: everyone put up with the overly complex Ptolemaic model[1] until Tycho Brahe actually collected clean enough data to allow Kepler to develop his three laws and distinguish between what the two models predicted.

          If your data is messy junk needing, or just getting (for political reasons), all sorts of “adjustments” there is little hope for theoretical progress. I mean the paper says:

          Due to the unauthorized nature of the Boatlift, no exact count of the number of Mariel immigrants is available, and there is little precise information on the characteristics or final destinations of these immigrants.

          [1] https://upload.wikimedia.org/wikipedia/commons/thumb/0/0e/Cassini_apparent.jpg/1024px-Cassini_apparent.jpg

        • Daniel L. said: “it’s just that economics as a field has its head in the sand”

          At first glance, I read this as something less polite, but that might very well fit.

        • Thanks for these references. Here is the first Borjas paper:

          https://www.nber.org/papers/w21588.pdf

          I’ll note that this HAS quite a few excellent graphs (scroll to the end). I didn’t read it in detail but it seems much improved over the first one just based on the fact that it’s discussing distributions over unknowns and showing graphs and providing alternative controls etc.

        • This is what Scarface was about, right? How many of these people ended up directly or indirectly making money from the cocaine trade anyway? These “wages” could be total BS for a large percentage of them, or inflated just due to an influx of drug money.

        • Here’s a summary of the academic Economics discussion about this data: https://www.vox.com/the-big-idea/2017/6/23/15855342/immigrants-wages-trump-economics-mariel-boatlift-hispanic-cuban

          If I were going to summarize this discussion I’d summarize it as “we don’t even know what actually happened much less why”

          This is probably due to several factors, most of which are related to the difficulty of measurement and the Econ profession’s love of unbiased unregularized estimators (which are very noisy), plus the treatment of essentially continuous quantities as discrete (“low skilled, medium skilled, high skilled”, etc.). That last part is also related to the love of unbiased unregularized estimators: you’ve gotta bin things if you are going to calculate an average….

        • One obvious question we would like to directly consider is the effect of *language skill* on low-skilled wages in that area. It’s hard to hire a Black American English speaker to work in a crew of Cubans who do demolition work, if the rest of the crew speaks only Spanish… fungibility of “low skilled labor” is a very flawed assumption.

          When the stock of spanish speaking low skilled laborers increases it’s entirely plausible that the desirability of spanish speaking labor increases compared to english speakers, while at the same time, the availability of large crews could drive down costs for certain types of work, thereby increasing demand for that type of work… so that for example the population that’s employed shifts, and among spanish speakers the wages neither decline nor increase, while among English speakers employment and wages decrease.

          Examining multiple models that involve these kinds of dynamic, time-varying mechanistic explanations is what’s needed to make progress in Econ IMHO.

        • Looking it up more… I don’t see how that data can be valid when it ignores black market effects. Just look up stories about that time… you see Miami was in the middle of an economic boom due to drug money flowing in:

          it is said Miami’s real estate boom of the ‘70s and ‘80s was funded by cocaine smugglers looking to launder a lot of cash

          https://thebigbubblemiami.com/2018/10/04/the-big-problem-of-dirty-money-in-miami-real-estate/

          This is just junk data.

        • Cranks characteristically dismiss all evidence or arguments which contradict their own unconventional beliefs

          Yes, I always ask for evidence of NHST working or logical justifications for why it supposedly does. This is never forthcoming; all you get is one or another of the obvious logical fallacies. Yet the minds of people who “believe” in NHST just because their career is based on it never change. In fact they are producing their peculiar version of intellectual pollution faster today than when the critiques began in the 1960s; that is why I left academia. The cranks are running the show.

      • Whether structural modelling can provide useful insights outside the domains of the hard sciences (that do not have many of the issues inherent to social science problems), that remains to be seen.

        No, it really doesn’t. That modeling can provide insights in social science has been demonstrated multiple times. The problem in economics isn’t modeling; it’s the kinds of models economists spend their time on. There are, however, excellent models even in economics that provide excellent insights, like Schelling’s very basic agent-based model of segregation. I’m told he originally “computed” it by hand on a checkerboard?

        http://nifty.stanford.edu/2014/mccown-schelling-model-segregation/

        Above web page lets you compute the model for different parameter values (scroll down the page). If you set similar to 20%, red/blue to 20/80 empty to 50% size 50×50 delay 100ms, click reset to scramble, and then click start, you clearly wind up with “clumps” of red agents and blue agents. This is even though each agent is satisfied if it has only 20% of its own “kind” as neighbors. Like for example if 20% of people in the US are black, and black people want to live in regions where they are represented by about 20% of the population, and white people want to live in regions where they are about 20% of the population, they will wind up in clumpy concentrated neighborhoods of all black or all white.
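
        For anyone who’d rather run it locally than in the browser, here’s a minimal sketch of the same idea (the grid size and mix roughly match the settings above, but the update rule is simplified, so don’t expect it to reproduce the linked page exactly):

        ```python
        import numpy as np

        rng = np.random.default_rng(1)

        # 50x50 grid: 0 = empty, 1 = red, 2 = blue (about a 20/80 agent split, 50% empty).
        N = 50
        grid = rng.choice([0, 1, 2], size=(N, N), p=[0.50, 0.10, 0.40])
        SIMILAR = 0.20  # an agent is content if >= 20% of its occupied neighbors match it

        def frac_same(grid, i, j):
            """Share of the occupied neighbors of (i, j) that have its type."""
            block = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            nbrs = block[block != 0]
            same, others = np.count_nonzero(nbrs == grid[i, j]) - 1, len(nbrs) - 1
            return np.nan if others == 0 else same / others  # nan = no neighbors

        def mean_red_share(grid):
            return np.nanmean([frac_same(grid, i, j)
                               for i, j in zip(*np.nonzero(grid == 1))])

        before = mean_red_share(grid)
        for sweep in range(100):
            movers = [(i, j) for i, j in zip(*np.nonzero(grid))
                      if frac_same(grid, i, j) < SIMILAR]
            if not movers:
                break
            for i, j in movers:
                empties = np.argwhere(grid == 0)
                ei, ej = empties[rng.integers(len(empties))]
                grid[ei, ej], grid[i, j] = grid[i, j], 0  # hop to a random empty cell

        print(f"sweeps run: {sweep}, mean red-neighbor share for red agents: "
              f"{before:.2f} -> {mean_red_share(grid):.2f}")
        # Each red agent only asks for 20% same-type neighbors, but the clumps that
        # emerge typically give it a good deal more than that: mild individual
        # preferences produce strong aggregate clumping.
        ```

        The point isn’t this particular implementation; it’s that once the rule is written down you can measure how the aggregate clumping responds to the preference threshold, the group ratio, and the vacancy rate.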

        In economics, we can’t explain jack****; that is, we are dealing with models that have R²s of like 1-5%.

        I hear this a lot, but I think it just means that economists are on the whole terrible at theory. This probably isn’t helped by their training in classical stats rather than in something like Bayesian stats that lets you estimate more complex mechanistic models. But I also think it’s inherent in the kinds of things that are “respected” in economics publications.

        • There’s no such thing as “an empirical model”. There’s models, and there’s data.

          If you take this model seriously, you could fit this model to data in certain cities. You could modify this model to account for more factors and fit that. You could propose alternative models that primarily have to do with external factors rather than internal factors… and you could fit that to data. It’s not like we don’t have census tract level microdata from the ACS or other sources…

          The model doesn’t predict individual outcomes; it’s more like a stat-mech model that predicts a distribution of outcomes in equilibrium, as well as some statistics of the dynamics, and it should be fit to observed statistics of outcomes in real situations, preferably in high-growth regions of the country, like maybe Arizona or LA in the mid-20th century.

          But it’s just one of many plausible mechanistic models that should be investigated in many many situations…

          My point is, there are *any number* of interesting economic situations that are amenable to dynamical models and yet, I don’t see Economists building and fitting these dynamical models.

          Where is the model of the SF bay area housing market for the period between 2000 and 2020 ? There are a TON of very interesting dynamics going on there right now. It’s amenable to both a global PDE based modeling strategy (modeling spatial and price/quality spectrum distributions) as well as agent based models.

        • Ok, so we observe eq’m wages and employment. Please tell me how your model, which I hope involves labour DEMAND and labour SUPPLY (both unobserved quantities), will be fit with observable data?

          For the SF market, same deal man; where is this magical data? Are you going to fit your model using observed price and quantity? This is useless, unless your goal is just to predict future price/quantity without understanding the underlying S&D forces at all. Your problem, Daniel, is that you don’t seem to understand what endogeneity is. When the goal isn’t ML-type prediction, when your goal is actually to understand the effects of an intervention (e.g. 125K immigrants landing in a city), a model that was fit on equilibrium wages and employment (which are just a mishmash of demand and supply forces) isn’t useful.

        • > Please tell me how your model, which I hope involves labour DEMAND and labour SUPPLY (both unobserved quantities), will be fit with observable data?

          > For SF market, same deal man; where is this magical data?

          You’ve illustrated why I think statistics education in economics is holding the field back. If you have a predictive model, and it has some unknowns, either those unknowns affect the observables, or they don’t. If they affect the observables, then Bayesian fitting can constrain the values of the unknowns. If they don’t affect the observables then maybe the theory should be rethought, or we should observe some different things that they do affect?
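
          A toy version of what I mean, with everything made up: a simultaneous supply-and-demand system where we only ever observe the equilibrium price and quantity, yet a simple grid posterior still constrains the unobserved demand slope, because that unknown affects the joint distribution of the observables.

          ```python
          import numpy as np

          rng = np.random.default_rng(2)

          # Toy simultaneous system (all numbers invented for illustration):
          #   Demand: q = a_d - b * p + u      Supply: q = a_s + c * p + v
          # Only the equilibrium (p, q) pairs are observed, never the curves.
          b_true, c_true = 1.5, 0.8
          a_d, a_s = 10.0, 2.0
          s_u, s_v = 0.5, 1.0
          T = 200

          u = rng.normal(0, s_u, T)
          v = rng.normal(0, s_v, T)
          p = (a_d - a_s + u - v) / (b_true + c_true)   # market clearing
          q = a_d - b_true * p + u

          # Grid posterior for the demand slope b, other parameters treated as
          # known. Implied shocks: u = q - a_d + b*p, v = q - a_s - c*p, and the
          # change of variables from (u, v) to (p, q) contributes |b + c| per obs.
          b_grid = np.linspace(0.2, 3.0, 300)
          logpost = np.array([
              -0.5 * np.sum(((q - a_d + b * p) / s_u) ** 2)
              - 0.5 * np.sum(((q - a_s - c_true * p) / s_v) ** 2)
              + T * np.log(b + c_true)              # Jacobian; flat prior on grid
              for b in b_grid
          ])
          post = np.exp(logpost - logpost.max())
          post /= post.sum()

          print(f"true b = {b_true}, posterior mean = {np.sum(b_grid * post):.2f}")
          # The unobserved slope shapes the joint distribution of the observed
          # equilibrium data, so the data constrain it, simultaneity and all.
          ```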

        • Also, there are ways in which economic theory holds back the field… you say:

          Ok, so we observe eq’m wages and employment

          in which you’ve already assumed that wages, employment, demand, etc. are always in equilibrium. My own opinion is that they are *never* in equilibrium, because there are constant perturbations to the system. Sometimes they may get near equilibrium, but other times, like when hundreds of thousands of people show up in one place, it could take years before they get to an equilibrium again.

          The process of dropping a bunch of people into a small space is a very non-equilibrium process, just like putting a golf ball is non-equilibrium until it finally comes to rest.

        • Have you heard of the Lucas Critique, Daniel? Those parameters from that predictive model will be useless when considering interventions.

        • Matt,

          The Lucas Critique (https://en.wikipedia.org/wiki/Lucas_critique) doesn’t say that parameters from predictive models cannot aid in discerning the effect of an intervention… mostly, at least from a cursory review of the Wikipedia page, it says that it is hard / not really possible to discern the effect of an economic policy from an observed state of nature, because behavior changes, sometimes as a result of the interventions / economic policy shifts themselves.

          That’s a pretty basic critique of all models… also, it has little to do with what Daniel is talking about. Moreover, the critique goes on to suggest that the modeling proposed by Daniel is the way to go:

          “The Lucas critique suggests that if we want to predict the effect of a policy experiment, we should model the “deep parameters” (relating to preferences, technology, and resource constraints) that are assumed to govern individual behavior: so-called “microfoundations.” If these models can account for observed empirical regularities, we can then predict what individuals will do, taking into account the change in policy, and then aggregate the individual decisions to calculate the macroeconomic effects of the policy change.”

        • Thanks Allan, and I agree with you. If anything the take-away from the Lucas critique is that we need to model the actual causal dynamic processes if we want to understand what any given chunk of data means. It means precisely the opposite of what matt thinks.

        • The Lucas Critique says that predictions from models that are not structural, in the sense of modelling the underlying decision-making process of individuals (in the case of macroeconomic models, Lucas’ domain), will not be useful for predicting the effects of policy interventions. Examples of these models are those of the time-series variety that use a bunch of aggregate indicators as predictors. So I don’t really see how I missed the point; one of Daniel’s proposals was a purely statistical model (PDE…) which seemed to involve no economics; I’m not sure how this would be useful in any way for actually understanding the housing market. As for taking a structural approach (Bayesian or not, it really doesn’t matter), this has been done in economics by a great number of people without too much success! As a result of the Lucas Critique, modelling “microfoundations” in macro models became popular, but I’m not sure how successful these have been. There are many examples of reduced-form work in economics providing actionable policy insights; there are almost none of structural work doing the same. To be clear, “structural work” involves modelling the economics (e.g. the decision-making process by individuals, firms, or whatever it may be). This has not been that successful, because there are about 5 million models that provide the same fit (which is poor, by the way) and little means for choosing between them.

        • Matt, I apologize if I wasn’t clear, but the PDE model, for example, wasn’t intended as a “purely statistical” model at all. The point of a PDE model of housing is that it aggregates underlying economic decision making. At each point in space there would be a distribution of housing prices and qualities. At each point in time housing would become unoccupied by people wanting to move out, or dying, and people wanting to move in would bid on the unoccupied houses. It would have to be an integro-differential equation, since rather than a single statistic at each point it’s got a distribution of several statistics at each point and a nonlocal transport process…

          Your critique basically mirrors mine: structural models in economics fall far short of the mark. My point just goes further in saying some of the reasons why: for example, that dynamics are ignored, that conservation laws are ignored (stocks and flows of people, money, housing, etc.), and that the techniques used for fitting leave the field unable to deal with the level of complexity required of the models. It’s no wonder success has been rather poor.

        • Daniel said,
          “there are *any number* of interesting economic situations that are amenable to dynamical models and yet, I don’t see Economists building and fitting these dynamical models.

          Where is the model of the SF bay area housing market for the period between 2000 and 2020 ? There are a TON of very interesting dynamics going on there right now. It’s amenable to both a global PDE based modeling strategy (modeling spatial and price/quality spectrum distributions) as well as agent based models.”

          Maybe a model of the Austin housing market beginning about now might be interesting. (See http://www.zonedoutfilm.com/watch-the-film.html)

        • Above web page lets you compute the model for different parameter values (scroll down the page). If you set similar to 20%, red/blue to 20/80 empty to 50% size 50×50 delay 100ms, click reset to scramble, and then click start, you clearly wind up with “clumps” of red agents and blue agents. This is even though each agent is satisfied if it has only 20% of its own “kind” as neighbors. Like for example if 20% of people in the US are black, and black people want to live in regions where they are represented by about 20% of the population, and white people want to live in regions where they are about 20% of the population, they will wind up in clumpy concentrated neighborhoods of all black or all white.

          Don’t see what insight this model provides. It seems close to tautological. If people prefer to live with their own kind, then people will live with their own kind.

          Perhaps the near-tautology is obscured because the last part of the quote above misstates the model. In your example, the model actually assumes that people want to live in regions where they are represented by *20% to 100%* of the population, not by “about 20% of the population”. The misstatement makes it sound like people prefer 20% to percentages higher than 20%, but that is not what the model assumes. The model assumes people are equally happy with anything from 20% to 100%. Therefore, it is not at all surprising that people end up with percentages higher than 20%.

          To implement what you are thinking of, you would have to set up the model to incorporate a dislike of segregation, as in the sketch below. This could be done by assuming people move if the share of their own type around them is more than 10 percentage points above or below their target value. (You can’t make the target for both types of people 20% as your post suggests, because it will blow up, I think.)
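
          Roughly, the satisfaction rule changes from a one-sided threshold into a band around a target, something like this (the numbers and names are just illustrative):

          ```python
          def content_banded(frac_same, target=0.30, band=0.10):
              """Agent stays put only if its share of same-type neighbors is
              within +/- band of its target share, i.e. it dislikes being
              heavily clustered as well as being heavily outnumbered."""
              return abs(frac_same - target) <= band

          def content_schelling(frac_same, minimum=0.20):
              """The original rule, for contrast: any share at or above the
              minimum is acceptable, so 20% to 100% are all equally fine."""
              return frac_same >= minimum
          ```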

        • A wide variety of modifications to this model have been investigated. The point of this kind of model in general is that emergent properties such as segregation (or whatever; there are agent-based models of many processes) often arise in non-intuitive ways. There can easily be nonlinear responses to changes in parameters, and unintuitive results can occur; for example, mechanistic agent-based traffic models sometimes show that increasing the capacity of roadways in certain ways makes congestion and total delay WORSE.

          For example, if you want to fight segregation in schools by offering, say, policy changes that make it easier for minorities to transfer between schools and/or monetary incentives, etc., you may find that there’s a level of segregation you really can’t get below, because even tiny preference levels like 5% in this model result in clumping that’s well above your target level… or whatever.

          I’m not trying to harp on this particular model; it’s just one I could easily find a link to as an example. But if you think the point of this model is that it shows “preferences for similarity result in clumping,” then you’ve missed the point. It’s a *quantitative model* that makes quantitative predictions about how certain kinds of quantitative measures of preferences influence the statistical properties of aggregate outcomes dynamically through time.
