Benford’s Law suggests lots of financial fraud

This post is by Phil.

I love this post by Jialan Wang. Wang “downloaded quarterly accounting data for all firms in Compustat, the most widely-used dataset in corporate finance that contains data on over 20,000 firms from SEC filings” and looked at the statistical distribution of leading digits in various pieces of financial information. As expected, the distribution is very close to what is predicted by Benford’s Law.

Very close, but not identical. But does that mean anything? Benford’s “Law” isn’t really a law; it’s more of a rule or principle: it’s certainly possible for the distribution of leading digits in financial data — even a massive corpus of it — to deviate from the rule without this indicating massive fraud or error. But, aha, Wang also looks at how the deviation from Benford’s Law has changed over time, and breaks it down by industry, and this is where things get really interesting and suggestive. I really can’t summarize any better than Wang did, so click on the first link in this post and go read it. But come back here to comment!
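
If you want to poke at this yourself, here is a minimal sketch (mine, not Wang’s actual code) of the kind of comparison involved: tabulate the leading digits of a batch of reported figures and measure how far their frequencies fall from the Benford prediction P(d) = log10(1 + 1/d). The function names and the exact deviation metric are my own assumptions.

```python
# Minimal sketch (not Wang's code) of a Benford's Law comparison.
import math
from collections import Counter

def benford_probs():
    """Expected leading-digit probabilities under Benford's Law: log10(1 + 1/d)."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(x):
    """First significant digit of a nonzero number."""
    return int(f"{abs(x):e}"[0])  # scientific notation, e.g. '3.14e+05' -> 3

def sum_squared_deviation(values):
    """Sum of squared deviations of observed leading-digit frequencies from
    the Benford prediction (an assumed metric; Wang's may differ in detail)."""
    counts = Counter(leading_digit(v) for v in values if v != 0)
    n = sum(counts.values())
    expected = benford_probs()
    return sum((counts.get(d, 0) / n - expected[d]) ** 2 for d in range(1, 10))
```

Run over a large corpus of genuine figures, this statistic should land very close to zero; how close is “close” is taken up in the comments below.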

14 thoughts on “Benford’s Law suggests lots of financial fraud”

  1. Wang’s article was really great. Has there been any work done on how observing Benford’s law through this lens can be misleading?

  2. Doesn’t the law derive from an assumption of exponential growth? What if businesses are experiencing alternating up and down cycles? What if firms are experiencing infrequent large shocks in either direction?

    • No, it doesn’t rely on exponential growth.

      It only depends on an assumption of scale-invariance in the things being measured (which is a consequence of an exponential growth model, but other models also allow scale-invariance).

      Simon Newcomb’s original observation was that astronomers at the Naval Observatory in Washington were preferentially using the pages at the beginning of the log tables: those pages were more worn than the pages at the end. (Newcomb wrote a paper about this long before Benford wrote his.)

      In those days, if you had to multiply numbers, it was more efficient to convert the numbers to logs, add, and compute the antilog. No calculating machines then.

      People were looking up the logs of numbers that started with ‘1’ more often than numbers that started with ‘9’, and the frequency of look-ups decreased as the first digit increased.

      This phenomenon is really a consequence of observing data that are scale-invariant, that is, data whose distribution is unchanged when every value is multiplied by a constant.

      Scale-invariance is a consequence of exponential growth, but exponential growth is not the only way to get it.

      For example, suppose you are observing the land area of countries. Some countries are large, some are small. Nothing about exponential growth here!

      But it shouldn’t matter whether you measure the land area in square kilometers, square miles, or square furlongs. The distribution should look the same, since the statistics of the first digit don’t depend on the unit you use.

      Similarly, the astronomers using the log tables at the U.S. Naval Observatory were working with numbers in disparate units whose statistics probably didn’t depend on the units being used [think: parsecs? kilometers/second? arcseconds of parallax? millimeters on a photographic plate?]. They were just trying to multiply pairs of numbers, drawn from numerous sources.

      Not all data fit this scale-invariant criterion. But those that do will probably conform (approximately) to Benford’s (and I, as an astronomer, would prefer Newcomb’s) law.

      See http://en.wikipedia.org/wiki/Benford%27s_law#Scale_invariance
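
      To make the scale-invariance point concrete, here is a small simulation (my own sketch, not part of the original comment): a sample that is uniform in log10 over a whole number of decades is scale-invariant in the relevant sense, and multiplying every value by a constant, a unit conversion say, leaves the leading-digit distribution unchanged.

      ```python
      # Sketch: scale-invariant data follow Benford's Law, and rescaling
      # (e.g., changing units) doesn't change the leading-digit distribution.
      import math
      import random
      from collections import Counter

      random.seed(0)

      def first_digit(x):
          return int(f"{x:e}"[0])  # scientific notation puts the digit first

      # Uniform in log10 over six full decades: the scale-invariant case.
      sample = [10 ** random.uniform(0, 6) for _ in range(100_000)]

      for scale in (1.0, 2.54, 1.609):  # no rescaling, inches->cm, miles->km
          counts = Counter(first_digit(scale * x) for x in sample)
          print(scale, [round(counts[d] / len(sample), 3) for d in range(1, 10)])

      # Benford's prediction for comparison:
      print("Benford:", [round(math.log10(1 + 1 / d), 3) for d in range(1, 10)])
      ```

      All three rows come out essentially identical to the Benford row, which is the scale-invariance argument in miniature.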

  3. Phil:

    I agree that the linked post is cool. The next thing I’d like to see is . . . what are the discrepancies? What would we expect to see if people are cheating? Something closer to a uniform distribution? Also, the sum-of-squared errors measure is probably ok but it will be sensitive to sample size (if sample size is small). It would be good to see the sampling variation.

    • Andrew,
      Given the somewhat cursory description of the dataset, I think the sample sizes are probably huge.

      I agree, I’d like to know what the discrepancy actually looks like — I would guess it’s closer to uniform, yes, but maybe all of the discrepancy is in the 9s or the 5s or something. I’m not sure what I’d learn from that, but I’d still like to see it.
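
      On the sampling-variation question, here is a quick Monte Carlo sketch (my illustration, not from the thread; the exact form of Wang’s statistic is an assumption): draw leading digits from Benford’s law itself at various sample sizes and see how large the sum-of-squared-errors statistic gets from noise alone.

      ```python
      # Monte Carlo: size of the sum-of-squared-errors (SSE) statistic under
      # the Benford null, as a function of sample size. (Assumed statistic.)
      import math
      import random
      from collections import Counter

      random.seed(1)
      digits = list(range(1, 10))
      benford = [math.log10(1 + 1 / d) for d in digits]

      def null_sse(n, reps=1000):
          """Simulated SSE values for n leading digits drawn from Benford's Law."""
          out = []
          for _ in range(reps):
              counts = Counter(random.choices(digits, weights=benford, k=n))
              out.append(sum((counts.get(d, 0) / n - p) ** 2
                             for d, p in zip(digits, benford)))
          return sorted(out)

      for n in (100, 1_000, 100_000):
          s = null_sse(n)
          print(f"n={n:>7}: median {s[len(s) // 2]:.1e}, 95th pct {s[int(0.95 * len(s))]:.1e}")
      ```

      The expected SSE under the null is roughly (1 − Σp_d²)/n ≈ 0.83/n, so noise alone produces values near .008 only when n is around a hundred; at Compustat scale the noise floor is down in the 1e-5 range, consistent with the guess above that the sample sizes are big enough for the observed deviations to be real.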

  4. It’s a great post and getting a lot of well-deserved play, but there are other possibilities beyond fraud. For example, if certain classes of numbers that now need to be reported (but did not previously need to be reported; there have been many reporting changes since 1960) were not Benford-compliant, this could account for the sum of squared deviations rising to .008.

    I suggested this in a comment at Marginal Revolution, and got a reply indicating that in many cases assets now have to be valued even when there is no active market to determine their current value (unlike the value of, say, 1000 shares of PepsiCo stock, which can be precisely determined via a clear operational definition). One would not expect such numbers to be as Benford-compliant as numbers determined by market forces, even if they were non-fraudulent.

    Wang’s post is carefully worded. I think Wang is appropriately using the term “Decreasing Reliability of Accounting Data”.

    • zbicyclist, I agree with you — that’s why I wrote “it’s certainly possible for the distribution of leading digits in financial data — even a massive corpus of it — to deviate from the rule without this indicating massive fraud or error,” and why I characterize Wang’s results as “suggestive” rather than, say “damning” or “conclusive.”

      That said, I think they do indeed _suggest_ that something fishy is going on.

  5. I like the line of thought, but some of the conclusions seem overwrought to me. For example, she said “So according to Benford’s law, accounting statements are getting less and less representative of what’s really going on inside of companies.” Does she know what’s really going on inside companies?

    I also don’t like the sum of squared errors measure, especially when the error is a difference of probabilities: everything looks really tiny, and it’s hard to judge whether those ups and downs are meaningful.

    On sample size, I wonder if the sample size is increasing over time. Are there more companies in 2000 than in 1960?

    • She doesn’t have to know what’s going on inside companies, because the assertion you quoted is a conditional one. That is, we can rephrase it as: if representative accounting data follow Benford’s law, then accounting statements are getting less representative over time. This is another instance of the careful wording noted by zbicyclist.

      • To me, it’s not careful. It is only conditional if one does not believe in Benford’s law. A sentence that says “according to Newton’s law, the force increases as mass increases” is not conditional either. Also, Benford’s law does not involve “what’s really going on inside companies”. And if I may add, “accounting statements” is also a generalization of the set of metrics she analyzed.
        I don’t want to make this into a very big deal, but the analysis deserves a more precise statement; otherwise, it reads like Freakonomics.

  6. For those still reading this comment thread: just today, Wang put up a note at the top of the post saying she’s found an error in her analysis and will post a revision soon. As zbicyclist noted, her post seemed carefully phrased not to sensationalise, and her note goes so far as to say to ignore the results as they now stand (but she’s leaving the post up for posterity). It will be interesting to see the follow-up.
