“Troubling Trends in Machine Learning Scholarship”

Gaurav Sood writes:

You had expressed slight frustration with some ML/CS papers that read more like advertisements than anything else. The attached paper by Zachary Lipton and Jacob Steinhardt flags four reasonable concerns in modern ML papers:

Recent progress in machine learning comes despite frequent departures from these ideals. In this paper, we focus on the following four patterns that appear to us to be trending in ML scholarship:

1. Failure to distinguish between explanation and speculation.

2. Failure to identify the sources of empirical gains, e.g. emphasizing unnecessary modifications to neural architectures when gains actually stem from hyper-parameter tuning.

3. Mathiness: the use of mathematics that obfuscates or impresses rather than clarifies, e.g. by confusing technical and non-technical concepts.

4. Misuse of language, e.g. by choosing terms of art with colloquial connotations or by overloading established technical terms.

I don’t know that machine learning is worse than any other academic field, but I have noticed a sort of arms race by which it becomes almost necessary to use certain buzzwords, just because other papers are using those buzzwords.

One of my pet peeves is the word “provably.” Everyone wants to say that their method is “provably” wonderful in some way. OK, fine, but “proof” is just another word for “assumption.” Indeed, all Bayesian inferences are provably optimal, if you average over the assumed prior and data models.
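
Here’s a minimal sketch of what I mean (a toy normal-normal example of my own, not anything from the Lipton and Steinhardt paper): if the data really do come from the assumed prior and data model, the posterior mean beats the maximum likelihood estimate in average squared error; if they don’t, the “provable optimality” tells you nothing.

    # Toy illustration (my own assumptions): theta ~ Normal(0, 1),
    # y_1, ..., y_n | theta ~ Normal(theta, sigma^2).
    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma, n_sims = 5, 2.0, 100_000

    theta = rng.normal(0.0, 1.0, size=n_sims)                # draws from the assumed prior
    y = rng.normal(theta[:, None], sigma, size=(n_sims, n))  # data given each theta

    ybar = y.mean(axis=1)
    mle = ybar                                               # maximum-likelihood estimate
    prec_data, prec_prior = n / sigma**2, 1.0
    post_mean = prec_data * ybar / (prec_data + prec_prior)  # posterior mean (prior mean 0)

    print("average squared error, MLE:           ", np.mean((mle - theta) ** 2))
    print("average squared error, posterior mean:", np.mean((post_mean - theta) ** 2))
    # The posterior mean wins, but only because we averaged over the very prior
    # and data model we assumed; change the generating process and the guarantee is gone.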

I’m not saying that proofs are bad—a proof can give insight, especially into the conditions required for a desired assertion to be true. Sometimes a proof is helpful; other times there’s no real reason for it. My problem is that, once there’s the expectation that a CS paper will have “provably” in its abstract, that can create a push for everyone to have that “provably” there, just to compete.

Again, this is not a problem unique to CS. The problem is, once there’s a culture of hype, it can sustain itself through the competitive review and publicity process.

5 thoughts on “Troubling Trends in Machine Learning Scholarship”

  1. What do you think about ML, and deep learning in particular, accepting experiments as valid papers? Akin to physics experiments (“see this cool new find”), but without actually trying to theoretically justify or explain it.

  2. There certainly are mathematicians who write papers with theorems that are somewhere in the nexus of weak/essentially known/trivial. Having said that, mathematical proofs of theorems have to be an important part of any theoretical paper. To my taste anything else is merely an opinion. I think that if some of the statements of the form “provably” were forced to be theorems with hypotheses and a conclusion, one would have a much clearer picture of the import of a given development.

    • Ag:

      You write, “mathematical proofs of theorems have to be an important part of any theoretical paper. To my taste anything else is merely an opinion.”

      But there is something that is different from theory and from opinion. There’s data; see, for example, this linked paper. That said, data + theory is better than data alone. For example, in that linked paper, the data were valuable, but I think it would be hard to understand how to interpret the results in the absence of theory.

  3. Any proof is only as valid as its assumptions, and any theory is only as valid as its predictions. But a good model must be both internally consistent (so that the same or similar inputs produce the same or similar outputs) and externally consistent (so that outputs match observations). A good model generally relies upon proofs (if only elemental ones) and is structured according to some theory, so you really do want all of these approaches to flourish. There’s nothing wrong with people focusing on just one approach, so long as they don’t try to substitute their proofs for a model, which I think is part of your complaint. Indeed, if your observation is correct, there are huge opportunities out there, both in terms of discovery and career advancement, for those who can fill the theoretical gaps and construct useful, falsifiable models in machine learning. Hopefully, basic competitive forces will drive researchers to go that route sooner rather than later.
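
A minimal sketch of the two checks that last comment describes (a toy example of my own; the model, data, and tolerances are placeholders, not anything from the post): internal consistency as “similar inputs give similar outputs,” external consistency as “predictions match held-out observations.”

    import numpy as np

    def internally_consistent(model, x, eps=1e-3, tol=1e-2):
        """Small perturbations of the inputs should produce small changes in the outputs."""
        noise = np.random.default_rng(1).normal(size=x.shape)
        return np.max(np.abs(model(x + eps * noise) - model(x))) < tol

    def externally_consistent(model, x_held_out, y_held_out, tol=0.1):
        """Predictions should roughly match held-out observations."""
        rmse = np.sqrt(np.mean((model(x_held_out) - y_held_out) ** 2))
        return rmse < tol

    # Placeholder linear model and noisy observations, purely to make the sketch run.
    model = lambda x: 2.0 * x + 1.0
    x_held_out = np.linspace(0.0, 1.0, 50)
    y_held_out = model(x_held_out) + np.random.default_rng(2).normal(0.0, 0.05, size=50)

    print("internally consistent:", internally_consistent(model, x_held_out))
    print("externally consistent:", externally_consistent(model, x_held_out, y_held_out))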
