I received the following email:
These compressed sensing people link to Shannon’s advice. It’s refreshing when leaders of a field state that their stuff may not be a panacea.
I replied: Scarily enough, I don’t know anything about this research area at all!
My correspondent followed up:
Meh. They proved that L1 approximates L0 when the design matrix is basically full rank. Now all sparsity stuff is sometimes called ‘compressed sensing’. Most of it seems to be linear interpolation, rebranded.
I wrote back: But rebranding/reframing can be useful! Often reframing is a step in the direction of improvement, of better understanding one’s assumptions and goals.
> “Most of it seems to be linear interpolation, rebranded.”
Um, that’s not accurate at all. Perhaps your correspondent means to say “L1 regularization/LASSO, rebranded”, which would have some truth to it (although a lot of the field could not be described as such).
Yes, many of the modern “compressed sensing” results could have been figured out in the 90s. But they weren’t. The “compressed sensing” term came with substantial new ideas — both technical and motivational — that got a lot more people excited about the area and figuring stuff out.
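For readers who have never seen the basic trick, here is a self-contained toy sketch (sizes, parameters, and the solver choice are all illustrative, not from any particular paper): a k-sparse vector is recovered from far fewer random measurements than its length by minimizing an L1-penalized least-squares objective, solved with plain iterative soft thresholding.

```python
import numpy as np

# Toy compressed-sensing sketch (all sizes and parameters are illustrative):
# recover a k-sparse vector x from m < n random linear measurements
# y = A @ x by L1-penalized least squares, solved with plain ISTA
# (iterative soft thresholding).

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                        # ambient dim, measurements, sparsity

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random Gaussian design, ~unit-norm columns
y = A @ x_true

def ista(A, y, lam=0.01, n_iter=5000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L       # gradient step on the quadratic part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.3f}")
```

Note that m = 80 measurements suffice for a length-200 signal here only because the signal is 5-sparse; that gap between m and n is the whole point of the "compressed" in the name.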
Might be helpful to relate it to variance reduction methods in Monte Carlo and Gaussian quadrature (systematic rather than random approximation) – which I believe is what they _grew_ out of.
I think you are right, sometimes the rebranding helps going forward.
That being said, the generic claim that compressive sensing is just the same thing with a different name has often been made. You might enjoy reading this, as many researchers in the area have been formulating an answer to that specific question.
But ultimately, this narrative, while important, doesn’t address exactly the point you are making, i.e. that rebranding helps move the issue forward. In particular, one of the most impressive outcomes of these follow-on studies, thanks to “rebranding”, is the discovery of sharp phase transitions in the computability of solutions, as found by Candes and then by Donoho and Tanner
It is clear that all these insights are not, at first, terribly important when one is given a design matrix and attendant data (we could have a longer discussion on that). But for the rest of us designing sensors/hardware, devising the means of getting meaningful data, and making sense of those data, those steps simply did not exist in such a structured way before and are becoming more invaluable by the day.
Yes, exactly. My post was not meant as a criticism of research in remote sensing. Reframing is important, in fact it’s a lot of what I do in my own theoretical and methodological research. There’s a big difference between someone solving a problem once, and someone putting it in a general framework that can be applied in different ways (as pointed out in the final sentence of Igor’s comment).
It was not received as a criticism, far from it. A few people like your reader have made that point and I think the community has responded very well to that characterization.
One of the most recent uses of the sharp phase transitions came from a study by Carson Chow, Steve Hsu et al (http://nuit-blanche.blogspot.com/2013/10/application-of-compressed-sensing-to.html ) connecting phenotypes and GWAS that aims to find out how much sampling is required given their model. One could argue about this issue of rebranding over and over, but it has certainly brought a new outlook on a problem that looked desperately empirical.
Andrew: How do you think rebranding helps?
That wasn’t a rhetorical question — I genuinely don’t understand. I usually find it super confusing as a beginner seeing the same thing referred to all sorts of different ways in different literature.
For example, I was massively confused learning regression due to all the different ways people talk about it. For instance, MacKay’s book casts logistic regression as “learning with a single neuron” coupled with “sigmoid activation.” Hastie et al.’s book uses “ridge regression” for what others call “L2 regularization” and what Bayesians (with a different philosophical spin but the same exact computations) call posterior modes with Gaussian priors.
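To make the ridge/L2/Gaussian-prior equivalence concrete, here is a hypothetical numerical check (the data, variable names, and numbers are my own, not from any of the books mentioned): the closed-form ridge estimate and the mode of the posterior under a zero-mean Gaussian prior come out as the same vector.

```python
import numpy as np

# Hypothetical numerical check (my own toy numbers): the "ridge regression"
# estimate, the "L2-regularized" least-squares minimizer, and the posterior
# mode under a zero-mean Gaussian prior are the same computation.

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=50)

lam = 2.0  # penalty weight; Bayesian reading: ratio of noise to prior variance

# Ridge / L2 penalty, closed form: (X'X + lam*I)^{-1} X'y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# Posterior mode with beta ~ N(0, (sigma^2/lam) I): maximizing the log
# posterior is the same as minimizing ||y - X b||^2 + lam*||b||^2, so
# gradient descent on the penalized loss must land on the same vector.
b = np.zeros(3)
for _ in range(20000):
    grad = -X.T @ (y - X @ b) + lam * b
    b -= 1e-3 * grad

print(np.allclose(beta_ridge, b, atol=1e-6))  # → True: identical answers
```

Same exact computations, three different names; only the story told around `lam` changes.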
Rebranding does help drive up publications and creates false hype about having somehow invented a new technique.
The disadvantage of rebranding is, as you note, it can be difficult to translate back and forth when reading different descriptions of a method. Recall my struggles writing up VB and EP, translating from the machine learning perspective to the statistics perspective. There’s also an annoying way that textbook writers (and researchers more generally) often don’t give enough credit to researchers in other fields who’ve developed the same methods using different notation.
The first advantage of rebranding is that, most obviously, it can bring a method to a new audience. Anything Bayesian might annoy some people, but if you call it regularization maybe that will make some people more accepting.
The second advantage of rebranding is that, as Igor wrote, if a method is adapted to a new problem—and this adaptation might take some rebranding so as to express the methods in the language of that application area—this will get new researchers involved and can lead to improvements in the method.
The third advantage is that, if a method is conceptualized in a different way, it can be changed and improved. Consider two examples that are central to BDA: (a) the move from “empirical Bayes” to “hierarchical models,” and (b) the move from graphics as exploratory data analysis (unconnected and even opposed to modeling) to graphics as model checking. In each case, I think the new perspective allowed the existing ideas to be developed in useful and unexpected ways.
Thanks for the clarification.
The second and third motivations seem very similar to me and seem more like real innovation than “rebranding”. I have very negative connotations for “branding,” thinking of it as selling soap or TVs with different labels. I would’ve called this “translation.”
The first advantage makes sense to me if you have to convince those with strong philosophical scruples. I used to translate theories in linguistics all the time, showing that two different approaches were mathematically equivalent. Linguists struck me as strong believers in representations, even if two representations were isomorphic (e.g., empty categories vs. metarules in GPSG or HPSG, c-command vs. feature percolation in GB vs. LFG, or complete atomic boolean algebras vs. power sets in the semantics of plurals). Often the linguists had the opposite reaction and stopped using a technique when they saw it was philosophically equivalent to something else!
Umberto Eco coined the term transmutation for when what is being represented is somewhat changed (conceptualized in a different way), and he agreed in Oxford that this likely induces creativity (i.e. less wrongness more likely discovered).
I’ll believe anything about play with words and ideas by Umberto Eco, but “transmutation” has clear Latin roots and is in no sense his coinage. In English, senses in alchemy and law go back at least to the 15th century.
If the only side effect of the rebranding were solely “driving up publications” (which is also to say that publications do not have much value), then yes, any sensible person would agree with the point made by the original author. However, within the context of compressed sensing, that is really not what happened. Initially the LASSO folks almost always took the design matrix as a given. In part, this was because the data collection effort was in effect removed from the analysis stage. And for those people, indeed, calling it compressed sensing is akin to making it sexier than it really is.

However, the LASSO was used in other problems where one could **have access** to the design matrix. The reason it was not called compressed sensing before 2004 is that while we knew specific fields could use the LASSO to obtain better reconstructions (results), it did not help. The landscape changed in 2004 with the papers of Candes, Romberg, Tao and Donoho, which showed the admissibility conditions for the design matrices. In turn that spurred a lot of interest, because now people had guidance as to where to look for more efficient sensing mechanisms (i.e. better design matrices).

The phase transitions I mentioned earlier were also found in 2004 but were given a more universal touch in 2008, and they showed in some exquisite manner what to expect in terms of computability, i.e. the space in which those design matrices ought to work well. Aside from the nice side effect that questions like P vs NP are no longer left to the mathematicians alone, this has a direct bearing on sensor design, a hardware field far removed from mathy “concentration of measure” type results, which would probably not have been impacted that early if the wording LASSO had been kept.
The other problem with LASSO as a name is that quite a few solvers within CS no longer solve the L1 problem; they go directly for the kill: L0 (see a few of those listed here: https://sites.google.com/site/igorcarron2/cs#reconstruction )
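As a concrete illustration of the greedy, L0-flavored approach (a sketch under my own toy setup, not any particular solver from that list): orthogonal matching pursuit picks the column most correlated with the residual, re-fits by least squares on the growing support, and repeats, attacking sparsity directly rather than through the L1 relaxation.

```python
import numpy as np

# Toy greedy "L0-style" recovery via orthogonal matching pursuit (OMP):
# pick the column most correlated with the residual, re-fit by least
# squares on the growing support, repeat k times. The setup is my own
# illustration, not any specific solver from the list above.

def omp(A, y, k):
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # project y off the support
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
n, m, k = 100, 60, 3
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], size=k) * (1 + rng.random(k))

A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x_true, k)                        # noiseless measurements

err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {err:.2e}")
```

In the noiseless regime, if the greedy steps pick the true support, the final least-squares fit is exact; the result is k-sparse by construction, which the convex L1 route only achieves indirectly.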
This issue of fields mixing was very well featured in what I call a Donoho-Tao moment (excerpted from this 2008 IPAM newsletter http://www.math.ucla.edu/sites/default/files/newsletter/nl2008.pdf)
“…. It’s David Donoho reportedly exclaiming [to] a panel of NSF folks “You’ve got Terry Tao talking to geoscientists, what do you want?” ….”
If rebranding means we have more of these “moments”, I am all for them.
Thanks for the background. I wasn’t aware of the details. Interesting indeed.
Though, in light of this, I’d rather not call it rebranding. Perhaps this is fortuitous rediscovery? I don’t know. To me the term “rebranding” has a cheap, pejorative ring to it but perhaps that’s just me.
In fact, compressed sensing is pretty much a reality show of The Emperor’s New Clothes – the beauty is invisible. Everybody is told and amazed by and talks about its beauty but nobody has ever seen.
Everybody shouts “what a pattern, what the colors”, but nobody can tell what IS the pattern and what are the colors.
Where on Earth do you get this idea?
The Earth is the Truth.
Everybody loudly talks about the emperor’s new clothes and exclaims “Look! What a pattern and what colors!”
But someone cannot see the beauty of the clothes. He can only see the emperor’s ugly body. Then he asks, “Excuse me, what is the pattern and what are the colors, please?” Nobody answers him, but everybody throws stones at him.
This whole story indeed happened here:
Read all comments there, really interesting.
Yes, by all means, do read all the comments.
Watched the war. It reminds a lot. Thanks for the link.
Pingback: dismissing research communities is counterproductive | An Ergodic Walk
Not certainly. Sometimes it may be productive. It depends on whether the research is in the right direction.
Indeed, many claims of compressed sensing are misleading, even though they are loudly and frequently said (Xiteng Liu). Almost 10 years still has made no real, applicable fruits. It needs a serious check.
Scott (if that is your real name),
Please pray tell what claims are made in CS that are not substantiated in actuality ?
I just love the way some of our threads develop in unexpected directions!
Relax. It’s a joke.
I am still waiting for the unsubstantiated claims made in CS as witnessed by Scott.
It could matter a bit whether Scott is his real name. Commenters “Scott” and “George” here have the same IP address and seem to be saying the same thing in the same sort of broken English. “Scott Wolfe” is a generic sort of name, but a quick google search reveals nothing related to this topic. “George Stoneriver” seems to have no internet presence at all (besides the comments at this blog).
Looks a lot like a sock puppet to me. I have no problem with pseudonyms, but sock puppets are not cool.
That was Xiteng Liu’s comment on which I agree now.
Visit his website http://qualvisual.net, download the slide file “System compression: a new computational phenomenon”.
In concise language, he disputes some misleading claims (or say proclaims) in CS, including random sampling, l1-minimal, Shannon theorem and especially performance.
His works and comments remind me a lot.
Quite simply, Xiteng totally misunderstands CS and uses his faulty definition of it to argue that the technique he is developing is better. It really is a typical straw man argument. The LinkedIn discussion shows in great detail that I was not the only one trying to convince him that what he felt was CS was in fact not. I wholeheartedly invite any person interested in the subject to read the thread for their own edification.
You really distort the truth, which is that Xiteng’s working results clearly reveals compressed sensing story is a replay of The Emperor’s New Clothes.
People ostensibly pretend to have seen the beautiful clothes, and also pretend to be enjoying talking about it, so as to attract more innocent people to join in the cheating game.
Initially, Xiteng simply intended to ask about new progress on CS which is defined to be better than the one developed in Rice University. Obviously, he was very dissatisfied with the Rice results. Indeed, the Rice CS results perform too poor, compared with Xiteng’s work.
The war quickly started. Without addressing his inquiry, many people including yourself began to throw stones at him. Quite a few even wielded language violence.
That is what I watched from the LinkedIn discussion. It is so different from your description, right?
Honesty principle should be abided by in both scientific research activities and discussions.
We could restart the LinkedIn discussion over again, but no amount of explanation seems to get you to understand that CS is primarily about a generalized sampling mechanism beyond the raster mode, one that also happens to be linear. Xiteng’s solution is in the family of what is called transform coding, which happens to be a nonlinear process on top of the raster sampling mode.
On the hardware/sensor level, we have a need to go beyond the raster sampling process which Compressed Sensing is addressing. Solutions for the reconstruction of raster mode + nonlinear compression are simply not responding to that need. One should note the purposeful omission of that basic fact in the large number of discussion entries by Xiteng in the LinkedIn discussion. If you have read that discussion, you will probably notice that I am not the only one who points out this simple, honest fact.
As a naive outsider, what are application oriented successes of compressed sensing? Some examples where it does significantly better than the status quo?
Just curious. Wikipedia etc. is a bit light on this.
There are different stages of maturity of the technology but some obvious and more mature ones right now would include:
The MRI example in this Wired piece ( http://www.wired.com/magazine/2010/02/ff_algorithm/ )
Cheaper SWIR cameras at http://www.inviewcorp.com
Please note that in the first example no hardware change is needed, while in the second the whole apparatus is different from a “normal” SWIR camera.
Third joke! Igor is not only a great magician, but also a good comedian.
The “MRI example” is simply a failed magazine propaganda of CS, incurred tides of criticism in comments.
The “SWIR camera” is simply the CS results of Rice University. Its performance can be seen at http://dsp.rice.edu/cscamera or http://qualvisual.net/Rapid.php. It is “the emperor’s ugly body,” reflecting the fact that our emperor is naked.
The InView company is nothing but a loom, without silk or gold on it.
I agree with you that these applications look iffy.
One reason I would tend to believe in CS, though, is Terry Tao. That guy’s so good that if he’s lent his name to the work it’s hard to imagine it’s a dud. He’s neither a fake nor an idiot.
The name-calling you keep using comes only from you. I note your continuous use of this blog’s comments to spam readers with Xiteng’s commercial endeavors.
Calling an MRI scan that saves some kid’s life iffy is a little problematic in my book. Methinks no amount of blog comments will get you to think seriously about it.
It was nice “talking” to you both,
The “saving a kid’s life” jab is unfair. Sure, saving a kid’s life is valuable. So are MRIs. What I’m trying to judge is how much value CS adds to the default MRI algorithms. I don’t think that’s in any way problematic.
Since you started it, I’ll say this: “Let’s wait and watch. If even 10% of new commercial MRI machines adopt Compressed Sensing as their default strategy in a few years from now, I’ll concede it’s valuable”.
Till then it’s yet another research curiosity and the jury is still out whether it makes any applied sense.
Fair enough, but it’s not a good sign that the other side of the argument is being taken by a pair of sock puppets. That alone does not prove anything but I think it provides some evidence.
CS brings the potential to reduce MRI measurement times and reconstruction costs. CS reconstructions need less measurement time on the scanner and give better resolution compared with the original. MRI was the original starting point of CS: practitioners observed the working algorithm, and the math behind it was proven later by the Candes-Tao-Donoho papers.
Have a look at Dr. Lustig’s works:
You can’t change the infrastructure overnight. I think we will see, in a decade or so, CS being used routinely.
So I recommend you do your homework well before attacking a body of scientific work. Philosophical issues of rebranding or popularity are something different; but you are being ignorant for not seeing the “revolution”.
For “Raul” read “Rahul”.
Less transiently: It is unfortunate that people in compressed sensing seem to regard it as obvious that “CS” means “compressed sensing” to everyone. Outside that field, I doubt that abbreviation is well or widely understood. In fact I read it here first as meaning “computer science” or “computing science”. (Context makes it obvious that a particular unpleasant gas is not implied.)
Please stop the madness. Is this the first time you have seen an acronym collision in engineering and science? Has that resulted in either of these acronyms being replaced? And what about this awful characterization: does using the tools of compressive sensing make one “one of these people in compressed sensing”? Good grief. Just try it out; there are hundreds of sparsity-recovering solvers available for anybody to download (it is probably one of the first communities to have released so many of its solvers in a form compatible with reproducible research, something I seldom see in other areas of engineering and science).
Maybe this paper can enlighten you: Unveil “Compressed Sensing”, http://arxiv.org/ftp/arxiv/papers/1311/1311.5831.pdf .
Please stop spamming us. This is reprehensible behavior. I can only assume that Xiteng Liu would not appreciate that someone is spamming websites on his behalf. It’s not good for someone’s reputation to be associated with a sock-puppet. I know that if someone were spamming on behalf of my research, I’d be really annoyed.
The arXiv manuscript you refer to is not a publication. Do you have some references which are published, i.e., have passed peer review?
Is it also a joke to say “Xiteng totally misunderstands CS”, provided the fact that his work so formidably dominates compressed sensing works? See it at http://qualvisual.net.
If anyone happens to be reading this deep into the comments, please note that the above is not, as it appears, a discussion between Igor Carron and two people named Scott Wolfe and George Stoneriver. Rather, it has been Igor Carron (who is indeed a real person) patiently dealing with someone who seems to be spamming us using two sock puppets to promote the work of Xiteng Liu. Given that the thread has gone on so long, it seems too late to simply delete it. But I really hate spammers. Please go somewhere else with this, stop wasting our time! If you think your work is so damn good, just start up your own blog, don’t bother us any more.
Nesting means I can’t reply to Igor’s comments directly.
“Please stop the madness. Is this the first time you have seen an acronym collision in engineering and science? Has that resulted in either of these acronyms being replaced? And what about this awful characterization: does using the tools of compressive sensing make one “one of these people in compressed sensing”? Good grief. Just try it out; there are hundreds of sparsity-recovering solvers available for anybody to download (it is probably one of the first communities to have released so many of its solvers in a form compatible with reproducible research, something I seldom see in other areas of engineering and science).”
Igor: I think you are being oversensitive here. Also, your quotation is a misquotation, and you impute a negative tone not present and not intended. I am just saying that assuming “CS” without explanation will be widely understood as “compressed sensing” is likely to be incorrect. Characterising my statement as “madness” is an abuse of language. I won’t treat it as offensive because I guess you are somehow conflating my comments with previous comments on this blog with quite different content and motivation. I have no animus against compressed sensing, which I have never studied.
I agree with you: being misled by abbreviations or acronyms is not new and won’t stop, but that does not rule out requests to think carefully about what you assume is understood. “Good grief”, to quote you, is precisely why such requests are made.
I apologize if I offended you and you are right in that I am sometimes being overly sensitive.
I assumed that using the CS acronym within the comment section of a blog post on compressive sensing would not be absolutely inappropriate but I can see where it would be confusing.
As you have rightly pointed out, it is sometimes difficult to distinguish a fair criticism from say … not so constructive inputs.
What really concerns me from a mostly outsider’s point of view is that, to non-specialists, the strawman argument on compressive sensing seems to get more traction than any useful paper making real headway. When I see @Rahul making a typically “balanced” statement about compressive sensing being iffy when it has already saved somebody’s life, I am at a loss as to how to make it clear to any outsider that the ongoing conversation she/he is reading is not between two specialists trying to make a valid point. Balancing two views, one of which is strongly “biased,” cannot reasonably bring out insights for the readers. This is an issue worth considering in the current debate on open peer review processes.
When I first saw “CS” in this thread, I thought “computer science” too! In any case, though, there’s no excuse for that guy who tried to take over the thread using sock puppets. That was horrible. If Scott Adams wants to use a sock puppet, fine, the guy’s an artist and we should cut him some slack. But to use sock puppets to try to trash legitimate scientific work, that’s not cool.
Igor: Thanks for this. As said, I didn’t take your comments as offensive: the only way they made sense is that you thought I was coming from a completely different direction.
In this thread, I was asked earlier about applications of compressive sensing. Here is one that will be all over the news without being stamped “compressive sensing”: the Time of Flight camera at MIT in the group of Ramesh Raskar. The project page is here: http://web.media.mit.edu/~achoo/lightsweep/
For those of you who are interested in understanding why it has a different flavor than, say, traditional computational imaging, you really want to look at figure 7 of the paper. It shows quite clearly that the solvers devised in compressive sensing are key to enabling the technology featured in the paper. The first two figures use traditional least-squares solvers while the remaining ones use different flavors of sparsity-recovering solvers.
Pingback: Sleazy sock puppet can't stop spamming our discussion of compressed sensing and promoting the work of Xiteng Liu « Statistical Modeling, Causal Inference, and Social Science Statistical Modeling, Causal Inference, and Social Science