4 thoughts on “Here’s some serious statistical computing”

  1. Andrew, Aleks:

    This may not be relevant to your effort, but in the past few months there has been renewed activity on performing Lp regularization with p less than 1. The methods use iteratively reweighted least squares schemes, which rely on more traditional tools than the amazing linear programming steps that Stephen Boyd et al. use. In the compressed sensing business, it was found that this leads to increased sparsity in the reconstruction of the signal, i.e. fewer linear combinations of measurements (CS measurements) are needed to reconstruct the original signal.
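
    For concreteness, here is a minimal sketch in Python of the reweighting idea, assuming the standard noiseless setup b = A x with an underdetermined A. The function name, the smoothing term eps, and its shrinkage schedule are illustrative choices, not Chartrand's exact algorithm.

        import numpy as np

        def irls_lp(A, b, p=0.5, n_iter=50, eps=1.0):
            """Sketch of IRLS for min sum_i |x_i|^p subject to A x = b, with p < 1."""
            x = np.linalg.lstsq(A, b, rcond=None)[0]      # least-norm starting point
            for _ in range(n_iter):
                # Weights from the smoothed penalty sum_i (x_i^2 + eps)^(p/2)
                w = (x**2 + eps) ** (p / 2 - 1)
                # Weighted least squares under the constraint A x = b:
                # x = W^-1 A' (A W^-1 A')^-1 b, with W = diag(w)
                WinvAt = A.T / w[:, None]
                x = WinvAt @ np.linalg.solve(A @ WinvAt, b)
                eps = max(eps / 10, 1e-8)                 # gradually sharpen the penalty
            return x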

    Rick Chartrand at LANL has several articles on the subject:
    http://math.lanl.gov/%7Erick/Publications/

    I mentioned it here:

    http://nuit-blanche.blogspot.com/2007/12/compress
    (check the comments as well).

    For more on compressed sensing:
    http://www.dsp.ece.rice.edu/cs/
    http://nuit-blanche.blogspot.com/search/label/com

    Igor.

  2. Igor,

    Thanks for the links.

    For the purpose of modeling, it's not clear to me that sparsity is a goal. I'm actually happier to have more things included in the model. It's easier to explain why something is in the model than to explain why it's left out.

    These models might fit well, though. I'm not slamming the models; I'm just questioning the (implicit) goal of sparsity in reconstruction, at least for modeling and prediction purposes. (For signal-processing and data-reduction purposes, I can see why sparsity is desirable.)

  3. Andrew,

    Because we deal with engineering data, the actual goal of most compressed sensing schemes is to handle signals that are compressible (not strictly sparse), i.e. signals whose coefficients follow power laws in different bases of interest. Hence the interest in finding bases that produce sparser representations of edges in the imaging world ( http://nuit-blanche.blogspot.com/2007/12/compress… ), i.e. bases with better decaying exponents (like curvelets, as opposed to just wavelets).

    So to be clear, there is no implicit goal of sparsity. Rather, in the imaging world, there have been findings from the physiology of the cortex suggesting that we do seem to use bases that yield sparser decompositions of scenes ( http://redwood.berkeley.edu/bruno/research/ ). Sparser in this context means that natural images show some sort of power-law decay in certain bases, which our sensors (eyes) seem to have evolved to emulate.
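
    To make "compressible" concrete, here is a toy numerical illustration (the decay exponent and sizes below are arbitrary assumptions): with power-law coefficients nothing is exactly zero, yet the few largest coefficients hold essentially all of the energy.

        import numpy as np

        n, alpha, k = 10_000, 1.5, 100                 # assumed size and decay exponent
        coeffs = np.arange(1, n + 1) ** (-alpha)       # sorted power-law coefficients
        energy = np.cumsum(coeffs**2) / np.sum(coeffs**2)
        print(f"top {k} of {n} coefficients hold {energy[k - 1]:.2%} of the energy")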

    In the context of compressed sensing, the L1 reconstruction techniques (Basis Pursuit and related schemes) have been proven to recover exactly sparse decompositions; by extension, much of the work has also focused on compressible signals. The expectation would be for Lp techniques to produce sparser decompositions for these compressible signals.
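
    For reference, Basis Pursuit itself is just a linear program; below is a minimal sketch using the usual textbook variable split (scipy and the toy problem sizes are my assumptions, not any particular paper's setup).

        import numpy as np
        from scipy.optimize import linprog

        def basis_pursuit(A, b):
            """min ||x||_1 subject to A x = b, via the split x = u - v with u, v >= 0."""
            m, n = A.shape
            res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                          bounds=(0, None))
            return res.x[:n] - res.x[n:]

        # Toy check: recover a 5-sparse vector from 40 random measurements
        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        x0 = np.zeros(100)
        x0[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
        print(np.allclose(basis_pursuit(A, A @ x0), x0, atol=1e-6))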

    Igor.

  4. I'm surprised that the posterior modes method is not used more often. The program Latent GOLD uses it for latent class models, and it gives good results by keeping the outcome probability estimates away from the boundaries.

    The main criticism seems to be the use of asymptotic results, which always seems a bit obsessive unless there is only a small amount of data and the model is perfectly correct. Would a possible solution be to profile the posterior distribution, similar to the way a profile likelihood is constructed?
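
    One way to make that concrete: evaluate the log-posterior on a grid of the parameter of interest, maximizing over the nuisance parameters at each grid point, exactly as one would for a profile likelihood. A minimal sketch for a toy normal model (the data, priors, and grid are illustrative assumptions):

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(1)
        y = rng.normal(loc=1.0, scale=2.0, size=30)    # toy data

        def log_post(mu, log_sigma):
            sigma = np.exp(log_sigma)
            loglik = -len(y) * log_sigma - np.sum((y - mu) ** 2) / (2 * sigma**2)
            logprior = -mu**2 / 200 - log_sigma**2 / 200   # weak normal priors
            return loglik + logprior

        # Profile posterior for mu: maximize out log_sigma at each grid point
        mu_grid = np.linspace(-1.0, 3.0, 81)
        profile = [-minimize_scalar(lambda s: -log_post(mu, s),
                                    bounds=(-5, 5), method="bounded").fun
                   for mu in mu_grid]
        print(mu_grid[int(np.argmax(profile))])        # profile-posterior mode for mu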
