Stan pedantic mode

This used to be on the Stan wiki but that page got reorganized so I’m putting it here. Blog is not as good as wiki for this purpose: you can add comments but you can’t edit. But better blog than nothing, so here it is.

I wrote this a couple years ago and it was just sitting there, but recently Ryan Bernstein implemented some version of it as part of his research into static analysis of probabilistic programs; see this thread and, more recently, this thread from the Stan Forums.

Background:

We see lots of errors in Stan code that could be caught automatically. These are not syntax errors; rather, they’re errors of programming or statistical practice that we would like to flag. The plan is to have a “pedantic” mode of the Stan parser that will catch these problems (or potential problems) and emit warnings. I’m imagining that “pedantic” will ultimately be the default setting, but it should also be possible for users to turn off pedantic mode if they really want.

Choices in implementation:

– It would be fine with me to have this mode always on, but I could see how some users would be irritated if, for example, they really want to use an inverse-gamma model or if they have a parameter called sigma that they want to allow to be negative, etc. If it is _not_ always on, I’d prefer it to be on by default, so that the user would have to declare “–no-check-mode” to get it to _not_ happen.

– What do others think? Would it be ok for everyone to have it always on? If that’s easier to implement, we could start with always on and see how it goes, then only bother making it optional if it bothers people.

– I’m thinking these should give Warning messages. Below I’ll give a suggestion for each pattern.

– And now I’m thinking we should have a document (not the same as this wiki) which explains _why_ for each of our recommendations. Then each warning can be a brief note with a link to the document. I’m not quite sure what to call this document, maybe something like “Some common problems with Stan programs”? We could subdivide this further, into Problems with Bayesian models and Problems with Stan code.

Now I’ll list a bunch of patterns that we’d like to catch.

To start with, I’ll list the patterns in no particular order. Ultimately we’ll probably want to categorize or organize them a bit.

– Uniform distributions. All uses of uniform() should be flagged. Just about all the examples we’ve ever seen are either superfluous (as they just add a constant to the log density) or mistaken (in that they should have been entered as bounds on parameters).

Warning message: “Warning: On line ***, your Stan program has a uniform distribution. The uniform distribution is not recommended, for two reasons: (a) Except when there are logical or physical constraints, it is very unusual for you to be sure that a parameter will fall inside a specified range, and (b) The infinite gradient induced by a uniform density can cause difficulties for Stan’s sampling algorithm. As a consequence, we recommend soft constraints rather than hard constraints; for example, instead of giving an elasticity parameter a uniform(0,1) distribution, try normal(0.5,0.5).”
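A sketch of the two common cases (parameter names are illustrative, not from the original post):

```
parameters {
  real<lower=0, upper=1> theta;  // bounds declared here ...
  real phi;                      // ... but not here
}
model {
  theta ~ uniform(0, 1);  // superfluous: only adds a constant to the log density
  phi ~ uniform(-2, 5);   // mistaken: the bounds were never declared, so proposals
                          // outside (-2, 5) have zero density and get rejected
}
```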

– Parameter bounds of the form “lower=A, upper=B” should be flagged in all cases except A=0, B=1 and A=-1, B=1.

Warning message: “Warning: On line ***, your Stan program has a parameter with hard constraints in its declaration. Hard constraints are not recommended, for two reasons: (a) Except when there are logical or physical constraints, it is very unusual for you to be sure that a parameter will fall inside a specified range, and (b) The infinite gradient induced by a hard constraint can cause difficulties for Stan’s sampling algorithm. As a consequence, we recommend soft constraints rather than hard constraints; for example, instead of constraining an elasticity parameter to fall between 0 and 1, leave it unconstrained and give it a normal(0.5,0.5) prior distribution.”
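For example (a sketch; the parameter name and prior values are illustrative), instead of

```
parameters {
  real<lower=-2, upper=5> beta;  // hard bounds: flagged by pedantic mode
}
```

one could write

```
parameters {
  real beta;              // unconstrained
}
model {
  beta ~ normal(1.5, 2);  // soft constraint spanning roughly (-2, 5)
}
```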

– Any parameter whose name begins with “sigma” should have “lower=0” in its declaration; otherwise flag.

Warning message: “Warning: On line ***, your Stan program has an unconstrained parameter with a name beginning with “sigma”. Parameters with this name are typically scale parameters and constrained to be positive. If this parameter is indeed a scale (or standard deviation or variance) parameter, add lower=0 to its declaration.”

– Parameters with a positive-constrained distribution (such as gamma or lognormal) and no corresponding constraint in the declaration.

Warning message: “Warning: Parameter is given a constrained distribution on line *** but was declared with no constraints, or incompatible constraints, on line ***. Either change the distribution or change the constraints.”
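A minimal sketch of the mismatch (names and arguments illustrative):

```
parameters {
  real sigma;               // flagged: declared without lower=0 ...
}
model {
  sigma ~ lognormal(0, 1);  // ... but lognormal has support only on (0, infinity)
}
// fix: declare the parameter as real<lower=0> sigma;
```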

– gamma(A,B) or inv_gamma(A,B) should be flagged if A = B < 1. (The point is to catch those well-intentioned but poorly-performing attempts at improper priors.)

Warning message: “Warning: On line ***, your Stan program has a gamma or inverse-gamma model with parameters that are equal to each other and set to values less than 1. This is mathematically acceptable and can make sense in some problems, but typically we see this model used as an attempt to assign a noninformative prior distribution. In fact, priors such as inverse-gamma(.001,.001) can be very strong, as explained by Gelman (2006). Instead we recommend something like a normal(0,1) or student_t(4,0,1), with the parameter constrained to be positive.”
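For instance (a sketch; the name tau and the alternatives shown are illustrative):

```
parameters {
  real<lower=0> tau;
}
model {
  tau ~ inv_gamma(0.001, 0.001);  // flagged: meant as noninformative, actually strong
  // recommended alternatives (half-normal or half-t, given the lower=0 bound):
  // tau ~ normal(0, 1);
  // tau ~ student_t(4, 0, 1);
}
```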

– if/else statements in the transformed parameters or model blocks: These can cause problems with HMC so should probably be flagged as such.

Warning message: Hmmm, I’m not sure about this one!

– Code has no indentation.

Warning message: “Warning: Your Stan code has no indentation and this makes it more difficult to read. See *** for guidelines on writing easy-to-read Stan programs.”

– Code has blank lines.

Warning message: “Warning: Your Stan code has blank lines and this makes it more difficult to read. See *** for guidelines on writing easy-to-read Stan programs.” (Bob: I don’t think we want to flag all blank lines, just ones at starts of blocks—otherwise, they’re convenient for organizing)

– Vectorization that doesn’t involve the first argument. For example, `y[n] ~ normal(mu, sigma)` where `y[n]` is a scalar and `mu` is a vector — this broadcasts the scalar against the whole vector and adds too many density increments.
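A minimal sketch of the pattern (names illustrative):

```
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  vector[N] mu;
  real<lower=0> sigma;
}
model {
  for (n in 1:N)
    y[n] ~ normal(mu, sigma);  // broadcasts scalar y[n] against vector mu:
                               // N increments per iteration, N^2 in total
  // likely intended: y[n] ~ normal(mu[n], sigma);
  // or, vectorized:  y ~ normal(mu, sigma);
}
```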

– Undefined variables. Bob describes this as a biggy that’s a lot harder to code. (Bob: But when we have compound declare/define, then we can flag any variable that’s defined and not declared at the same time. It is much harder to find ones that never get defined; not impossible, just requires a second pass or keeping track of which variables aren’t defined as we go along.)

Warning message: “Warning: Variable ** is used on line *** but is nowhere defined. Check your spelling or add the appropriate declaration.”

– Variables defined lower in the program than where they are used.

Warning message: “Warning: Variable ** is used on line *** but is not defined until line ****. Declare the variable before it is used.”

– Large or small numbers.

Warning message: “Try to make all your parameters scale free. You have a constant in your program that is less than 0.1 or more than 10 in absolute value on line **. This suggests that you might have parameters in your model that have not been scaled to roughly order 1. We suggest rescaling using a multiplier; see section *** of the manual for an example.”
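One way to do this rescaling in current Stan is the affine offset/multiplier declaration (a sketch; the scale 1000 is illustrative):

```
parameters {
  // beta is expected to be on the order of 1000; the multiplier lets the
  // sampler work on a unit-scale version of the parameter
  real<multiplier=1000> beta;
}
model {
  beta ~ normal(0, 1000);
}
```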

– Warn user if parameter has no priors or multiple priors. (Bruno Nicenboim suggested this on https://github.com/stan-dev/stan/issues/2445.)

– Warn user if parameter has indexes but is used without indexes in a loop, e.g.,
```
real y[N];
for (n in 1:N)
  y ~ normal(0, 1); // probably not what user intended: should be y[n]
```

– If there are other common and easily-identifiable Stan programming errors, we should aim to catch them too.

Pedantic mode for rstanarm:

– Flag predictors that are not on unit scale

– Check if a predictor is constant

Comments from Bob:

Most of these are going to have to be built into the parser code itself to do it right, especially given the creative syntax we see from users.

I don’t know if the “sigma…” thing is worth doing. Not many of our users use “sigma” or “sigma_foo” the way Andrew does.

Do we also flag normal(0, 1000) and other attempts at very diffuse priors?

The problem with a lot of this is that we’ll have to be doing simple blind matching. Most of our users don’t use “sigma” (or “tau” the way Andrew does). Nor do they follow the sigma_foo standard—it’s all over the place.

We can pick up literal values (and by literal, I mean it’s a number in the code). We won’t be able to pick up more complex things like gamma(a, b) where a < 1 and b < 1, because that’s a run-time issue, not a compile-time issue, for instance if a and b are data.

Should we also flag “lower=A” if A != 0 and A != -1? (Again, we’ll be able to pick up literal numbers there, not more complex violations).

Conditionals (if-else) are only problematic if the conditions depend on parameters.

Everyone wants to do more complicated things, like make sure that if a variable is given a distribution with a bound, the bound is declared in the parameters block. So no:

real alpha;

alpha ~ lognormal(…)

because then alpha isn’t well defined. These things are going to be a lot of work to track down and we’ll again only be able to pick out direct cases like this.

Comment from Michael:

I’m thinking of a -pedantic flag to the parser, which we can then call through the interfaces in various ways, so it would make sense to build everything up in the parser itself.

Bob replies:

The patterns in question are complex syntactic relationships in some cases, like the link between a variable’s declaration and use. If we start cluttering up the parser trying to do all this on the fly, things could get very ugly (in the code) very fast, as the underlying parser’s just a huge pain to work with.

Mike:

So what’s the best place to introduce these checks? It’s not dissimilar to what’s already done with the Jacobian checks, no?

Bob:

It depends what they are. I should’ve been clearer that I meant on the fly as in find the issue as it’s parsing.

There are two approaches. One is to plumb enough information top down through the parser so that we can check inside of a single expression or statement — that’s what the Jacobian check does now. To do that, the source of every variable needs to be plumbed through the entire parse tree. If we do more of that it’s going to get ugly. Some of these are “peephole” warnings — things you can catch just locally.

For others, we have the AST and can design walkers for it that walk over the code any old way and report.

Or, we could do something hacky with regexes along the lines of cpplint (a Python program, not something plumbed into the C++ parser, at least as far as I know).

#### Indentation Errors

Catch when a program’s indentation doesn’t match the braces. E.g.,

```
for (i in 1:10)
  mu[i] ~ normal(0, 1);
  sigma[i] ~ cauchy(0, 2);
```

Should flag the line with `sigma`, since `i` is not in scope there: despite the indentation, only the `mu` line is inside the loop.

#### Bad stuff

– Parameter defined but never used.

– Overpromoting. Parameter defined three times.

Comments:

  1. What is the current state and style of the parser? I thought that there had been some talk at one point about reimplementing it with an actual AST.

    If the Stan compiler is implemented in such a way that Stan code can be parsed into an AST in a front-end, then something like a linter or tidier could be made that was independent of the back-end (maybe live in another git repository even). This sounds (to me) like the normal separation of concerns found in an interpreter/compiler. The Stan code would be parsed to an AST by the front end. This AST would be optimized and converted into C++ code by the code-generating back-end (similar to how modern compilers function).

    But there may be technical debt that makes this possible in principle but a nightmare to implement (similar to how gcc’s age and complexity can make it more difficult to hack on than llvm).

    • Hi Eric,

      The parser in Stanc3 does use an actual AST (actually multiple ASTs for different stages), and our architecture does enforce some separation of concerns between the frontend and backend (for example, we can swap in a Tensorflow Probability backend).

      The linter/pedantic mode does operate independently of the backend and could indeed be made into a standalone project. The main reason it’s currently built into the compiler is for user convenience (fewer binaries/installations) and developer convenience (no packaging and versioning headaches). Some programming language compilers do have built-in linters, and Pedantic mode is somewhere between a linter and a warning suite, so there’s some precedent.

      • Cool! I’ve been meaning to take another look at the OCaml parser in Stanc3. I didn’t realize that it was ready for prime-time (it’s been a long while since I’ve tried to understand the code).

    • We’ve separated parsing, AST, and code gen from the get-go in stanc (C++, Spirit Qi framework). Recently, stanc was replaced with stanc3 (OCaml, Menhir framework). The original stanc used an AST that was very close to the language, whereas stanc3 uses more processed intermediate representation layers. stanc3 does much more of the error checking in standalone second passes, whereas stanc made the unfortunate design decision to do most of that checking online during parsing (using the AST types, but before a complete parse).

      So there’s not much technical debt now that there’s a clean OCaml implementation. It’s been much easier to use to add new features than the old parser, which was tied up in arcane C++ implementations of lazy generic containers with variant types.

      Not to give the game away, but there should be an announcement of an alternative to the Stan math library as a back end, which highlights the flexibility of the new AST.

  2. A few unrelated comments:

    1) Bob says that the warnings should be raised when the program is parsed. This seems to make sense (though if he had said the opposite I would nod my head to that as well.) But since I routinely compile a model once, close everything, then load a cached version of the model again I would vote that the warnings be saved in the model object itself and displayed whenever someone does model.summary() or similar. This way, I have a chance to assess whether a model is reasonable even if I (or a collaborator) miss it the first time.

    2) The warning on uniform priors should, ideally, also be raised for quantities that are assigned an implicit uniform prior — I recognize that this may be trickier to implement. But this would be helpful to catch.

    3) Continuing along the trajectory of warnings that may be harder to implement and less generally useful, it could be helpful to raise a warning if a user applies the same prior to different parameters of the same distribution. For example, if
    y ~ normal(mu, sigma);
    mu ~ normal(0, 1);
    sigma ~ normal(0, 1);
    then the parser could raise a warning that the user is applying the same prior to a location and scale parameter which doesn’t in general reflect best practices for prior distributions. This is a bit in your face and preachy, but I’ve worked on several problems where this is the default impulse of collaborators.

    • 1) I think this is a really interesting point.

      2) This is actually implemented now. It’ll say something like, “Parameter x has no priors.”

      3) Could be. We could also suggest that it’s a little unusual for a scale parameter to be normally distributed – but, we do want to keep false positives to a minimum, at least by default.

        • Zhou:

          An exponential could be ok too. Actually, exponential is our default prior for scale parameters in rstanarm. Sometimes for hierarchical models I want something stronger, because long tails on the posterior for the scale parameter correspond to occasional very small amounts of partial pooling, which can create computational instability. But it depends on the problem; this is still an open area.

        • Dan Simpson gave a talk last week and said someone did some simulation studies showing exponential is probably better for hierarchical standard deviations.

          The difference was in the interpretation of tails, so in the theme of what you’re saying. I don’t know if there was a reference for this. Probably just e-mail him if curious.

        • “showing exponential is probably better for hierarchical standard deviations”

          As a general rule? “Better” in what sense? I have been using half-normal for hierarchical group standard deviations.

        • > I have been using half-normal for hierarchical group standard deviations.

          Me too, that’s why it stuck out to me when I heard it.

          I dunno the reference, but I think it was some simulation studies generating and fitting data with a hierarchical model with half-normal, exponential, and, I *think* (I forget exactly), something with heavier tails, probably a half-student-t?

          So in hierarchical modeling we’re ostensibly not sure if something has zero variation or lots of variation.

          I think if something actually has zero variation (this was simulated data I believe, so it was controlled), the result was that half-normal and exponential do fine, and the heavy-tailed third thing was questionable. If the something had variation, then exponential and the heavy-tailed third thing did fine.

          So somehow exponential was a nice compromise, and the interpretation was something about tails.

        • > So somehow exponential was a nice compromise, and the interpretation was something about tails.

          hmm, ok. I guess it makes sense if you program in zero variation that a prior with mass towards zero works better and if you program in lots of variation, that a longer tail prior is better… I guess I was preferring defaulting for conservatism and more partial pooling in the ‘real world’. Maybe it depends on your application? When I think about something like logistic regression, needing long tails doesn’t really seem so plausible, right? For example in my logistic regression, if I’m seeing a standard deviation of 2.5 for varying intercepts for a group, that seems like a lot. Since I’m not only interested in seeing what the variation is at the different levels, but also in partial pooling, do I really want my prior to better capture this variation or to help rein it in slightly (since most likely that variation could be due to sampling error or something else)? Does that make sense?

        • Yeah what you’re saying makes sense.

          Another thing Dan was saying was exponential turns on a constant rate.

          The normal distribution starts out flat and then starts regularizing really hard at some point.

          I’m not sure. This is the paper he was presenting on: https://projecteuclid.org/euclid.ss/1491465621 . Maybe it has references for the normal vs. exponential thing. I admit I didn’t read it, just listened :P.

        • > The normal distribution starts out flat and then starts regularizing really hard at some point.

          Yes, that makes sense to me – expecting some small amount of variation but regularizing large variation. That seems like a good thing for the prior to do.

          > Another thing Dan was saying was exponential turns on a constant rate.

          I’ll have to think about that… I’m not sure what that implies in terms of the regularization. More mass in a longer tail, I guess…
          Thanks for the link and the responses!

        • Both half-normal and exponential seem weird to me. When is 0 the most probable value for a standard deviation?

          I almost always use gamma(n, (n-1)/s), which has s as its most probable value and n as a shape parameter for more or less peakedness. n should be bigger than 1.
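          As a quick check of that claim, assuming the shape–rate parameterization: the mode of a $\mathrm{gamma}(\alpha, \beta)$ density is $(\alpha - 1)/\beta$ for $\alpha > 1$, so

          $$\mathrm{mode} = \frac{n-1}{(n-1)/s} = s,$$

          as stated.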

        • Well it’s not gonna be exactly zero, but the total pooling vs. non-pooling comparison is with something with zero variance and then something with non-zero variance, so including zero in the partial pooling as a possibility seems in line with that thinking.

          That’s what Dan was saying. The paper he was talking about was here, so that might have more: https://projecteuclid.org/euclid.ss/1491465621

    • @James DG

      For (1), I’d recommend not saving a cached version of a problematic model. We could persist compilation errors for later reporting in the interfaces.

      For (2), we want two separate warnings. The first is putting lower and upper bounds on a quantity, which is usually problematic both statistically and computationally. The second is having an improper prior. Either one can work, so they’re both nags about semantics.

      For (3), we generally recommend scaling parameters to unit scale where a standard normal or half-normal makes sense. It greatly speeds up adaptation, which starts from an assumption of unit scale by default. Like (2), I agree it’s nagging. That’s why we’ll be able to turn pedantic mode off.

      @Ben

      Dan and the paper also said that thinner tails were OK if your variables really stayed in that range. That’s why I was asking him about whether this was just for robustness. He liked the exponential because it’s scale free, meeting one of the desiderata of his penalized complexity priors.

  3. So this is sort of like errors vs warnings? Often the compiler lets you set the level of verbosity too?

    Finally, comparing to other ecosystems: don’t some other languages (Perl?) have a strict mode like this?

    Overall, sounds like a great idea!

    • Yes, these are warnings. But it’s more like linter warnings (like those from the cpplint standalone program we use on our C++ code) than warnings from the actual compiler.

      We’ll definitely be able to turn them off. There hasn’t been a move to prioritize them to allow levels of reporting. The bigger deal is usually having some notation to shut off the griping from within the program so that you can write code that intentionally violates a convention and not get griped at every run.

  4. “– if/else statements in the transformed parameters or model blocks: These can cause problems with HMC so should probably be flagged as such.”
    Could you elaborate on this?
    Are there any tricks to replace if/else statements with something more HMC friendly?

    • The issue is when the program control flow (including e.g. ifs and loops) depends on the value of the parameters. That can create a discontinuity in your density function (unless you are very careful), and the density isn’t differentiable at the jump (the autodifferentiator tries, but it can’t work well there). To avoid the issue, you can make sure that the density lines up across branches, or you can sometimes find a smooth mathematical alternative to using control flow, as in the sketch below.
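      For instance (a hypothetical sketch, with `lambda` a parameter in (0, 1)), instead of branching on the parameter:

      ```
      if (lambda > 0.5)
        y[n] ~ normal(mu1, sigma);
      else
        y[n] ~ normal(mu2, sigma);
      ```

      one smooth alternative is to marginalize over the branch with `log_mix`:

      ```
      target += log_mix(lambda,
                        normal_lpdf(y[n] | mu1, sigma),
                        normal_lpdf(y[n] | mu2, sigma));
      ```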

  5. In terms of use of the uniform distribution, a warning about the sampling issue (“The infinite gradient induced by a uniform density can cause difficulties for Stan’s sampling algorithm”) is useful and informative: it would make me think things like “how serious are these issues? what can I do to address them when I need to use a uniform prior?”. A warning about my judgement (“it is very unusual for you to be sure that a parameter will fall inside a specified range”), on the other hand, is a little patronizing: it would make me think things like “ok, what I’m doing is unusual: so sue me”, and would lead me to just ignore the warning, which probably isn’t the outcome you want.

    • The gradient of a uniform density is constant and finite. The gradient of the transform from a lower and upper bounded interval is a scaled and translated inverse logit. The gradient is inv_logit(x) * inv_logit(-x), which is pretty well behaved.
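      Spelling that out: for bounds $(L, U)$ the transform is $y = L + (U - L)\,\mathrm{logit}^{-1}(x)$, so

      $$\frac{dy}{dx} = (U - L)\,\mathrm{logit}^{-1}(x)\,\mathrm{logit}^{-1}(-x),$$

      which is smooth and bounded (the $(U - L)$ factor scales the inverse-logit term mentioned above).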

      All of these warnings are for practices we’ve found problematic in lots of cases. They’re not technically errors. That’s why the mode’s called “pedantic”. We just want to give people a heads up on potential problems where we can flag them. We realize users are then going to use their own judgement (or that of their supervisor or journal editor).

      • Ok, now I’m confused. The example warning given by Andrew in the OP said: “(b) The infinite gradient induced by a uniform density can cause difficulties for Stan’s sampling algorithm.” Was that wrong? I took this to refer to the gradient at points on the bounds of the interval, which is where I guess problems arise.

        The reason I ask is I’ve used stan to test models of how people infer event probabilities from samples (and samples alone), where the uniform distribution seems like a natural prior for those probabilities. If this causes difficulties for the stan sampler, should I change my approach?

  6. To what extent can scale transformations be done automatically? Is there a reason to prefer unit-scaled quantities for reasons other than numerics? (Obviously relevant for visualization, but I’m not convinced that that’s relevant here.)

    I strongly agree with Bob on the usefulness of empty lines. My code is full of them!

    > I don’t know if the “sigma…” thing is worth doing. Not many of our users use “sigma” or “sigma_foo” the way Andrew does.
    What on earth do people use instead?

    • I’ll sometimes use “scale” or “sd” instead of tau/sigma; additionally, I’ll sometimes go with postfixed or (rarely) infixed sigmas, depending on what makes the most sense for the problem.

  7. I’m not a statistician but I do design programming languages.

    1. Make as few pronouncements about code formatting, indentation and so on as possible. Users perceive these as petty, arbitrary restrictions that make the program harder to use. Instead you should try to make your software accept the most heroically mangled input you can imagine. Partly, it should be a point of pride that you can put out a good product. More importantly, there’s always the possibility that some user has a perfectly legitimate, hitherto unforeseen reason for writing code that way.

    2. Efforts to mandate good coding practices always backfire. If you want a one-sentence summary of the history of programming language design, this is a pretty strong contender.

    3. Specifically, most people will look at the warnings you’re proposing as extra work. Then they’ll ignore them, and they’ll probably do it by concluding that warnings in general issued by this program are a waste of time. Warnings are an important line of communication. Don’t devalue them.

    4. Historically, the best programming languages (C, lisp, Algol) have been targeted at high end of the programming proficiency distribution. The worst (Cobol, Java) are explicitly oriented to average or even below-average developers.

    • Jed:

      We do put out a good product—and it’s free! We want to make it an even better product. Currently, one problem with Stan is that users (including me) put bugs in their code that could be easily caught. There’s a reason that programming languages have linters and word processors have spell checkers. You talk about users. I’m a high-end Bayesian modeler and I still make lots of errors that would get trapped by the linter. I’m a high-end writer but I still get good use out of the spell checker.

      • I think Jed is not saying “don’t provide warnings” he’s saying provide warnings that are almost always *problems with the semantics* and not problems with “style”.

        So the example of people doing a vectorized operation in a loop… that’s almost guaranteed to be just them leaving out the index by accident.

        the example of using a lower and upper bound that isn’t 0,1 … that’s definitely NOT a semantic problem. If someone put in lower and upper bounds, they probably meant it.

        • Daniel:

          I take Jed’s caution about giving warnings about formatting. One issue here is that our ideal audience for Stan is high-end researchers and some high-end statisticians, not necessarily high-end programmers. High-end researchers and statisticians might still want to use Stan’s indentation conventions etc. so as to make their programs easy to read. But indentation isn’t such an issue, given that Emacs and RStudio have .stan modes already.

          Regarding upper and lower bounds . . . it depends. I’ve seen lots of examples where people will want to give a parameter a prior like uniform(-2,5), and every time I’ve seen this, it’s a mistake, as you don’t really know that these are hard bounds. I think this sort of hard constraint is typically a statistical error arising from old-fashioned thinking about how to specify a model.

        • Right, so here you’re offering *modeling advice* rather than *programming advice*.

          It’s undoubtedly the case that when they put uniform(-2,5) they wanted a uniform prior on [-2,5], so this was not a programming error.

          it may have been a really bad idea statistically, but it wasn’t the kind of error where someone did

          for (i in 1:10) {
            foo ~ normal(a, b);
          }

          where foo is an array of 10 elements, and they undoubtedly meant:

          foo[i] ~ normal(a,b);

          that’s a programming error.

    • 1. Unit-scaled variables are easier for numerical computation. They have more predictable properties in terms of floating point error and tend to induce posterior/likelihood geometries that are easier for Monte Carlo samplers and derivative-based maximization.

      2. Unit-scaled variables are more interpretable for judging the relative importance of a variable in predicting within a dataset.

      3. Prof Gelman gives a great argument for scaling continuous variables by 2 standard deviations as a reasonable default to make their coefficients and standard errors directly comparable to those for one-hot encoded categorical variables.

      http://www.stat.columbia.edu/~gelman/research/published/standardizing7.pdf
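      Concretely, the rescaling proposed there is

      $$z = \frac{x - \bar{x}}{2\,s_x},$$

      which puts a continuous predictor on roughly the same scale as a 0/1 binary predictor with equal group proportions (standard deviation 0.5).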

  8. Here is what is happening with me these days:

    1. I want my students to do some ridiculous things just for educational purposes (specify absurd priors) to see what happens. I would not want Stan or brms to prevent my students from trying this out. But it would be OK if some warnings are printed out. In some cases, I saw that Stan/brms tries to prevent you from doing absurd things. This was a bummer for me.

    2. I often do no Stan coding for months because I am busy writing up the results of already completed modeling; this writing process can take many months. Reviewers want stuff, etc etc. Then I return to the code and discover that something fundamental has changed somewhere in Stan or brms or R or even the OS. Obviously, I don’t feel I need to go and read the documentation every few months (after all, I am a big-shot expert), but it seems I have to somehow keep up with the developments while I am away from the command line for a few months. It would be really cool for people like me to get some kind of you’re-an-idiot-level verbose warning or indication that I have fallen behind the times.

    PS On a side-note, my entire Stan workflow is broken right now, due to all kinds of weird interactions between Stan, R, and RStudio. I partially fixed it, but am forced to run all my Stan code from the command line. Using RStudio has become impossible due to it not being able to play nice with the parallel library. I have seen some supposed workarounds on Discourse for this but they don’t work for me, so it’s all at a dead-end for me. I must say I was and remain really frustrated by the brokenness of everything right now. I am writing a book that uses Stan and brms etc, and this morning I just hit compile (= ran a bash script on the command line), and everything stopped working; new Stan error messages printed out. It’s all a mystery; it’s literally like one morning everything stops working. Maybe I am getting old, but I feel I need more informative messages from the software.
