This is Jessica. Florian Echtler put together a list of weird human computer interaction papers that’s too good not to share. Urinal games, robots powered by household pests, and closeness with your remote partner through synchronized trash bins.
I recall other projects from HCI researchers that have stuck in my memory because there was a compelling physical or ethical dimension beyond the sheer novelty. For instance, can you teleport personal inanimate objects with 3D printing such that the object remains unique and the physics seem to work out? Then there’s the creepier muscle hacking examples where you agree to let someone else control your limb for a while.
I guess it’s a luxury in computer science that this kind of thing counts as research. We have conferences where novelty and new interaction modalities are highly valued. For one particular conference, the standard thing people say (or at least used to say a few years ago) about what gets work accepted is that some aspect of it should “seem like magic.”
Some of our more mundane conferences include an “alt” track or special alt venue to provide, e.g., a “forum for controversial, risk-taking, and boundary pushing research” and publish “bold, provocative, unusual, unconventional, thought-provoking work.” Often work in these tracks is less novel on the tech side but still deemed a little too unconventional to send to the main tracks. As an example, one of my favorite visualization-related papers in this category is by Michael Correll on Ross-Chernoff glyphs. Chernoff faces are a mostly terrible glyph-based approach to visualizing multivariate data where you map data variables to facial properties like the width of the eyes, the angle of the eyebrows, the shape of the face, etc. (which people are pretty bad at perceiving separately). As Michael points out, this hasn’t prevented the original Chernoff faces paper from receiving thousands of citations. Ross-Chernoff visualizations instead map multivariate data to properties of a Bob Ross painting (and the code is provided, so you can create them yourself, but of course you probably shouldn’t if you’re serious about understanding your data). But the paper also makes a more earnest point: it’s ironic that we can have visualization techniques like this and know they are bad, but not have enough theory to be able to say why.
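To make the encoding concrete, here is a minimal sketch of the Chernoff-face idea (this is my illustration, not Correll’s code or the original 1973 method): each variable is rescaled to [0, 1] and assigned to one facial property. The feature names, variable names, and data ranges are all made up for the example (loosely mtcars-flavored); actually drawing the face is left out, since the point is just the variable-to-feature mapping.

```python
# Toy sketch of the Chernoff-face mapping: one data record's variables are
# rescaled to [0, 1] and each is assigned to a facial property. Feature
# names and data values here are illustrative, not from any real system.

FEATURES = ["face_width", "eye_width", "brow_angle", "mouth_curve"]

def chernoff_params(record, lo, hi):
    """Map one data record to facial parameters in [0, 1].

    record, lo, hi: dicts keyed by variable name; lo/hi give the
    per-variable ranges used for rescaling.
    """
    params = {}
    for feature, var in zip(FEATURES, record):
        span = hi[var] - lo[var]
        params[feature] = (record[var] - lo[var]) / span if span else 0.5
    return params

# Hypothetical car record and per-variable ranges:
car = {"mpg": 21.0, "hp": 110, "wt": 2.62, "qsec": 16.46}
lo = {"mpg": 10.4, "hp": 52, "wt": 1.51, "qsec": 14.5}
hi = {"mpg": 33.9, "hp": 335, "wt": 5.42, "qsec": 22.9}
face = chernoff_params(car, lo, hi)  # e.g. face_width encodes mpg
```

The perceptual problem Jessica mentions is visible even here: nothing in the mapping reflects that viewers can’t read eye width and brow angle independently.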
I’ve never sent anything to an alt track because I’ve always had mixed thoughts on separating work that is deemed boundary pushing and controversial into its own track. It might be safer for some work where one expects knee-jerk negative responses from traditionalists, but shouldn’t we tolerate some risk taking in the mainstream computer science lit? Part of our job is to explore what’s possible with technology. When I write something about visualization that I suspect might be a little controversial (usually because it’s critiquing some assumption or practice in mainstream or “core” vis, whatever that is), my goal is always to publish in the main conference track or journal, even if some of the ideas seem extreme. As one example from earlier in my career, probabilistic animation as the default way to convey uncertainty was too much for some people, so it took a few tries to get it accepted at first, but sending it to an alt track to begin with would have felt like giving up. Why marginalize one’s own argument? But then again, I’ve never done “research” purely for the humor or stunt of it where I really didn’t want it to be taken seriously.
All this also makes me wonder if any other fields have room for deliberately wacky work. What does alt statistics look like?
What does alt statistics look like? I wonder that myself.
For a long time I’ve wanted to use Stan for something unrelated to statistics and inference. Like maybe write a game in Stan? Or an art project? But I never came up with an idea that would be “alt” enough.
Maybe readers here have some ideas.
Stan is already “alt” statistics, in the sense that Bayes and model-building are outside the mainstream of statistics.
More seriously, I think of work like that of Danielle Navarro as being “alt” in the same sense as Correll’s work, in that she is (in my view, rightfully) challenging the increasingly pervasive idea that all scientific questions can be resolved using quantitative measures of model fit like Bayes factors or CV. She also does art in R.
Yep, Danielle came to mind after I saw Mikhail’s question too.
It is perhaps the nature of “alt” that it works its way into the mainstream. I think of a lot of conceptual statistical research as ways of bringing ideas inside the tent. For example, back in the 1950s-1970s people talked about “empirical Bayes,” which was a sort of outside-the-box way to get more out of Bayesian inference by estimating the prior from data. This was seen as a kind of cheat, but it worked. Then Tiao and others figured out hierarchical Bayes, which is a theoretically-valid way of doing this, and then this new theoretical perspective opened up a lot of great practical methods. Similarly, in the time of Tukey, “exploratory data analysis” was outside of statistics, it was a sort of renegade alt-statistics, but in recent decades we’ve been able to fold EDA into formal statistics by thinking of it as predictive model checking. Machine learning ideas were a sort of alt-statistics but now they’re just statistics. And one of our major goals in our workflow research is to do the same with workflow, to bring it inside the tent. See section 1.3 of our article for fuller discussion of this point.
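The empirical-Bayes idea described above, estimating the prior from the data rather than fixing it, can be sketched in a few lines for the classic normal-means problem. This is my toy illustration (method-of-moments prior estimate, a common known standard error, made-up numbers loosely in the eight-schools spirit), not the hierarchical-Bayes machinery the comment goes on to describe:

```python
# Toy empirical Bayes for normal means: y_j ~ N(theta_j, s2) with
# theta_j ~ N(mu, tau2). Instead of fixing the prior, estimate mu and
# tau2 from the data (method of moments), then shrink each y_j toward mu.
from statistics import mean, pvariance

def empirical_bayes_shrink(y, s2):
    """Return posterior means for the theta_j under the estimated prior."""
    mu_hat = mean(y)
    # Marginally Var(y_j) = tau2 + s2, so estimate tau2 by subtraction,
    # floored at zero (the "cheat": the prior is fit to the data).
    tau2_hat = max(pvariance(y) - s2, 0.0)
    w = tau2_hat / (tau2_hat + s2)   # shrinkage weight on the raw estimate
    return [mu_hat + w * (yj - mu_hat) for yj in y]

y = [28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0]  # illustrative effects
post = empirical_bayes_shrink(y, s2=11.0 ** 2)     # noisy data: heavy pooling
```

With a large common standard error the estimated between-group variance hits the zero floor and everything pools to the grand mean; with a small one the estimates stay close to the raw data. Hierarchical Bayes replaces the plug-in estimate of (mu, tau2) with a full posterior over them.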
Illicit prior elicitation isn’t a cheat, it’s the only sensible way forward. Empirical Bayes is alive and well.
“Illicit prior elicitation” sounds like cheating though.
Illicit conveys prohibited but the authorities, but not necessarily immoral.
oops, prohibited BY the authorities…
I think that Jessica’s definition here is exactly the opposite: “alt” is something cool, but not really useful. So if it made its way to mainstream, it was not “alt” enough.
I don’t want the lack of comments to be interpreted as a lack of interest in this. I love this, if only because of the idea of representing data using Bob Ross-style paintings. On that note, those data displays could be greatly improved: those don’t actually look like Bob Ross paintings at all, they’re far too stylized. Or are they? Perhaps the quantity of data can be displayed as yet another parameter: more data gets you a more detailed painting.
I like that last idea! The smaller the sample, the more your visualization looks like the blobby start of a Bob Ross painting.
What about Keane paintings? The most important variable in the dataset will be coded as the size of the eyes.
Lucio Fontana would be an easy one, where you can vary the number of slashes and their orientation and length. It’s so abstract it might actually work, though I think size and orientation are not super perceptually separable.
I did have some failed attempts trying to be artistic – plotting likelihood functions of different parameters for different models.
For instance this was an attempt to display “incorrect” likelihoods based on assuming the observations were observed to infinite decimal places versus only to a finite number of digits – https://statmodeling.stat.columbia.edu/wp-content/uploads/2010/12/TheTyranyof13.pdf
And this was an instance of displaying the effect of including or not including an interaction on the variance parameter for sex (given that an interaction on the location parameter was fit) on the individual likelihood functions for the variance parameters – see figure 3 here https://statmodeling.stat.columbia.edu/wp-content/uploads/2011/05/figure13.pdf
This led to a series of poorly received posts that ended with this one https://statmodeling.stat.columbia.edu/wp-content/uploads/2011/05/plot13.pdf which I then spent a few years trying to get published.
What I learned was that many were not that familiar with likelihood functions, or with working with them directly rather than immediately extracting a maximum and the curvature there (MLE and SE), or immediately obtaining a sample from the posterior if one has a prior. But using creativity in an area that is poorly understood is definitely a bad idea.
This is topical because someone read figure13.pdf the other day and suggested it should be published, so I’m thinking about how it needs to be rewritten with better motivation.
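A minimal sketch of the kind of individual likelihood curves described above (my illustration, not the code behind the linked figures): for a normal mean with known sigma, compute each observation’s likelihood on a grid of candidate means and rescale every curve to peak at 1, then overlay. It also includes the contrast from the first linked note between the “exact” density likelihood and the interval likelihood appropriate for data recorded to a finite number of digits; the data values and half-width are made up.

```python
# Per-observation likelihood curves for a normal mean (sigma known),
# each rescaled to peak at 1, plus the interval likelihood for rounded
# data. Illustration only; values and rounding width are hypothetical.
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def density_lik(y, grid, sigma=1.0):
    # Likelihood treating y as observed to infinite precision.
    curve = [math.exp(-0.5 * ((y - m) / sigma) ** 2) for m in grid]
    peak = max(curve)
    return [c / peak for c in curve]

def interval_lik(y, grid, sigma=1.0, half_width=0.05):
    # A value recorded as y really means y is in (y - h, y + h),
    # so the likelihood is a probability, not a density.
    curve = [phi((y + half_width - m) / sigma) - phi((y - half_width - m) / sigma)
             for m in grid]
    peak = max(curve)
    return [c / peak for c in curve]

grid = [i / 50 for i in range(-200, 201)]        # candidate means, -4 to 4
data = [-0.3, 0.8, 1.4]                          # illustrative observations
curves = [density_lik(y, grid) for y in data]    # one curve per observation
```

Overlaying the normalized curves (one line per observation) gives the “hairy spider” look the next comment jokes about; with wider rounding the interval curves flatten relative to the density curves.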
You could call them deconstructed rainbows or alternatively, hairy spider graphs.
> But using creativity in an area that is poorly understood is definitely a bad idea.
Somehow I can relate to this professionally.
If you combine Chernoff faces, marginalized likelihoods, and Andrew’s kitty-cat obsession, you could have the single cutest Litter-Box-and-Whisker plot ever.
Jrc:
That does sound adorable! But, just to be clear: sharing the occasional cute photo is hardly an obsession!
Cheerfully withdrawn!
Perhaps “penchant”
Medical research has the BMJ Christmas edition: “The soul of the Christmas issue is originality. We don’t want to publish anything that resembles anything we’ve published before. While we welcome light-hearted fare and satire, we do not publish spoofs, hoaxes, or fabricated studies.” – https://www.bmj.com/about-bmj/resources-authors/article-types/christmas-issue
It has been the subject of some criticism – https://www.smithsonianmag.com/science-nature/once-year-scientific-journals-try-be-funny-not-everyone-gets-joke-180961512/