Learn Stan, get a job doing Bayesian modeling. How cool is that??

Tom Vladeck writes:

About a year ago, you posted our want-ad on your blog. Because of that, we hired two really great contractors (at least one of them came to us through the post, and maybe both did, but my memory of where one of the candidates came from is a bit hazy). Both are on the Stan development team: Adam Haber and Rok Cesnovar.

We’re back in the market, this time for a full-time hire whose responsibilities will be less to build the model than to run it — e.g., setting the priors for each customer, working with the draws, and generating forecasts.

That’s great! Here’s the ad:

We’re hiring a data scientist with experience in Bayesian inference to help build Recast, a company building a next-gen media mix model. This person will work directly with clients to understand their marketing activity, customize our core model for each client, and develop forecasts and scenario analyses.

The market is huge, and for such a small company we’re getting substantial inbound interest. We’ve also got a few people on the Stan development team helping us build the model. We’re out of the heavy R&D stage and are now actively growing the company. Early paying customers include Away, Harry’s, Mockingbird, and others. Roughly $500 million of annual media spend is already modeled on our platform.

This is an opportunity to help build a big business and work at the cutting edge of causal inference, statistical computing, and marketing data science.

Here’s the JD. You can apply by emailing me directly at [email protected]. Let me know why this is interesting to you and drop in any relevant GitHub / LinkedIn / other links.
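To give a flavor of the day-to-day work described above (setting client-specific priors, working with the posterior draws, generating forecasts), here is a minimal, hypothetical sketch using CmdStanPy. The Stan program, data files, and variable names are illustrative placeholders, not Recast’s actual code:

```python
# Minimal, hypothetical sketch of the "set priors, work with the draws,
# generate forecasts" workflow using CmdStanPy. The Stan program, data
# files, and variable names below are placeholders, not Recast's code.
import numpy as np
from cmdstanpy import CmdStanModel

model = CmdStanModel(stan_file="media_mix.stan")   # hypothetical Stan program

data = {
    "T": 104,                      # weeks of history
    "H": 13,                       # forecast horizon, in weeks
    "spend": np.loadtxt("spend.csv", delimiter=",").tolist(),
    "sales": np.loadtxt("sales.csv", delimiter=",").tolist(),
    # Client-specific priors, set per engagement rather than hard-coded.
    "prior_roi_mean": 1.5,
    "prior_roi_sd": 0.75,
}

fit = model.sample(data=data, chains=4, iter_sampling=1000, seed=1)

# Posterior draws of a hypothetical generated-quantities forecast,
# shape (num_draws, H). Summarize into a forecast band per week.
forecast = fit.stan_variable("sales_forecast")
lo, med, hi = np.percentile(forecast, [5, 50, 95], axis=0)
for week, (l, m, h) in enumerate(zip(lo, med, hi), start=1):
    print(f"week {week:2d}: {m:,.0f}  (90% interval {l:,.0f} to {h:,.0f})")
```

Scenario analyses would follow the same pattern: change the future spend inputs, regenerate the forecast draws, and compare the resulting distributions.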

It’s great to see our work being useful to people out there in the world. That’s what free software is all about.

3 thoughts on “Learn Stan, get a job doing Bayesian modeling. How cool is that??”

  1. Too bad media mix modelling is a scam designed to part advertisers from their money. It’s just linear regression with some transformations for carryover effects and diminishing returns—an observational regression like that will give biased estimates of causal effects, and Bayesian estimation doesn’t help with that. They mention using lift tests to validate the MMM, but they are also suspiciously negative about them, mentioning “costly lift-tests” and warning that running “lift tests [during the holiday season] could put your plan at risk.”

    Writing about online experiments, Lewis and Rao say:
    “The estimation challenge facing the advertiser is to detect this $0.35 difference in sales between the treatment and control groups amid the noise of a $75 standard deviation in sales. […] Perhaps surprisingly, even a very successful campaign has a minuscule R^2 of 0.0000054. […] The tiny R^2 for the treatment variable not only reveals the unfavorable power of RCTs but has serious implications for observational studies, such as regression with controls […] An omitted variable, misspecified functional form, or slight amount of intertemporal correlation between ad exposure (web browsing) and shopping generating R^2 on the order of 0.0001 is a full order of magnitude larger than the true treatment effect—meaning a very small amount of endogeneity would severely bias estimates of advertising effectiveness.”

    https://academic.oup.com/qje/article-abstract/130/4/1941/1914592
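
    A quick back-of-the-envelope check of the figures quoted above, assuming a 50/50 randomized split, a true lift of $0.35 per customer, and a $75 standard deviation in sales (my own arithmetic, not the paper’s code):

```python
# Back-of-the-envelope check of the Lewis & Rao figures quoted above.
# Assumes a 50/50 randomized split, a $0.35 true lift per customer, and
# a $75 standard deviation in sales; my own arithmetic, not the paper's.
delta = 0.35   # treatment effect on sales, dollars per customer
sigma = 75.0   # standard deviation of sales, dollars per customer

# R^2 attributable to the treatment dummy: variance explained over total.
# With a 50/50 split the dummy has variance 0.25, so R^2 ~= delta^2 * 0.25 / sigma^2.
r_squared = (delta ** 2 * 0.25) / sigma ** 2
print(f"treatment R^2 ~= {r_squared:.7f}")   # ~= 0.0000054, matching the quote

# Customers needed per arm for a two-sample z-test with 80% power at alpha = 0.05:
# n ~= 2 * (z_{alpha/2} + z_beta)^2 * sigma^2 / delta^2
z_alpha, z_beta = 1.96, 0.84
n_per_arm = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
print(f"customers per arm ~= {n_per_arm:,.0f}")   # ~= 720,000
```

    Even a fully randomized test needs hundreds of thousands of customers per arm to detect an effect of this size, which is the power problem the quote describes.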

  2. Tom, cofounder of Recast here. We agree that historically MMMs have been snake oil that goes largely unvalidated. We believe that we’ve made significant improvements over what those traditional vendors offered by doing a few things:

    – Applying some of the latest research in causal inference so that our target estimand is the interventional distribution, not the observational one.

    – Utilizing daily time-series data, which gives us more observations with which to estimate the parameters.

    – Using a fully generative model. We still capture time-shift (carryover) and saturation effects, but instead of simply applying transformations ex ante we estimate these effects within the model. We also use Gaussian process priors to allow certain parameters to change over time.

    – Since our estimates are meant to be causal, we encourage our clients to validate the model through randomized experiments and geographic holdout tests.

    In general, we think skepticism towards this type of product is entirely warranted. However, we believe we’ve made some real innovations that make our product much better than anything that has come before, and also better than the alternative of not doing any measurement at all.
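
    For readers curious what that structure looks like concretely, here is a toy, illustrative simulation of the kind of model described in the bullets above (one media channel, daily data): geometric adstock for the time-shift effect, a Hill-type curve for saturation, and a slowly drifting coefficient standing in for a Gaussian-process prior on effectiveness. This is a generic sketch, not Recast’s model:

```python
# Toy forward simulation of an MMM-style generative structure: geometric
# adstock (time shift), Hill saturation (diminishing returns), and a
# time-varying coefficient. Illustrative only; not Recast's model.
import numpy as np

rng = np.random.default_rng(0)
T = 365                                             # one year of daily data
spend = rng.gamma(shape=2.0, scale=500.0, size=T)   # daily spend, one channel

def adstock(x, decay):
    """Geometric carryover: today's effective spend includes a decayed
    fraction of earlier spend."""
    out = np.empty_like(x)
    carry = 0.0
    for t in range(len(x)):
        carry = x[t] + decay * carry
        out[t] = carry
    return out

def hill(x, half_sat, slope):
    """Saturating response: diminishing returns as spend grows."""
    return x ** slope / (x ** slope + half_sat ** slope)

# A smooth random walk stands in for a Gaussian-process prior that lets
# channel effectiveness drift over time.
beta = 300.0 + np.cumsum(rng.normal(0.0, 2.0, size=T))

baseline = 5000.0
effect = beta * hill(adstock(spend, decay=0.6), half_sat=2000.0, slope=1.0)
sales = baseline + effect + rng.normal(0.0, 250.0, size=T)   # observed outcome
```

    In a fully generative treatment the decay, saturation, and smoothness parameters would themselves get priors and be estimated jointly with everything else rather than fixed in advance, which is the distinction drawn above between applying transformations ex ante and modeling these effects.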
