JuliaCon 2015 (24–27 June, Boston-ish)

JuliaCon is coming to Cambridge, MA, the geek capital of the East Coast: 24–27 June. Here’s the conference site with the program.

I (Bob) will be giving a 10 minute “lightning talk” on Stan.jl, the Julia interface to Stan (built by Rob J. Goedman — I’m just pinch hitting because Rob couldn’t make it).

The uptake of Julia has been nothing short of spectacular. I’m really looking forward to learning more about it.

Trivia tidbit: Julia and Stan go way back; they were both developed under the same U.S. Department of Energy grant for high-performance computing (DE-SC0002099).

6 Comments

  1. Rahul says:

    I’m curious, are there people on here who use Julia in their work? Was curious to hear about what motivated them to switch.

    • Phillip says:

      I’ve started tinkering with MixedModels.jl as the successor to R’s lme4. Partly because Julia exposes more “low-level” operations and integrates tightly with LLVM, it can be substantially faster for some operations. In the case of mixed models, it’s apparently much easier to implement the algorithms as in-place operations in Julia than in R, which leads to big performance improvements — memory allocation also has a time cost, beyond the usual bit about running out of RAM. (Models that take hours of compute time and gigabytes of RAM in lme4 take tens of seconds and less than a gigabyte of RAM in MixedModels.jl.) Part of Julia’s philosophy is to make sure that you can do more things directly in Julia (e.g. direct access to BLAS, etc.) instead of depending on C/C++ extensions like in Python (NumPy) and R (RcppEigen and the packages based upon it).

      The downside is that the target audience is “technical computing” and not just “statistical computing”, so you lose some of R’s convenience for typical statistical manipulations, and the general ecosystem is not very mature (number of packages, package capabilities, etc.). Thus far, I’ve had to do more scalar operations (or rewrite vector operations as comprehensions), which seems to allow for better optimization and makes in-place algorithms easier, but it makes converting an array of type X to type Y slightly more tedious. (If there is a more efficient way to do this that I’ve been missing out on, somebody please feel free to correct me!)
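      [Editor’s note: the in-place pattern Phillip describes is language-agnostic, so here is a minimal sketch of it in plain Python — a hypothetical axpy-style update, not actual MixedModels.jl code. In Julia the same idea shows up as bang-convention functions (e.g. `mul!`) that write into a preallocated buffer instead of returning a new array each call.]

      ```python
      def axpy_allocating(a, x, y):
          # Out-of-place: builds and returns a brand-new list on every call,
          # so a tight loop pays an allocation cost each iteration.
          return [a * xi + yi for xi, yi in zip(x, y)]

      def axpy_inplace(a, x, y, out):
          # In-place: overwrites a caller-supplied buffer, so repeated calls
          # allocate no new storage per call.
          for i in range(len(x)):
              out[i] = a * x[i] + y[i]
          return out

      x = [1.0, 2.0, 3.0]
      y = [4.0, 5.0, 6.0]
      buf = [0.0] * 3
      assert axpy_allocating(2.0, x, y) == [6.0, 9.0, 12.0]
      assert axpy_inplace(2.0, x, y, buf) == [6.0, 9.0, 12.0]
      assert buf == [6.0, 9.0, 12.0]  # the caller's buffer itself was updated
      ```

      The payoff comes when such an update runs thousands of times inside an optimizer’s inner loop: reusing one buffer avoids both the allocations and the garbage-collection pressure they create.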

      • Rahul says:

        Very interesting. Thanks for articulating your experiences.

        Somehow, before you wrote this, I never thought of Julia as competing with R. I was thinking of it as getting people to switch from Python, in the sense that both Julia and Python seem general-purpose whereas R is distinctly “statistical”.

        Your point that the “general ecosystem is not very mature” is a very valid one, and that’s what sets such a high bar for any new language. It has to be not only faster and easier, but so much better at whatever it does that it justifies the effort of switching and the sacrifice of living in a less dynamic and less instrumented ecosystem. I’d be very impressed if Julia actually manages to achieve this.

        The Julia benchmarks seem impressively faster than plain Python, but I am wondering how it performs against “optimized” Python — i.e., compiled or JIT-accelerated with Numba, etc.

    • hjk says:

      I know a lot of people tempted to switch but who are waiting for more libraries etc. The basic motivation (whether Julia lives up to it or not) is the speed of C/C++ with the ease of Python. I’ve started to tinker a bit for these very reasons, tho NumPy and other libraries make it almost possible to use Python for most things. Basic stats is still better in R tho Python isn’t bad. I’ve heard Julia interfaces with C/C++ even better than Python does when you need to, but haven’t looked in detail. Another competitor is probably Scala. I wish we could all just use Lisp and be done with it.

  2. Bill Harris says:

    Well, there used to be XLISP-STAT.
