Well, this blog post is only a few weeks late. Fortunately my excitement hasn’t waned: I’m psyched to write about a project I’ve effectively been working on since May of this year.

I’m working with John, Kaleigh, and David on a new testing framework for deep reinforcement learning. We presented a version of the work at NeurIPS this year, and are submitting it to a software engineering conference soon. It’s called Toybox, it’s written in Rust, and there’s more cool information about it on my Projects page.

Context: My below-the-fold backstory

Alternative header: Yes, yes, the past two years have been a crapshoot for everyone, I’m not special.

2017 was a rough year for me. Hell, the past three years have all been rough in some capacity. However, in November 2016, it was like a switch flipped and all coins were suddenly weighted. At some point I may write more about specifics, but let’s just say that for a while starting that November, it felt like there was no choice I could make that wouldn’t lead to a dreadful outcome. I’m someone who values agency in my life and (apparently rather foolishly) believes that everyone is a rational actor. It didn’t help that the rest of the world felt like it had turned upside-down, too.

As a consequence of some professional turmoil, my research progress slowed. Well…kind of. I certainly kept working at my usual pace, but my topic was a slog and the political situation around it was very stressful to deal with. I started running much more seriously and found it was pretty much the only way I could manage the near-daily dose of what I now view as comically batshit shoes dropping.

I enjoyed TAing CS240 again in Spring 2017, and it was a great relief to find fulfillment in something other than my research topic. Given various resource constraints (time, money, compute, sanity), I didn’t have the bandwidth to do much research that semester anyway. That summer, I did some project assistant work related to education and started picking up the research pace again. I then TAed the graduate Logic course in Fall 2017 and had a bike accident during the 2017 Bike Fest in September. This was not great timing, given that running remained my prime source of stress relief. Nonetheless, I was able to submit a conference paper (it was rejected – this was the second time). Throughout 2017, I also worked quite a lot on my dissertation proposal and, yes, Virginia, I do believe this was a painful waste of my time. Live and learn.

In Spring 2018, I co-taught 240, a required course for the undergraduate major. I have a lot to say about that experience, which I will save for another post, but the important part for this post is that prepping courses is super time-consuming and teaching well is at least 70% social work. I think about 10% of my class was in some kind of crisis at any given time; for a class of 180 students, that alone is as many people as an entire class at a liberal arts college. I have many thoughts regarding this, but for now – definitely no research happening!

Anyway, that brings me to May of this year…

Hello, XAI!

In Fall 2017, I began sitting in the Knowledge Discovery Lab, where a bunch of awesome people are working on explainable AI. As a lab, KDL specializes in causal inference, empirical methods, and relational learning. Folks in the lab have had a growing interest in collaboration possibilities with systems folks. One of their most recent projects has been to use systems data for evaluating causal inference metrics (see Amanda Gentzel’s work). They are also pursuing some really interesting work on probabilistic programming languages – specifically, whether PPLs can express causal models that aren’t expressible as causal graphical models (see Sam Witty’s work).

As for where I fit into this, I’m bringing the systems brawn and knowledge of experimentation-system minutiae to the XAI project. We have a few more systems-y projects in the pipeline and I’m pretty pumped to write about them soon!