David Abel

NeurIPS 2017

12/11/2017

I just returned from a wonderful (albeit a bit chaotic) trip to the 31st NeurIPS down in Long Beach, CA. As many folks have mentioned, the conference has continued to grow to an unprecedented scale, this year reaching 8,000+ people.

I took some notes on the talks I attended. These include statistics about the conference and the publication/review process (see pages 8-9), as well as more detailed descriptions of the talks I went to.

Highlights

In addition to these notes, my five highlights from the conference are:

  1. John Platt's talk on energy, fusion, and the next 100 years of human civilization. Definitely worth a watch! He does a great job of framing the problem and building up to his research group's focus. I'm still optimistic that renewables can provide more than the proposed 40% of energy for the world.

  2. Kate Crawford's talk on bias in ML. As with Joelle Pineau's talk on reproducibility, Kate's talk comes at an excellent time to get folks in the community to think deeply about these issues as we build the next generation of tools and systems.

  3. Joelle Pineau's talk (no public link yet available) on reproducibility during the Deep RL Symposium.

  4. Ali Rahimi's test-of-time talk, which caused a lot of buzz around the conference (the "alchemy" piece begins at the 11-minute mark). My takeaway is that Ali is calling for more rigor in our experimentation, methods, and evaluation (and not necessarily just more theory). In light of the findings presented in Joelle's talk, I feel compelled to agree with Ali (at least for Deep RL, where experimental methods are still in the process of being defined). In particular, I think with RL we should open up to kinds of experimental analysis beyond just "which algorithm got the most reward on task X", and consider other diagnostic tools to understand our algorithms: when did it converge? How suboptimal is the converged policy? How well did it explore the space? How often did an algorithm find a really bad policy, and why? Where does it fail, and why? (See the sketch after this list for a toy take on such diagnostics.) Ali and Ben just posted a follow-up to their talk that's worth a read.

  5. The Hierarchical RL workshop! This event was a blast, in part because I love this area and find there to be so many open foundational questions, but also because the speaker lineup and poster collection were fantastic. When videos become available I'll post links to some of my highlights, including the panel (see the end of my linked notes above for a rough transcript of the panel).
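
To make the diagnostics in (4) concrete, here is a minimal sketch, in Python with NumPy, of how one might summarize learning curves beyond final reward. All function names, thresholds, and data here are hypothetical stand-ins of my own, not any particular benchmark's API:

```python
import numpy as np

def convergence_episode(returns, window=20, tol=0.05):
    """First smoothed episode index after which the moving-average return
    stays within a relative tolerance of its final value (None if never)."""
    returns = np.asarray(returns, dtype=float)
    if len(returns) < window:
        return None
    smoothed = np.convolve(returns, np.ones(window) / window, mode="valid")
    band = tol * max(abs(smoothed[-1]), 1e-8)
    inside = np.abs(smoothed - smoothed[-1]) <= band
    for i in range(len(inside)):
        if inside[i:].all():
            return i
    return None

def suboptimality_gap(returns, optimal_return, window=20):
    """Gap between the optimal return and the mean return over the
    final `window` episodes (smaller is better)."""
    return optimal_return - float(np.mean(np.asarray(returns)[-window:]))

def failure_rate(all_runs, bad_threshold, window=20):
    """Fraction of independent runs whose final policy is 'really bad',
    i.e. whose mean return over the last `window` episodes falls below
    `bad_threshold`."""
    finals = [float(np.mean(np.asarray(r)[-window:])) for r in all_runs]
    return float(np.mean([f < bad_threshold for f in finals]))

if __name__ == "__main__":
    # Ten synthetic learning curves (stand-ins for real runs): returns
    # ramp toward 1.0 over 100 episodes, plus noise.
    rng = np.random.default_rng(0)
    episodes = np.arange(200)
    runs = [np.minimum(1.0, episodes / 100.0) + rng.normal(0, 0.05, 200)
            for _ in range(10)]
    print("converged at (smoothed) episode:", convergence_episode(runs[0]))
    print("suboptimality gap:", suboptimality_gap(runs[0], optimal_return=1.0))
    print("failure rate:", failure_rate(runs, bad_threshold=0.5))
```

Reporting numbers like these across many independent seeds would tell us far more about an algorithm than a single best-run reward curve.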

Misc. Thoughts

And a few other miscellaneous thoughts:

  • My impression is that AI safety research is starting to mix with the broader machine learning community, which I think is a really good thing. Ideological mixing of this sort will lead to more discussion and collaboration, which benefits everyone. Researchers from all walks of the ML community have cared about some of the same problems the AI safety literature targets for a while, but perhaps had different motivations, methods, and even language for describing these problems. So, bridging this gap can only lead to improved critiques, analyses, and better overall science (even if the assumptions underlying some of the research are different).

  • Loads of companies were at NeurIPS. It's great to see so much mixing between industry and research.

  • In Kate Crawford's talk, she mentioned that the Trump administration asked the ML community to make a system to help with border control (screening folks at the border based on inferred potential for criminal activity). She called on us in the ML community to make sure we're thinking deeply about what systems we make and how they affect the world. I wholeheartedly second this call.

Cheers,
-Dave