NeurIPS 2020

Neha Pawar
Dec 9, 2020

NeurIPS has begun, and the AI community is getting better at conducting online conferences. Although virtual meetups can’t replace the feeling of meeting in person, Gather.Town does make them entertaining. The excitement of discussing “how jet-lagged one is” after arriving on a different continent is replaced by “what time of day is it there?”. In short, the app simulates meetups by giving each attendee an avatar that they use to navigate a virtual environment.

Gather.Town, WiML Workshop 2020

NeurIPS 2020 hosts so many tutorials and presentations that one can easily be overwhelmed by the sheer amount of choice. Even so, the recorded videos make it easier to catch up on parallel events. Moreover, the videos are presented from a variety of locations, which gives me a feeling of belonging to a truly global community.

Some of the tutorials I attended were:

  • “Deep Conversational AI” by Pascale Fung and team. It gave a holistic view of where we stand and where we are heading with conversational AI.
  • “Where Neuroscience meets AI” by Jane Wang and team from DeepMind was a talk I had been looking forward to since I first read the schedule. The presenters succeeded in pointing out the concepts common to both fields while making me curious about the Conference on Cognitive Computational Neuroscience.

This year there is a heavy focus on bias, diversity and inclusivity. In the race to build bigger models and achieve higher scores (probably on biased datasets), it is time to reflect as a community on our priorities and on what is really needed to solve real-world problems. How many times have we as data scientists said, “The problem is the data; if my model is xyz (insert adjective), it’s because it learns from data that is xyz”? It’s easier to blame the data than to adjust the model to deal with such data. This was gracefully addressed in the keynote by Charles Isbell. He discussed the paper that shook the research community earlier this year: PULSE, built on top of a face-generation system called StyleGAN. The idea behind PULSE is to generate high-resolution pictures from low-resolution ones. What was shocking was that when low-resolution images of people of color were upsampled, they were converted into images of white people that looked very different from the original pictures.
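To make the idea concrete, here is a minimal sketch of the kind of latent-space search PULSE performs. Everything here is a stand-in: the “generator” is a random linear map rather than StyleGAN, and the real method adds priors and a spherical latent constraint, but the core loop is the same in spirit: find a latent vector whose generated image, once downscaled, matches the observed low-resolution input.

```python
import numpy as np

# Toy "generator": maps a latent vector to a high-res 1-D "image".
# In PULSE the generator is StyleGAN; this fixed random linear map is a
# stand-in so the search loop is runnable end to end.
rng = np.random.default_rng(0)
LATENT_DIM, HIGH_RES, LOW_RES = 8, 32, 8
G = rng.standard_normal((HIGH_RES, LATENT_DIM))

def generate(z):
    return G @ z

def downscale(img, factor=HIGH_RES // LOW_RES):
    # Average-pool blocks of pixels to mimic the down-sampling operator.
    return img.reshape(LOW_RES, factor).mean(axis=1)

def pulse_style_search(low_res, steps=1000, lr=0.02):
    """Gradient-descent search for a latent z whose generated image,
    once downscaled, matches the observed low-res image."""
    z = np.zeros(LATENT_DIM)
    # Because both maps are linear here, downscale(generate(z)) == A @ z.
    D = np.kron(np.eye(LOW_RES),
                np.full((1, HIGH_RES // LOW_RES), LOW_RES / HIGH_RES))
    A = D @ G
    for _ in range(steps):
        residual = A @ z - low_res
        z -= lr * (A.T @ residual)  # gradient of 0.5 * ||A z - y||^2
    return z

# Make a ground-truth image from a known latent; observe only its
# low-res version, then search for a latent that explains it.
z_true = rng.standard_normal(LATENT_DIM)
y = downscale(generate(z_true))
z_hat = pulse_style_search(y)

res0 = np.linalg.norm(y)                               # residual at z = 0
res1 = np.linalg.norm(downscale(generate(z_hat)) - y)  # residual after search
```

The key point, and the root of the bias problem, is that many different latents can downsample to the same low-resolution observation. Nothing in the objective pulls the search toward the *true* high-resolution face; the generator’s learned prior decides which face you get, which is exactly how imbalances in the training data surface in the output.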

This is just one of the instances where bias in machine learning (ML) models affects society. Others arise when ML models are used in criminal justice, health care and credit-accessibility systems. As Charles Isbell aptly said, if we are to be a profession, we need to think seriously about the consequences our models lead to.

Image from Keynote: You Can’t Escape Hyperparameters and Latent Variables: Machine Learning as a Software Engineering Enterprise by Charles Isbell, NeurIPS 2020

The takeaway from this talk is that the path to ethical AI is difficult, but we can do more to stay on it. We can always try some of the techniques Charles highlighted in the slide below:

Image from Keynote: You Can’t Escape Hyperparameters and Latent Variables: Machine Learning as a Software Engineering Enterprise by Charles Isbell, NeurIPS 2020

It also didn’t go unnoticed that all the keynotes so far have had sign language interpreters. Kudos to the team for organizing this.

Along the same lines, NeurIPS has hosted a number of affinity workshops this year. The idea behind these workshops is to give minority groups a louder voice and more visibility: a dedicated platform to share their research. They were meeting places for people going through similar circumstances, because the world has, consciously or unconsciously, put them in the same bucket. Sadly, these buckets still exist, and as long as they do, so will the need for such affinity workshops. The affinity workshops were: Black in AI, Women in ML, Muslims in ML, Indigenous in AI, Queer in AI, Latinx in AI and New in ML.

I was personally impressed by Anima Anandkumar’s discussion at one of the affinity groups. She shared her opinions on the situation of Timnit Gebru, a leading researcher who was recently fired from Google. Anima also answered questions about toxicity in the work environment, which is disproportionately faced by women. These conversations, though uncomfortable, are important to have; without them the glass ceiling will continue to exist.

These are just a few of the myriad highlights from NeurIPS 2020 so far. The conference is still in progress, and over the next few days I am looking forward to the rest of the keynotes and research presentations. In the meantime, I am also reading one of the award-winning papers: “Language Models are Few-Shot Learners”.
