PyCon Israel 2021

Opening the black box – an interpretable neural network architecture
05-03, 14:30–14:55 (Asia/Jerusalem), PyData Track 1

Neural networks don’t have to be black boxes. If you use creative designs and match the architecture to your specific needs, you can create a network as interpretable as linear regression, but without its linearity constraints.


Many researchers use fully connected neural networks as a simple go-to model, without trying to match the architecture to the problem at hand. However, thanks to high-level open-source libraries such as PyTorch, anyone can construct a neural network architecture that fits the requirements of a specific dataset. By creating a logical architecture that models the data-generating process, we achieve two goals:
1. Better accuracy on both the training and test sets – since the model generalizes better.
2. Interpretability – we can assign coefficients to different parts of the model, much as in a linear regression model, while keeping far greater flexibility in the model itself.
Interpretability is important because it helps us understand the limitations and failure modes of our model, so that we can engineer a better model, or collect more features, to improve in those areas.
We will examine a few examples of the limitations of simple fully connected neural networks, as well as of other ML algorithms, and see how to overcome them using architectural concepts anyone can implement in a few minutes with PyTorch – one such concept is sketched below.
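To make this concrete, here is a minimal sketch of one such architectural concept: an additive-style network in which each input feature passes through its own small subnetwork, and a final linear layer combines the results, so its weights play the role of per-feature coefficients, much like those of a linear regression. This is an illustrative assumption, not necessarily the architecture presented in the talk; the class name InterpretableAdditiveNet and all hyperparameters are hypothetical.

import torch
import torch.nn as nn

class InterpretableAdditiveNet(nn.Module):
    # Hypothetical sketch: each feature gets its own small subnetwork,
    # and a final linear layer combines the outputs, so its weights act
    # as interpretable per-feature coefficients.
    def __init__(self, n_features, hidden=16):
        super().__init__()
        # One independent nonlinear transform per input feature.
        self.feature_nets = nn.ModuleList(
            [nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
             for _ in range(n_features)]
        )
        # The "coefficients": one weight per transformed feature.
        self.combine = nn.Linear(n_features, 1)

    def forward(self, x):
        # x has shape (batch, n_features); feed each column to its subnetwork.
        parts = [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)]
        return self.combine(torch.cat(parts, dim=1))

model = InterpretableAdditiveNet(n_features=3)
out = model(torch.randn(8, 3))
print(model.combine.weight)  # inspect the learned per-feature coefficients

Because each subnetwork sees only one feature, the final layer's weights summarize each feature's contribution, while the subnetworks remain free to learn nonlinear effects that a plain linear regression cannot.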


Session language

English

Target audience

Data Scientists

A senior data scientist with an interest in Bayesian methods, novel NN architectures, and run-time optimization tricks in Python, specializing in time series forecasting, particularly supply chain forecasting.
