2021-05-02, 11:30–11:55, PyData Track 2
In online advertising, we run many online tests to determine which approach boosts engagement the most. This talk surveys different ways of running online tests through the lens of a new feature we developed that is based on continuous testing.
Testing different UI components, algorithms, and optimization approaches in an effort to boost engagement is becoming more and more prominent in online applications. In this talk we will introduce a feature that is based on continuous online testing. We will then go over online testing in general and some methods we can use given certain constraints of the domain: for instance, having limited time to decide which test group to serve, so that the test itself does not affect the results, or facing deadlines by which testing must conclude. In tests like these, we have to balance exploration and exploitation to maximize the test's payout while remaining confident in our conclusions. With that in mind, we will explore different methods of running online tests, namely split tests, epsilon-greedy multi-armed bandits, and Thompson sampling, and go over their pros, cons, and applications. After a short demonstration written in Python, we will conclude the talk with the reasoning behind the methods we chose.
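To give a flavor of the methods mentioned above, here is a minimal sketch of epsilon-greedy selection and Thompson sampling for a two-arm test. The arm payout rates, step count, and function names are illustrative assumptions, not the speakers' actual implementation:

```python
import random


def epsilon_greedy(counts, rewards, epsilon=0.1):
    """Explore a random arm with probability epsilon; otherwise
    exploit the arm with the highest observed mean reward."""
    if random.random() < epsilon:
        return random.randrange(len(counts))
    means = [r / c if c else 0.0 for r, c in zip(rewards, counts)]
    return max(range(len(means)), key=means.__getitem__)


def thompson_sample(successes, failures):
    """Draw from each arm's Beta(successes+1, failures+1) posterior
    and pick the arm with the highest draw."""
    draws = [random.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)


def run_thompson(true_rates=(0.04, 0.06), steps=5000, seed=0):
    """Simulate a bandit test: hypothetical click-through rates per arm."""
    random.seed(seed)
    k = len(true_rates)
    succ, fail = [0] * k, [0] * k
    for _ in range(steps):
        arm = thompson_sample(succ, fail)
        if random.random() < true_rates[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
    return succ, fail
```

Unlike a fixed split test, both bandit strategies shift traffic toward the better-performing arm during the test itself, which is what lets them trade off exploration against payout.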