2022-06-28, 16:30–16:50, Main Hall
When long-running jobs run for too long, profilers help us understand where it is that our code spends its time. I present a technique for manually guided profiling, for cases where the automatic tools can't help.
Automatic profiling is great. You just run your code, as you normally do, and get a nice graph of where your CPU spends its time while you're waiting for a job to finish.
Except, sometimes the automatic tools can't help. Maybe something in your workload doesn't agree with them. Maybe they make your already long-running job run so much longer that it's impractical to profile it properly. Maybe you're only interested in profiling a small part of your code, and profiling the whole thing would create too much noise to be useful.
In this lecture I'll go over what I did when faced with such a problem. I'll detail the technique I used to determine where the time is spent. This is manually guided profiling, i.e. the programmer decides which areas to measure.
We'll also handle the more complicated cases. In particular:

* Short functions that get called a lot.
* Preventing double accounting when one measured function calls another measured function.
* How to present your data when you need to "sell" the need to fix a problem.
Last, but not least, I'll present an easy way for you to incorporate this technique into your own Python code.