hinkley 10 hours ago

Let me save you fifteen minutes, or the rest of your life: They aren’t.

Profilers alter the behavior of the system. Nothing has high enough clock resolution or fidelity to make them accurate. Intel tried to solve this by building profiling into the processor, and that only helped slightly.

Big swaths of my career, and the resulting wins, started with the question,

“What if the profiler is wrong?”

One of the first things I noticed is that no profilers make a big deal out of invocation count, which is a huge source of information for getting past the tall tent poles and hotspots into productive improvement. I have seen one exception to this, but that tool became defunct sometime around 2005 and nobody has copied it since.
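
To make that concrete, here’s a rough sketch of the bookkeeping I mean; the helper and the names are made up for illustration, not any particular profiler’s API:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Supplier;

    // Hypothetical wrapper: record invocation count next to elapsed time, so a
    // hotspot reads as "cheap but called ten million times" vs. "called twice
    // and slow". Made-up call site: CallStats.timed("parseRow", () -> parseRow(line));
    final class CallStats {
        private static final Map<String, long[]> STATS = new ConcurrentHashMap<>();

        static <T> T timed(String name, Supplier<T> body) {
            long start = System.nanoTime();
            try {
                return body.get();
            } finally {
                long elapsed = System.nanoTime() - start;
                STATS.compute(name, (k, v) -> {
                    if (v == null) v = new long[2];
                    v[0]++;            // invocation count
                    v[1] += elapsed;   // total nanoseconds
                    return v;
                });
            }
        }

        static void dump() {
            STATS.forEach((name, v) -> System.out.printf(
                "%-30s calls=%d total=%dms avg=%dns%n",
                name, v[0], v[1] / 1_000_000, v[1] / v[0]));
        }
    }

The count is what tells you whether to make the function faster or to stop calling it so often.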

Because of CPU caches, branch prediction, and amortized activities in languages or libraries (memory defrag, GC, flushing), many things get tagged by the profiler as expensive that are really being scapegoated: they get stuck paying someone else’s bill. They sit at the threshold where deferred work can no longer be put off and has to be paid for now.

So what you’re really looking for in the tools is everything that looks weird. And that often involves ignoring the fancy visualization and staring at the numbers. Which are wrong. “Reading the tea leaves” as they say.

  • SerCe 9 hours ago

    > Let me save you fifteen minutes, or the rest of your life: They aren’t.

    Knowing that no profiler is perfectly accurate isn't, by itself, a very useful piece of information. However, knowing which types of profilers are inaccurate, and in which cases, is very useful information, and that is exactly what this article is about. Well worth 15 minutes.

    > And that often involves ignoring the fancy visualization and staring at the numbers.

    Visualisations are incredibly important. I've debugged a large number [1] of performance issues and production incidents using async-profiler and Brendan Gregg's flame graphs [2]. Sure, the same data could be presented as numbers, but what I really care about most of the time when I take a CPU profile from a production instance is which part of the system was taking most of the CPU cycles.

    [1]: https://x.com/SerCeMan/status/1305783089608548354

    [2]: https://www.brendangregg.com/flamegraphs.html

    • hinkley an hour ago

      It’s not that they’re “not perfectly accurate”, it’s that you can find half an order of magnitude of performance after the profiler tells you everything is fine.

      That’s perfectly inaccurate.

      Most of the people who seem to know how to actually tune code are in gaming, and in engine design in particular. And the fact that they don’t spend all day every day telling us how silly the rest of us are is either a testament to politeness or a shame. I can’t decide which.

  • pjc50 7 hours ago

    > no profilers make a big deal out of invocation count

    This is where we get into sampling vs. tracing profilers. Tracing is even more disruptive to the runtime, but it gives you more useful information. It can point you at places where your O-notation is not what you expected it to be. This is a common cause of things that grind to a halt after great performance on small examples.
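
    A toy example of the kind of cliff I mean (mine, not from the article): the loop below looks linear, but ArrayList.remove(0) hides an O(n) shift, so a sampling profiler mostly shows time down in the array copy, while a trace that counts n calls to remove() is what actually explains the shape:

        import java.util.ArrayList;
        import java.util.List;

        // Looks like a linear drain, but ArrayList.remove(0) shifts every
        // remaining element, so the whole loop is O(n^2): fine on small
        // inputs, grinds to a halt on large ones.
        public class DrainDemo {
            static long drainFront(List<Integer> queue) {
                long sum = 0;
                while (!queue.isEmpty()) {
                    sum += queue.remove(0);   // O(n) per call for ArrayList
                }
                return sum;
            }

            public static void main(String[] args) {
                for (int n : new int[] { 10_000, 100_000 }) {   // 10x the input...
                    List<Integer> queue = new ArrayList<>();
                    for (int i = 0; i < n; i++) queue.add(i);
                    long start = System.nanoTime();
                    drainFront(queue);
                    System.out.printf("n=%,d took %d ms%n", n,  // ...roughly 100x the time
                        (System.nanoTime() - start) / 1_000_000);
                }
            }
        }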

    It gets even worse in distributed systems, which is partly why microservice-oriented things "scale" at the expense of a lot more hardware than you'd expect.

    It's definitely a specialized discipline, whole-system optimization, and I wish I got to do it more often.

    • hinkley an hour ago

      It’s also a big factor in “the fastest code is deleted code”. Someone sneaks in a loop or a second call to a function, disrupts the code flow, and you can see some really weird shit.

      One of my first big counterexamples to the standard “low-hanging fruit” philosophy was a profiler telling me a function was called 2x as often as the sequence diagram implied and was occupying 10% of the overall CPU time for the operation. So I removed half of the calls by inverting a couple of calls to expose the data, which should have been a 5% gain (0.1 / 2). Total time reduction: more than 20%. A couple of jobs later I managed a 10x improvement on one page transition from a similar change, with an intersection test between two lists. Part of that was algorithmic, but most of it was memory pressure.
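
      Roughly the shape of that intersection change (reconstructed for illustration, not the actual code):

          import java.util.ArrayList;
          import java.util.HashSet;
          import java.util.List;
          import java.util.Set;

          // The slow version re-walks 'right' once per element of 'left',
          // O(n*m) and a lot of repeated memory traffic; the fast version
          // touches each list once.
          final class Intersect {
              static List<String> slow(List<String> left, List<String> right) {
                  List<String> both = new ArrayList<>();
                  for (String item : left) {
                      if (right.contains(item)) {   // linear scan, every time
                          both.add(item);
                      }
                  }
                  return both;
              }

              static List<String> fast(List<String> left, List<String> right) {
                  Set<String> onRight = new HashSet<>(right);   // one pass over 'right'
                  List<String> both = new ArrayList<>();
                  for (String item : left) {
                      if (onRight.contains(item)) {             // O(1) expected lookup
                          both.add(item);
                      }
                  }
                  return both;
              }
          }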

      Remember: if microbenchmarks lie, so do profilers; a profiler is just an inversion of the benchmark idea. Two sides of the same coin.

  • geokon 7 hours ago

    I'm pretty sure performance counters count accurately. They're a bit finicky to use, but they don't alter CPU execution.

    Last time I had to deal with them, which was eons ago, higher-end CPUs like Xeons had more counters and more useful ones.

    I'm sure there are plenty of situations where they're insufficient, but it's absurd to paint the situation as always completely hopeless.

    • hinkley an hour ago

      They don’t account for cache eviction, memory pressure, or GC time being attributed to the consumers of data instead of the producers, or sometimes consistently to barely involved function calls. When a task has a certain shape and occupies more than half of the heap in the middle, it’s often the cleanup or teardown code that gets blamed for the cost. The only real performance improvement available is to go stop the mess at the source.

    • mrjay42 6 hours ago

      Last time I checked, Intel's MSRs (https://en.wikipedia.org/wiki/Model-specific_register) are what allow Intel PCM (https://github.com/intel/pcm) to work, and they are indeed used to profile, or "measure performance" (sorry if my vocabulary isn't the most accurate). And last time I checked the Intel PCM code, it still relied on hardcoded values for each CPU which are as close to reality as possible but are still an estimation.

      It doesn't mean that you get wrong measurements, it means there's a level of inaccuracy that has to be accepted.

      BTW, I am aware that Intel PCM is not a profiler and is more of a measurement tool; however, you CAN use it to 'profile' your program and see how it behaves in terms of compute and memory utilization (with deep analysis of cache behavior: cache hits, cache misses, etc.).

  • jstanley 5 hours ago

    If you think it's difficult to optimise performance with the numbers the profiler gives you, try doing it without them!

  • whatever1 7 hours ago

    Heisenberg principle but for programming

comex 14 hours ago

Another option is to use the "processor trace" functionality available in Intel and Apple CPUs. This can give you a history of every single instruction executed and timing information every few instructions, with very little observer effect. Probably way more accurate than the approach in the paper, though you need the right kind of CPU and you have to deal with a huge amount of data being collected.

  • hinkley 9 hours ago

    Those definitely make them less wrong, but still leave you hanging because most functions have side effects and those are exceedingly difficult to trace.

    The function that triggers GC is typically not the function that made the mess.

    The function that stalls on L2 cache misses often did not cause the miss.

    Just using the profiler can easily leave 2-3x performance on the table, and in some cases 4x. And in a world where autoscaling exists and computers run on batteries, that’s a substantial delta.

    And the fact is that, with few exceptions, nobody after 2008 really knows me as the optimization guy, because I don’t glorify it. I’m the super-clean-code guy. If you want fast gibberish, one of those guys can come in after me for another 2x, if you or I don’t shoo them away. Now you’re creeping into order-of-magnitude territory. And all after the profiler stopped feeding you easy answers.

  • scottgg 13 hours ago

    Do you have a source for “with very little observer effect”? I don’t know better; it just seems like a big assumption that the CPU can emit all this extra stuff without behaving differently.

    • PennRobotics 4 hours ago

      Trace data are sent through a large/fast port (PCIe or a 60-pin connector) and captured by fast dedicated hardware at something like 10 GB per second. The trace data are usually compressed and often only need to indicate whether a branch is taken or not taken (TNT packets on x86; Arm has ETM, but the trace path is similar enough), with a little bit of timing, exception/interrupt, and address overhead. The bottleneck is streaming and storing the trace data from a hardware debugger (since its internal buffer usually holds under half a second at max throughput), although on Intel processors you can further filter by application via CR3 matching. (Regarding the last five years of Apple: I'm not sure you'll find any info on Apple's debuggers and modifications to the Arm architecture. Ever.)

      If you encounter a slowdown using RTIT or IPT (the old and new names for hardware trace) it's usually a single-digit percentage. (The sources here are Intel's vague documentation claims plus anecdotes; Magic Trace, Hagen Paul Pfeifer, Andi Kleen, Prelude Research.)

      Decoding happens later and is significantly slower, and this is where the article's focus, JIT compilation, might be problematic using hardware trace (as instruction data might change/disappear, plus mapping machine code output to each Java instruction can be tricky).

    • achierius 13 hours ago

      It's not an assumption; it's based on claims made by CPU manufacturers. It's possible to get it down to within 1-2% overhead.

      Intuitively this works because the hardware can spend some extra area to stream the info off to the side of the datapath; it doesn't need to be in the critical path.

satisfice 13 hours ago

In the early nineties I was the test manager for the Borland Profiler. I didn’t supervise the tester of the profiler closely enough, and I only discovered when customers complained that the profiler’s results were off by a quarter of a second on every single measurement reported.

It turned out that the tester had not been looking closely at the output, other than to verify that it consisted of numbers. He didn’t have any ideas about how to test it, so he opted for mere aesthetics.

This is one of many incidents that convinced me to look closely and carefully at the work of testers I depend upon. Testing is so easy to fake.

  • dboreham 13 hours ago

    In my experience a very large proportion of all automated testing is like this if you go poking into what it does.

    • satisfice 11 hours ago

      My experience is the same.