Lies, Damn Lies, and PHP Benchmarks

I need to get something off my chest. First, I’d like you to examine the following code:

 
<?php

class Test
{
    public function output()
    {
        return 'Hello, world!';
    }
}

function test()
{
    return 'Hello, world!';
}

// Time 10 million plain function calls.
$start = microtime(true);
for ($i = 0; $i < 10000000; $i++) {
    test();
}
printf("%0.2f seconds for 10 million function calls.\n", microtime(true) - $start);

// Time 10 million instantiations plus method calls.
$start = microtime(true);
for ($i = 0; $i < 10000000; $i++) {
    $c = new Test();
    $c->output();
}
printf("%0.2f seconds for 10 million class calls.\n", microtime(true) - $start);

// Outputs about 4.9 seconds for the first and 7.9 for the second on my system.

How many times have you seen a benchmark like this passed off as proof of some facet of PHP’s behavior, particularly relating to classes versus functions? Next, ask yourself: What does this “benchmark” prove?

The answer: Absolutely nothing. Most developers are already aware (or should be) that instantiation of a class invokes a fair amount of overhead not present with functions in PHP. Yet time and again, whenever I encounter a discussion relating to PHP best practices or performance, I find comments that allude to any of the following:

  • PHP is not Java!
  • ${framework} uses classes, therefore ${framework} is slow!
  • PHP is a templating language. Replicating OOP concepts in PHP is overkill/stupid/slow.

Usually, but not always, one (or more) of the above opinions is presented with an allusion to benchmarks not all that dissimilar from the horrible, horrible sample of code at the beginning of this post. Why?

I don’t have a good answer, but I suspect it’s because PHP is one of the most oft-benchmarked scripting languages of our time, probably because the barrier to entry is so low. The worst part? No one knows how to benchmark. Granted, I’m guilty of the same charge, and I’ve published awful benchmarks in the past somehow “proving” (for some value of proof) that a specific feature is slower than another. So, as penance for my own wrongdoings, I want to make a point, and I want to make it as clear as I possibly can.

Your PHP benchmarks are wrong.

Don’t feel bad, though. Nearly all of them are wrong, including most hardware benchmarks. The reason is that benchmarks are, by their nature, synthetic, and a synthetic benchmark inevitably fails to capture real-world behavior. If you want to create a realistic benchmark, you’re going to have to put an awful lot of work into emulating a full stack, and the only way to do this correctly is to effectively write a small application. You can’t simply toss a few function calls into a for loop and call it a day. But the problem with this approach is that once you’ve implemented a demonstration application, you’re no longer benchmarking the language; you’re benchmarking your library or application. The only way you can really benchmark a language is to do so in a way that can be replicated across multiple platforms, with each implementation kept roughly equivalent, so that the benchmark captures the relative performance of each.

If you don’t buy that argument, I’d suggest you look at the TechEmpower framework benchmarks. They’ve put a lot of work into creating a fair, realistic collection of benchmarks for dozens and dozens of frameworks across the popular languages. If you’re not willing to invest a similar magnitude of effort into your own benchmarks, you’re not going to prove anything. You won’t prove that a specific language feature is too slow to use, you won’t prove that functions are better than classes, and all you’re going to accomplish is wasting your time and, worse, your readers’ time.

I’ll explain further, but before I delve into greater detail about why looping over a function ten million times proves absolutely nothing, I want to demonstrate by example. Nothing beats a good illustration to broaden one’s horizons, so I’ve fabricated a slightly less terrible benchmark than the code snippet that began this post.

A Slightly Less Terrible Benchmark

This benchmark is intended to illustrate two points: 1) that PHP language features don’t differ much in performance relative to each other, and 2) that simple benchmarks are effectively pointless and only prove what the author intends for them to prove. The benchmarks are broken down into six files:

  • out-class-inheritance.php – Uses class inheritance to output a short string.
  • out-class-interface-inheritance.php – Uses an interface to define class methods and inheritance to output a short string.
  • out-class-interface.php – Uses an interface to define class methods to output a short string.
  • out-class.php – Uses a simple class to output a short string.
  • out-class-static.php – Uses a static method of a class to output a short string.
  • out-function.php – Uses a single function to output a short string.

Each benchmark was run three times using ApacheBench (ab) on Arch Linux with a concurrency of 10 across 10,000 requests. In keeping with the tradition of poorly designed benchmarks, ab was run on the same system as the web server. It’s a terrible idea, I know, but we’re only measuring the approximate performance of individual language features, with the intent to prove that there isn’t a substantial difference.
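For reference, each run amounted to something along these lines (the localhost URL is illustrative; -n sets the request count and -c the concurrency):

ab -n 10000 -c 10 http://localhost/out-function.php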

Note that I won’t include the sources to this benchmark here. If you want to examine them, please review this Gist. They’re about as simple as the code fragment at the top of this post and only slightly less stupid.
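To give you a flavor without reproducing the Gist, here’s a hypothetical sketch along the lines of out-function.php and out-class.php. The names and bodies below are my own illustration, not the actual sources:

<?php
// out-function.php (sketch): emit a short string via a plain function.
function output()
{
    return 'Hello, world!';
}
echo output();

<?php
// out-class.php (sketch): emit the same string via a simple class.
class Output
{
    public function render()
    {
        return 'Hello, world!';
    }
}
$o = new Output();
echo $o->render();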

Performance

Unsurprisingly, for all intents and purposes, each test performed approximately the same. I noticed that tabbing between the shell and my browser introduced sharp drops in performance, so a handful of these had to be re-run from the start. I therefore suspect that the reductions in performance seen in the inheritance and interface tests were introduced by system variability. Yes, inheritance should be slightly slower (more overhead), but it isn’t substantially slower:

  • out-class-inheritance.php – 15407 req/s, 0.65s total
  • out-class-interface-inheritance.php – 14854 req/s, 0.674s total
  • out-class-interface.php – 14568 req/s, 0.688s total
  • out-class.php – 15367 req/s, 0.651s total
  • out-class-static.php – 15633 req/s, 0.64s total
  • out-function.php – 15146 req/s, 0.66s total

Are these results surprising? They shouldn’t be. This is a benchmark, and benchmarks lie. My benchmark lies because it isn’t illustrative of real-world use, and as such, these numbers mean absolutely nothing. Well, okay, my results do suggest that, all things being equal, OOP versus function-based design is a meaningless argument. But these results don’t matter, because no one in their right mind is going to have an application that consists of a single small class or function. Likewise, no one’s going to run the same function or instantiate the same class a few million times in their app and call it good. At least, I’d hope not.

What this does illustrate is that a benchmark can be manipulated to produce desired results, and its output can be interpreted with greater variability than most religious texts. While I’d like to believe this benchmark demonstrates that there’s little difference between PHP language features, it isn’t exhaustive enough to measure the impact of specific design decisions. Nor should it be, because PHP doesn’t exist in a vacuum, and naive designs fail to account for realistic implementations.

The problem, then, is that benchmarks running a small sample of code a million times fail to account for the broader design of a full application, which might be composed of a few dozen functions and classes, hundreds of queries, and so forth. No one class is going to be run a million times for every hit (or it shouldn’t be), and neither is any one function. More importantly, as my benchmarks suggest, the real bottleneck is going to be network I/O, followed by disk, and, with rare exceptions (and a distant third), the CPU. For dissenters, I would wager that you’re going to encounter network limitations well before any hypothetical “classes are slower than functions” condition is met. Moreover, because the PHP VM is rather slow, it won’t matter a great deal how your application is structured anyway. An application written in Go or C++ is going to perform several orders of magnitude faster than your PHP app. For that matter, an application written using Python, Gevent, and Gunicorn would also render your pet PHP constructs an embarrassingly distant last place. Save for frameworks written in C (Phalcon) or clean-room PHP implementations (like HHVM), the entire debate over classes versus functions is a rather silly one, isn’t it?

Benchmarks are Relative

I can’t emphasize enough how fascinating and well designed the TechEmpower benchmarks are for illustrating relative performance among different languages and frameworks. That said, I must emphasize again that nothing exists in a vacuum. PHP classes and functions aren’t solitary constructs. They run in unison with a web server, a database engine of some sort, and possibly dozens of other platforms, each with a certain amount of overhead or latency, and even network topology can impact service behavior in suboptimal ways.

What burns me the most is that there is a small population of noisy, magpie-like developers pushing back against efforts to move PHP development toward sensible standards (like the PSR series). While this population is diminishing, I still see comments from time to time on sites like Hacker News that effectively blame OOP design for PHP’s comparatively poor performance in the web framework benchmarks, particularly for frameworks like Symfony. Yet Facebook’s HHVM and, to a lesser extent, the Phalcon PHP framework have demonstrated that this need not be the case. Why is PHP plagued with this sort of nonsense? I have no idea.

Consider for a moment the benefits gleaned from PSR standards like PSR-0 autoloading, and the fairly recent explosion in the number of libraries that, thanks to Composer and Packagist, can be included in any project with little effort. Furthermore, clean design and implementation are building new momentum in the PHP community that will hopefully render spaghetti code like that found in projects clinging to archaic practices (vBulletin and IP.Board, I’m looking at you…) a forgotten memory. OOP isn’t a panacea, but when used correctly, it encourages code reuse and generally reduces the time spent on implementation. It’s disappointing that there are still musings in the community about whether or not OOP practices have a place in PHP. If you don’t think OOP belongs in PHP, I suspect you’ve missed that debate by about a decade and a half. Better late than never, right?
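To make the autoloading point concrete, here’s a minimal sketch of a PSR-0 style autoloader. Composer generates a far more robust equivalent for you, and the src/ base directory here is my own assumption:

<?php
// Simplified PSR-0 style autoloader: a class such as Acme\Blog\Post
// resolves to src/Acme/Blog/Post.php. Per PSR-0, underscores in the
// class name (only) also map to directory separators.
spl_autoload_register(function ($class) {
    $parts = explode('\\', $class);
    $className = str_replace('_', '/', array_pop($parts));
    $prefix = $parts ? implode('/', $parts) . '/' : '';
    $file = 'src/' . $prefix . $className . '.php';
    if (is_file($file)) {
        require $file;
    }
});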

Fortunately, as Composer and Packagist have illustrated, such opposition to modern PHP design is constrained to a tiny and ever-dwindling portion of the PHP community (I’d actually wager it’s mostly people who have merely dabbled in PHP and don’t currently write much code in it, if any). PHP has its warts, but modern PHP largely abolishes or mitigates the worst parts of the language. That isn’t to say PHP is a beautiful language (it’s not), but it doesn’t have to be beautiful to be useful. Perl taught us that lesson years ago.

I think I see an older gentleman in the back with a long gray beard laughing. See? He gets it.

***

Brief Comparison of Servers and Frameworks

This article has been deprecated, as the fundamental principles behind the data acquisition are flawed (at the time, I didn’t have multiple machines to test with; as a consequence, the benchmarks were run on the same machine that hosted the server under test). You should not cite this article as authoritative, but it may give you a foundation for further research.

I’ve been working with a couple of Python-based frameworks for web applications recently and have elected to try my hand at writing my own application server in Stackless Python, due to its simplified concurrency model (threading in Python appears to have performance-related issues compared to languages that support it natively, like Java). My efforts haven’t gotten much further than creating a basic skeleton so far, and it certainly seems as though I’ve learned much more about Python’s internals than I ever wanted to know! Performance has been a fairly significant concern up front, and I believe it’s necessary to examine how your own software performs early in the development process rather than waiting until the design has solidified. Once you’ve been bitten by a fundamental flaw or a serious bottleneck on which your entire design relies, it’s very difficult to alter the application’s behavior without significant refactoring. While I’ve heard it said that it’s sometimes better to write the application first and worry about performance later, I have never had such luck. In terms of developer time spent now and on future maintenance, it seems easier to write the application correctly the first time.

However, there is one benefit I didn’t predict from this exercise: This experience has also afforded me an opportunity to compare different frameworks and servers for performance and behavior under load. Read more…

***