My regular readers (and perhaps the irregular ones as well) know that I have been obsessed with baseline-responsiveness benchmarking of frameworks for years now. The idea has always been that, in order to know how far you can optimize your framework-based applications, you need to know the limits imposed by the framework itself. Only then can you have an idea of where to spend your limited resources on improvement. For example, if you need 200 dynamic requests/second, but the framework itself (with no application code in use) is capable of only 100, then you know that no amount of application or database optimization will help you — it’s time to start scaling, either horizontally or vertically.
To perform these benchmarks, I have only ever employed the ab tool provided by the Apache web server. It was easy to use, and relatively easy to parse its output to automate reporting. However, it turns out that ab over-reports the responsiveness of Apache when serving static HTML files, and when serving minimal PHP scripts such as <?php echo "hello world"; ?>. I discovered this only recently when attempting to find out why PHP appeared to be faster than HTML, and then only with the assistance of Paul Reinheimer, to whom I now owe a bottle of vodka for his trouble.
It turns out that the siege tool from JoeDog Software is more accurate in reporting static HTML and PHP responsiveness. This is confirmed by Paul Reinheimer as well, who reported the expected responsiveness on other systems.
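For reference, the two tools are invoked along these lines; the host and paths below are placeholders, and the flags shown are the common concurrency and duration options rather than necessarily the exact ones used in these runs.

```shell
# Benchmark a static HTML file and a minimal PHP script with ab:
# -c is concurrent clients, -n is total number of requests.
ab -c 10 -n 1000 http://example.com/baseline.html
ab -c 10 -n 1000 http://example.com/baseline.php

# The rough equivalent with siege: -b is "benchmark" mode (no delay
# between requests), -c is concurrency, -t is test duration.
siege -b -c 10 -t 30S http://example.com/baseline.html
siege -b -c 10 -t 30S http://example.com/baseline.php
```

Both tools report a requests/second figure at the end of the run; that is the number being compared throughout this article.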
The over-reporting from ab means that all my previous benchmark reports skewed the framework-vs-PHP comparison too low when measuring framework responsiveness against PHP’s maximum responsiveness. As such, I have re-run all the previously published benchmarks using siege instead of ab. Previous runs with ab are here …
… and below are the updated siege versions. As with previous attempts, these benchmarks are performed on an Amazon EC2 “small” instance. There is one difference to note: previous runs used Xcache for bytecode caching, but these use APC; I don’t suspect this change in caching engines has a significant effect, but I have not tested that assertion.
Note the baseline-html and baseline-php numbers. Using ab previously, these were reported as 2100-2400 requests/second and 1100-1400 requests/second, respectively. The siege tool reports a much lower number for both, but the dropoff between static HTML and dynamic PHP is much smaller: with ab it looked like about 40-50%, but with siege it looks like only about 15-18%. This behavior is much more in line with what we would expect from a memory-based PHP script.
Note also the separate framework requests/second numbers; they are very similar between ab and siege. This means that the framework responsiveness numbers are almost unchanged.
Because the nearly-identical framework numbers are compared to a much smaller baseline PHP number, the frameworks now appear to be doing much better in relation to PHP’s maximum responsiveness. For example, Solar-1.0.0alpha1 with ab appeared to run at about 11% of PHP’s max, but with siege it looks closer to 17%. All of the frameworks tested see this kind of comparative gain in their reporting.
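The arithmetic behind that shift is simple to sketch. The figures below are illustrative stand-ins, not the actual measurements: a framework number that stays roughly constant, divided by a baseline that shrinks, yields a larger percentage.

```python
# Illustrative arithmetic only: these requests/second figures are made
# up to show the effect, not actual measurements from the benchmark runs.
framework_rps = 130.0    # framework baseline: nearly identical under both tools

php_max_ab = 1200.0      # ab's inflated baseline-php number
php_max_siege = 765.0    # siege's lower baseline-php number

pct_of_php_ab = framework_rps / php_max_ab * 100       # about 11%
pct_of_php_siege = framework_rps / php_max_siege * 100 # about 17%

print(f"vs ab baseline:    {pct_of_php_ab:.0f}% of PHP max")
print(f"vs siege baseline: {pct_of_php_siege:.0f}% of PHP max")
```

The framework did not get faster; the yardstick it is measured against got shorter.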
However, when compared to each other, the framework rankings are the same as before: Solar has the highest baseline responsiveness, followed by Cake and Zend (their respective releases are very close to each other in responsiveness), and Symfony trails with the lowest baseline responsiveness.
In summary, using ab skewed the “percentage of PHP” comparisons because it over-reported PHP’s maximum responsiveness, but the framework requests/second numbers and the framework comparative rankings are unchanged from previous reporting. The Google project for the benchmarking system has been updated to use siege, so all future reporting will reflect its results, not those of ab.