Fabien Potencier released Symfony 2.0.0alpha1 last week, along with some benchmarks showing its performance. I am glad to see that Fabien used my benchmarking system and methodology, and that he is paying attention to the performance of his framework. I take this as an acceptance on his part that my methodology is legitimate and valid, and that it has value when comparing framework responsiveness.
However, in attempting to reproduce the published Symfony 2 benchmarking results, I found Fabien’s reporting to be inaccurate (or at least incomplete). Read on for a very, very long post detailing my attempt to replicate his results for the “hello world” basic framework overhead comparison, and my conclusions.
For the impatient, here are my conclusions in advance:
Fabien’s benchmark report, as shown at http://symfony-reloaded.org/fast, is inaccurate for the setup he describes. Lithium and Flow3 do not work in Fabien’s benchmark codebase at Github. Also, Symfony 2 is faster than Solar beta 3 by 5%, not 20%, on a “c1.xlarge” instance; to get a relative difference like the one Fabien describes, one has to use an “m1.large” instance. (It is entirely possible that the process Fabien used for benchmarking is incompletely described, and that the codebase is not fully updated, which would contribute to the disparity in results.)
We should use Siege 2.69, not 2.66, for more accurate benchmarking of baseline responsiveness. A static HTML page should always serve faster than a PHP script; if the tool reports HTML as slower than PHP, that is a sign something is wrong with the measurement.
Symfony 2 preloads its foundation and application classes, something no other framework does in the benchmarked code. When we treat Solar and Symfony 2 the same way, by preloading the foundation classes for each, we find that Solar is roughly 28% faster than Symfony 2.
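To make the comparison concrete, here is a minimal sketch of what preloading amounts to. The `Demo_*` class names, file paths, and concatenation step are my own illustration, not the actual bootstrap code of Symfony 2 or Solar; the idea is simply that many per-request includes (and their autoloader lookups) collapse into a single require of a combined cache file:

```php
<?php
// Hypothetical sketch of class preloading: several foundation class
// files are combined into a single cache file, so one require replaces
// many per-request includes. All names and paths here are invented.

$dir = sys_get_temp_dir() . '/preload_demo';
@mkdir($dir);

// Stand-ins for a framework's foundation classes, one file each.
file_put_contents("$dir/Request.php",  "<?php class Demo_Request {}\n");
file_put_contents("$dir/Response.php", "<?php class Demo_Response {}\n");

// Concatenate the class files, stripping each one's opening tag.
$combined = "<?php\n";
foreach (['Request.php', 'Response.php'] as $file) {
    $combined .= preg_replace('/^<\?php\s*/', '', file_get_contents("$dir/$file"));
}
file_put_contents("$dir/bootstrap.cache.php", $combined);

// The front controller now needs only one include to load everything.
require "$dir/bootstrap.cache.php";
var_dump(class_exists('Demo_Request', false)); // bool(true), no autoload needed
```

Whether or not this technique is a good idea, the benchmarks should apply it uniformly: either every framework pays the per-request include cost, or every framework gets the preloading advantage.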