By popular request, here is an update of my web framework benchmarks report. You can see previous result sets here:
- How Fast Is Your Framework?
- New Year’s Benchmarks
- Framework and Application Benchmarking slides at DCPHP 2007
Before you comment on this post, please have the courtesy to read at least the first two articles above. I am tired of refuting the same old invalid arguments ("hello world makes no sense", "if you cache, it goes faster", "the ORM systems are different", "speed isn't everything") from people who have no understanding of what these reports actually say.
Full disclosure: I am the lead developer on the Solar Framework for PHP 5, and I was an original contributor to the Zend Framework.
In the interest of putting to rest any accusations of bias or favoritism, the entire project codebase is available for public review and criticism here.
Flattered By Imitators
They say that imitation is the sincerest form of flattery. If so, I am sincerely flattered that the following articles and authors have adopted methodologies strikingly similar to the one I outlined in Nov 2006.
- SellersRank here and here.
- AVNet Labs here.
- Rasmus Lerdorf here. I am considering writing a separate post about this talk by Rasmus.
Methodology, Setup, and Source Code
The methodology in this report is nearly identical to that in previous reports. I won’t duplicate that narrative here; please see this page for the full methodology.
The only difference from previous reports regards the server setup. Although I’m still using an Amazon EC2 instance, I now provide the full setup instructions so you can replicate the server setup as well as the framework setup. See this page for server setup instructions.
Finally, you can see all the code used for the benchmarking here.
Results, Part 1
Update: FYI, opcode caching is turned on for these results.
The “avg” column is the number of requests/second the framework itself can deliver, with no application code, averaged over 5 one-minute runs with 10 concurrent users. That is, the framework dispatch cycle of “bootstrap, front controller, page controller, action method, view” will never go any faster than this.
The “rel” column is a percentage relative to PHP itself. Thus, if you see “0.1000”, that means the framework delivers 10% of the maximum requests/second that PHP itself can deliver.
We see that the Apache server can deliver 2300 static “hello world” requests/second. If you use PHP to echo "Hello World!", you get 1300 requests/second; that is the best PHP will get on this particular server setup.
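The arithmetic behind the two columns can be sketched as follows. This is only an illustration, not the actual benchmarking harness; the five per-run figures are hypothetical, chosen so the result matches the “0.1000” example above, while the 1300 req/s PHP baseline is the figure reported in the text.

```python
# Minimal sketch of how the "avg" and "rel" columns are derived.
# The per-run figures below are hypothetical; the 1300 req/s
# baseline is what plain `echo "Hello World!"` delivers on this server.

PHP_BASELINE = 1300.0  # requests/second for plain PHP echo

# Hypothetical requests/second from 5 one-minute runs at 10 concurrent users:
runs = [128.0, 131.0, 130.0, 129.0, 132.0]

avg = sum(runs) / len(runs)  # the "avg" column
rel = avg / PHP_BASELINE     # the "rel" column

print(f"avg: {avg:.2f} req/s")  # 130.00 req/s
print(f"rel: {rel:.4f}")        # 0.1000, i.e. 10% of PHP's maximum
```

So a framework averaging 130 requests/second on a server whose PHP ceiling is 1300 requests/second shows up as “0.1000” in the tables.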
Cake: After conferring with the Cake lead developers, it looks like the 1.2 release has some serious performance issues (more than 50% drop in responsiveness from the 1.1 release line). They are aware of this and are fixing the bugs for a 1.2.0-rc3 release.
Solar: The 1.0.0-alpha1 release is almost a year old, and while the unreleased Subversion code is in production use, I make it a point not to benchmark unreleased code. I might do a followup report just on Solar to show the decline in responsiveness as features have been added.
Symfony: Symfony remains the least-responsive of the tested frameworks (aside from the known-buggy Cake 1.2.0-rc1 release). No matter what they may say about Symfony being “fast at its core”, it does not appear to be true, at least not in comparison to the other frameworks here. But to their credit, they are not losing performance. (Could it be there’s not much left to lose?) In addition, I continue to find Symfony to be the hardest to set up for these reports — more than half my setup time was spent on Symfony alone.
Zend: The difference between the 1.0 release and the 1.5 release is quite dramatic: a 25% drop in responsiveness. And then another 10% drop between 1.5 and 1.6.
To sum up, my point from earlier posts that “every additional line of code will reduce responsiveness” is illustrated here. Each of the newer framework releases has added features, and has slowed down as a result. This is neither good nor bad in itself; it is an engineering and economic tradeoff.
Results, Part 2
I have stated before that I don’t think it’s fair to compare CodeIgniter and Prado to Cake, Solar, Symfony, and Zend, because they are (in my opinion) not of the same class. Prado especially is entirely unlike the others.
Even so, I keep getting requests to benchmark them, so here are the results; the testing conditions are identical to those from the main benchmarking.
CodeIgniter: Even the CI folks are not immune to the rule that “there is no such thing as a free feature”; between the 1.5.4 and 1.6.2 releases they lost about 18% of their requests/second. However, they are still running at 14.5% of PHP’s maximum, compared with the 11.68% of Solar-1.0.0-alpha1 (the most-responsive of the frameworks benchmarked above), so it’s clearly the fastest of the bunch.
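For concreteness, those “rel” figures can be converted back to approximate requests/second using the 1300 req/s PHP baseline from Part 1. The conversion below is my own quick sketch; the percentages and the baseline are the numbers stated in the text.

```python
# Convert "rel" percentages back to approximate requests/second,
# using the 1300 req/s PHP baseline reported in Part 1.

PHP_BASELINE = 1300.0

frameworks = {
    "CodeIgniter 1.6.2": 0.1450,   # 14.5% of PHP's maximum
    "Solar 1.0.0-alpha1": 0.1168,  # 11.68% of PHP's maximum
}

for name, rel in frameworks.items():
    print(f"{name}: ~{rel * PHP_BASELINE:.1f} req/s")
```

On this server, then, CodeIgniter’s 14.5% works out to roughly 188 requests/second versus roughly 152 for Solar.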
Prado: Prado works in a completely different way than the other frameworks listed here. Even though it is the slowest of the bunch, it’s simply not fair to compare it in terms of requests/second. If the Prado way of working is what you need, then the requests/second comparison will be of little value to you.
This Might Be The Last Time
Although I get regular requests to update these benchmark reports, doing so is very time-consuming and tedious. It took five days to prepare everything, add new framework releases, make the benchmark runs, do additional research, and then write this report. As such, I don’t know when (if ever) I will perform public comparative benchmarks again; my thanks to everyone who provided encouragement, appreciation, and positive feedback.