Long-time readers will recall that I am interested in performance benchmarks as a tool to help discover the outer limits of framework responsiveness. See the blog category, the most recent measurement report, and the GitHub repo you can use to replicate the results on your own.
Benchmarking is useful because it helps you decide whether it makes more sense to work on improving your application, your framework, or your server. If the maximum dynamic responsiveness of the framework in question is 200 req/sec, and you need 300 req/sec, then there is no code you can add to your application to make it go faster in a dynamic scenario; you will have to look at the framework or the server. (A lazy reader who considers himself an unappreciated genius interjects: “But you can cache it!” I said a dynamic scenario, not a static one.)
There’s a lot of emotion and drama associated with benchmarking. The subjects that come in “first place” too often point to it as SCIENCE PROVES WE ARE BEST, and the subjects that come in “last place” respond with variations of THIS IS STUPID AND PROVES NOTHING. (That is, until the last-placers run their own benchmarks where they come in first, and suddenly it’s a great marketing point. I’m looking at you, Symfony.) The point of benchmarking is to add information to your decision-making process so you can make better use of your limited resources of time and effort, and choose between competing tradeoffs in a more informed way. Speed alone, over and above anything and everything, is for suckers; it’s an important point, not the important point, when evaluating tradeoffs.
Leaving the elements of drama aside, benchmarking properly is difficult and time-consuming work. For my own limited benchmarks, it took three days or more of updating, testing, running, fixing, and re-running to perform them well, even with automated scripts to do the setup and analysis. And that was for the most basic bare-bones “hello world” that benchmarks only the dynamic dispatch cycle (bootstrap, front controller, page controller, action method, and view rendering).
Enter the guys at TechEmpower.
They’re doing a series of regular benchmarks that includes not just a double-handful of PHP frameworks, but 90 frameworks/languages/foundations across several languages. They do the basic “hello world” bench in addition to a few others, such as ORM/database speed. They appear to share an approach similar to the one I first published in Nov 2006 and improved with the help of Clay Loveless in Jan 2007. The TechEmpower motivations appear to be similar to mine as well. They have equaled, and then exceeded, the efforts that I’ve been able to put forth on my own. From what I can tell, it’s really good work.
With that, I am happy to say that I will be retiring my benchmarking project in favor of the TechEmpower one. Until further notice, I’ll be combining my efforts (such as they may be) with the TechEmpower folks.