New Year's Benchmarks

By Paul M. Jones | January 1, 2007

After the previous round of benchmarking, I received one very good criticism from Matthew Weier O’Phinney about it. He suggested that the hardware I was using, a PowerPC G4 Mac Mini, had an I/O system that was not representative of what a "regular" user would have as a web server. I have to agree with that.

As such, I have prepared a new set of benchmarks for Cake, Solar, Symfony, and Zend Framework using an almost identical methodology as last time.

Much of this article is copied directly from the previous one, as it serves exactly the same purpose, and little has changed in the methodology. Please consider the older article as superseded by this one.

Introduction

If you like, you can go directly to the summary.

This report outlines the maximum requests-per-second limits imposed by the following frameworks:

  • Cake 1.1.10, 1.1.11, and 1.1.12
  • Solar 0.25.0 and the Subversion trunk
  • Symfony 0.6.3 and 1.0.0beta2
  • Zend Framework 0.2.0 and 0.6.0

The benchmarks show the approximate relative responsiveness of each framework when its controller and view components are invoked.

Full disclosure: I am the lead developer of one of the frameworks under consideration (Solar), and I have contributed to the development of another (Zend Framework). I have attempted not to let this bias my judgment, and I outline my methodology in great detail to set readers at ease concerning this.

Why These Particular Frameworks?

In my opinion, the frameworks in this comparison are the most full-featured of those available for PHP. They make good use of design patterns, provide front and page controllers, allow for controller/view separation, and offer an existing set of plugins or added functionality like caching, logging, access control, authentication, form processing, and so on.

Some Code Igniter folks left comments previously asking that I include CI in the benchmark process. I don’t mean to sound like a jerk, but after reviewing the CI code base, it is my opinion (note the small "o") that it is not in the same class as Cake, Solar, Symfony, and Zend. I won’t go into further detail than that; I think reasonable and disinterested observers will tend to agree. Code Igniter is indeed very fast, but as I note elsewhere, there is more to a framework than speed alone.

Regarding frameworks in languages other than PHP, I don’t have the expertise to set them up for a fair comparison. However, you can compare the results noted in this report with this other report. Perhaps by extrapolation, one can estimate how Rails and Django compare to the other PHP frameworks listed here.

Methodology

For each of the frameworks tested, I attempted to use the most-minimal controller and action setup possible, effectively a "Hello World!" implementation using the stock framework components and no configuration files (or as few as the framework would let me get away with). This was the only way I could think of to make sure the tests were identical on each framework.

The minimalist approach measures the responsiveness of the framework components themselves, not an application. There’s no application code to execute; the controller actions in each framework do the least possible work to call a view. This shows us the maximum possible throughput; adding application code will only reduce responsiveness from this point.

In addition, this approach negates most "if you cache, it goes faster" arguments. The "view" in each case is just a literal string "Hello World!" with no layout. It also negates the "but the database connection speed varies" argument. Database access is not used at all in these benchmarks.

Server

Previously, I ran the benchmarks on a Mac Mini G4; as noted by others, it is not exactly a "real" production server. With that in mind, these new benchmark tests were run on an Amazon EC2 instance, which is a far more reasonable environment:

  • 1.7 GHz x86 processor
  • 1.75 GB of RAM
  • 160 GB of local disk
  • 250 Mb/s of network bandwidth

Clay Loveless set up the instance with …

  • Fedora Core 5
  • Apache 2.2 and mod_php
  • PHP 5.2.0 with Xcache (64M RAM)

Setup

Each framework benchmark uses the following scripts or equivalents …

  • Bootstrap file
  • Default configuration (or as close as possible)
  • Front-controller or dispatcher
  • Page-controller or action-controller
  • One action with no code, other than invoking a View processor
  • Static view with only literal text "Hello World!"

… so the benchmark application in each case is very small.

I did not modify any of the code in the frameworks, with one exception: I modified the bootstrap scripts so they would force the session_id() to be the same on every request. This is because the benchmarking process otherwise starts a new session with each request, which causes responsiveness as a whole to diminish dramatically.

Benchmarking Tools

I wrote a bash script to automate the benchmarking process. It uses the Apache benchmark "ab" tool for measuring requests-per-second, on localhost to negate network latency effects, with 10 concurrent requests for 60 seconds. The command looks like this:

ab -c 10 -t 60 http://localhost/[path]

The benchmark script restarts the web server, then runs the "ab" command 5 times. This is repeated for each version of each framework in the test series, so that each gets a "fresh" web server and Xcache environment to work with.
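The driver script itself was not published with the article, but a minimal sketch of that automation might look like the following (the URL path, log file name, and the `apachectl` invocation are illustrative assumptions, not the original code):

```shell
#!/bin/sh
# Pull the requests-per-second figure out of ab's summary report.
rps() {
    awk '/Requests per second/ { print $4 }'
}

# Benchmark one URL: restart Apache so each framework gets a fresh
# server and opcode-cache state, then run ab five times, logging the
# req/sec figure from each run.
bench() {
    url="$1"; log="$2"
    sudo apachectl restart      # assumed restart command
    sleep 2                     # let the server settle
    for run in 1 2 3 4 5; do
        ab -c 10 -t 60 "$url" | rps >> "$log"
    done
}

# usage (path is hypothetical):
# bench "http://localhost/cake-1.1.12/" cake-1.1.12.log
```

The five logged figures per framework correspond to the five columns in the result tables below.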

Benchmark Results

Baseline

First, we need to see what the maximum responsiveness of the benchmark environment is without any framework code.

framework      1        2        3        4        5        avg
baseline-html  2613.56  2284.98  2245.98  2234.94  2261.01  2328.09
baseline-php   1717.74  1321.49  1292.86  1511.40  1327.35  1434.17

The web server itself is capable of delivering a huge number of static pages: 2328 req/sec for a file with only "Hello World!" in it.

Invoking PHP (with Xcache turned on) to "echo ‘Hello World!’" slows things down a bit, but it’s still impressive: 1434 req/sec.

Cake

I ran benchmarks for Cake with the debug level set to zero. I edited the default bootstrap script to force the session ID with this command at the very top of the file…

<?php
session_id('abc');
?>

… but made no other changes to the distribution code.

The page-controller looks like this:

<?php
class HelloController extends AppController {
    // no layout wrapper, and load no models
    var $layout = null;
    var $autoLayout = false;
    var $uses = array();
    function index()
    {
        // no action code; the view renders on its own
    }
}
?>

The page-controller automatically uses a related "index.thtml" view.

Running the benchmarking script against each version of Cake gives these results:

framework    1       2       3       4       5       avg
cake-1.1.10   85.65   85.87   85.71   85.66   85.93   85.76
cake-1.1.11  113.78  114.13  113.59  113.35  113.73  113.72
cake-1.1.12  114.62  114.26  114.55  113.89  114.64  114.39

So the most-recent release of Cake in a fully-dynamic mode has a top-limit of about 114 requests per second in the benchmarking environment.

Solar

The Solar bootstrap file looks like this; note that we force the session ID for benchmarking purposes.

<?php
session_id('abc');

error_reporting(E_ALL|E_STRICT);
ini_set('display_errors', true);

$dir = dirname(__FILE__) . DIRECTORY_SEPARATOR;

$include_path = $dir . 'source';
ini_set('include_path', $include_path);

require_once 'Solar.php';

$config = $dir . 'Solar.config.php';
Solar::start($config);

$front = Solar::factory('Solar_Controller_Front');
$front->display();

Solar::stop();
?>

The Solar page-controller code looks like this:

<?php
Solar::loadClass('Solar_Controller_Page');
class Solar_App_HelloMini extends Solar_Controller_Page {
    public function actionIndex()
    {
    }
}
?>

The page-controller automatically uses a related "index.php" file as its view script, and uses no layout.

Running the benchmarking script against each version of Solar gives these results:

framework     1       2       3       4       5       avg
solar-0.25.0  170.80  169.22  170.29  170.75  170.22  170.26
solar-svn     167.83  166.97  167.56  164.56  162.30  165.84

Clearly the current Subversion copy of Solar needs a little work to bring it back up to the speed of the most recent release, but it can still handle 165 requests per second in the benchmarking environment.

Symfony

The Symfony action controller code looks like this:

<?php
class helloActions extends sfActions
{
  public function executeIndex()
  {
  }
}
?>

The action controller automatically uses a related "indexSuccess.php" file as its view script. However, there is a default "layout.php" file that wraps the view output and adds various HTML elements; I edited it to look like this instead:

<?php echo $sf_data->getRaw('sf_content') ?>

If there is some way to make Symfony not use this layout file, please let me know.

Finally, I added session_id('abc'); to the top of the web/index.php bootstrap script to force the session ID to be the same for each request.

Running the benchmarking script against each version of Symfony gives these results:

framework           1      2      3      4      5      avg
symfony-0.6.3       53.10  53.15  52.27  52.12  52.10  52.55
symfony-1.0.0beta2  67.42  66.92  66.65  67.06  67.83  67.18

It looks like the most-recent beta version of Symfony can respond to about 67 requests per second in the benchmarking environment.

Zend Framework

As before, the Zend Framework involves some extra work. As others have noted, Zend Framework requires more putting-together than the other projects listed here.

The Zend 0.2.0 bootstrap script looks like this …

<?php
session_id('abc');

error_reporting(E_ALL|E_STRICT);
ini_set('display_errors', true);
date_default_timezone_set('Europe/London');

$dir = dirname(__FILE__) . DIRECTORY_SEPARATOR;

set_include_path($dir . 'source/library');

require_once 'Zend.php';
require_once 'Zend/Controller/Front.php';

Zend_Controller_Front::run($dir . 'application/controllers');
?>

… and the Zend 0.6.0 bootstrap looks like this:

<?php
session_id('abc');

error_reporting(E_ALL|E_STRICT);
ini_set('display_errors', true);
date_default_timezone_set('Europe/London');

$dir = dirname(__FILE__) . DIRECTORY_SEPARATOR;

set_include_path($dir . 'source/library');

require_once 'Zend.php';
require_once 'Zend/Controller/Front.php';

$front = Zend_Controller_Front::getInstance();
$front->setControllerDirectory($dir . 'application/controllers');
$front->setBaseUrl('/zend-0.6.0/');

echo $front->dispatch();
?>

The differences are minor but critical, since the front-controller internals changed quite a bit between the two releases.

Similarly, the page-controller code is different for the two releases as well. Zend Framework does not initiate a session on its own, whereas the other frameworks do. I think having a session available is a common and regular requirement in a web environment. Thus, I added a session_start() call to the page-controller for Zend; this mimics the session-start logic in the other frameworks. Also, the Zend Framework does not automatically call a view, so the page-controller code below mimics that behavior as well.

The page-controller code for Zend 0.2.0 looks like this …

<?php
require_once 'Zend/Controller/Action.php';
require_once 'Zend/View.php';
class IndexController extends Zend_Controller_Action
{
    public function norouteAction()
    {
        return $this->indexAction();
    }

    public function indexAction()
    {
        // mimic the session-starting behavior of other application frameworks
        session_start();

        // now for the standard portion
        $view = new Zend_View();
        $view->setScriptPath(dirname(dirname(__FILE__)) . '/views');
        echo $view->render('index.php');
    }
}
?>

… and the page-controller code for Zend 0.6.0 looks like this:

<?php
require_once 'Zend/Controller/Action.php';
require_once 'Zend/View.php';
class IndexController extends Zend_Controller_Action
{
    public function indexAction()
    {
        // mimic the session-starting behavior of other application frameworks
        session_start();

        // now for the standard portion
        $view = new Zend_View();
        $view->setScriptPath(dirname(dirname(__FILE__)) . '/views');
        echo $view->render('index.php');
    }
}
?>

Running the benchmarking script against each version of Zend Framework gives these results:

framework   1       2       3       4       5       avg
zend-0.2.0  215.91  208.50  207.84  210.43  211.73  210.88
zend-0.6.0  133.60  134.18  132.10  129.53  130.16  131.91

In the benchmarking environment, the most-recent Zend Framework can handle about 131 requests per second.

But look at Zend 0.2.0 — 210 req/sec! Richard Thomas noted this in a comment on the last set of benchmarks. If that’s a sign of what a future release might be capable of, it’s quite stunning.

Summary

To compare the relative responsiveness limits of the frameworks, I assigned the slowest average responder a factor of 1.00, and calculated the relative factor of the others based on that.

framework           avg     rel
cake-1.1.10          85.76  1.63
cake-1.1.11         113.72  2.16
cake-1.1.12         114.39  2.18
solar-0.25.0        170.26  3.24
solar-svn           165.84  3.16
symfony-0.6.3        52.55  1.00
symfony-1.0.0beta2   67.18  1.28
zend-0.2.0          210.88  4.01
zend-0.6.0          131.91  2.51
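For the curious, the "rel" column is nothing more than each framework's average divided by the slowest average. Given the averages as two-column "framework avg" lines, it can be reproduced with a short awk pass like this (the helper name and input layout are my own, not part of the original benchmark scripts):

```shell
#!/bin/sh
# Read "framework avg" pairs on stdin; print "framework avg rel",
# where rel = avg divided by the slowest (smallest) avg seen.
relfactor() {
    awk '{ name[NR] = $1; avg[NR] = $2
           if (min == "" || $2 < min) min = $2 }
         END { for (i = 1; i <= NR; i++)
                   printf "%s %.2f %.2f\n", name[i], avg[i], avg[i] / min }'
}

# usage: relfactor < averages.txt
```

Feeding it the averages above reproduces the factors in the table (symfony-0.6.3 comes out at 1.00, zend-0.2.0 at 4.01, and so on).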

Thus, the Zend 0.2.0 framework by itself is about 4 times more responsive than Symfony 0.6.3. (My opinion is that this is likely due to the fact that the Zend code does less for developers out-of-the-box than Symfony does.)

For comparison of the currently-available options, we can highlight the most-recent releases of each framework, and place them in order for easy reading:

framework           avg     rel
solar-0.25.0        170.26  3.24
zend-0.6.0          131.91  2.51
cake-1.1.12         114.39  2.18
symfony-1.0.0beta2   67.18  1.28

As we can see, the framework with the greatest limits on its responsiveness is Symfony. Zend is about 15% faster than Cake. And as with the last benchmark series, Solar has the least limit on its responsiveness in a dynamic environment.

You can download the individual benchmark log files, including the report summary table, here.

Conclusion

It is important to reiterate some points about this report:

  • The benchmarks note the top limits of responsiveness of the framework components themselves. This functions as a guide to the maximum speed you will be able to squeeze from an application using a particular framework. An application cannot get faster than the underlying framework components. Even "good" application code added to the framework will reduce responsiveness, and "bad" application code will reduce it even further.

  • Each of these frameworks is capable of building and caching a static page that will respond at the maximum allowed by the web server itself (about 2300 requests/second, as noted in the baseline runs). In such cases, the responsiveness of the framework itself is of no consequence whatsoever. But remember: the moment you hit the controller logic of a framework, responsiveness will drop to the levels indicated above.

  • Dynamic responsiveness is only one of many critical considerations when choosing a framework. This benchmark cannot measure intangibles such as productivity, maintainability, quality, and so on.

Again, it is entirely possible that I have failed to incorporate important optimizations to one or more of the frameworks tested. If so, it is through ignorance and not maliciousness. If you are one of the lead developers on the frameworks benchmarked here and feel your project is unfairly represented, please contact me personally, and I’ll forward a tarball of the entire benchmark system (including the framework code as tested) so you can see where I may have gone wrong.

UPDATE (2007-01-05): I have created a repository of the benchmark project code for all to examine.

91 thoughts on “New Year's Benchmarks”

  1. Tomek

    Cool, now I would like to read a detailed article describing why Solar is that fast :)

  2. Lukas

    I see very little value in benchmarks like these. Paul did mention one of the key differentiating factors: “My opinion is that this is likely due to the fact that the Zend code does less for developers out-of-the-box than Symfony does.”

    The only really useful thing is detailed profiling that actually goes over what each of the frameworks does, how much each of these convenience features costs, and whether you can disable them if you don't need them.

    I remember back when people were throwing around the ADODB benchmarks, I looked at why MDB2 was slightly slower on some drivers. The reason always turned out to be an additional check here or there to ensure better portability (obviously a must-have for a portability layer). So in the end I discovered that all is good, even if ADODB did come out slightly on top in some benchmarks.

  3. pmjones Post author

    @Lukas — the value is in knowing how much faster you can make application code that depends on a particular framework. There is always a limit to responsiveness, whether it is the web server, PHP, the framework code, the application code, etc. If your application is based on Framework X, and it’s not running as fast as you’d like, it’s good to know the theoretical top limit, so you can decide if that top limit will be enough. That way you’re not trying to get more from the code than the framework itself allows. If it’s not enough, it’s time to choose a different component (server, language, framework, or something else) — or it’s time to start scaling in some fashion.

  4. Richard Thomas

    I'm not very familiar with the inner workings of the other frameworks; do they have their own session handlers?

    I did some basic session benchmarking, and just turning on sessions had very little effect on PHP performance: 1690-1740 req/sec without sessions, 1550-1643 req/sec with.

    I would be interested in how much "data" is being shoved into the session file by default; that could make a big difference in performance.

  5. Pingback: Paul M. Jones » Blog Archive » How Fast Is Your Framework?

  6. pmjones Post author

    @Richard Thomas — Of the four, Zend is the only one without a session handler or session class of some sort in its most-recent release. I am told that there is a Zend_Session in the incubator, but I have no information as to its status or quality.

  7. Jeff Moore

    Interesting.

    After having benchmarked under both linux/intel and Mac/PPC, were any of the results significantly different? For example, was the relative performance of any of the frameworks noticeably different on the PPC? In other words, did benchmarking under Mac/PPC actually skew the results of the first benchmarks?

  8. Derek Allard

    Paul, well laid out and well thought out, and I appreciate you stating your bias right up front (we should all be so forward). I can only imagine the time you must have sunk into this post for planning, executing, and reporting it. Thanks. I have nothing but respect for the developers of all these projects, and you in particular.

    I’ve used most of these frameworks, and agree that there is more to a framework than raw speed. I’m one of those Code Igniter guys you mention. While I have absolutely no interest in starting a debate or thread-jacking this post, I would love to hear more about your thoughts with respect to what separates these projects from Code Igniter. Perhaps another post, or a private email?

  9. NiKo

    For Symfony, it’s Propel (the ORM layer) which is very heavy. Try Doctrine (Symfony 1.0 will be ORM-independent and a Doctrine plugin will be available); performance should be better…

    And anyway, I always prefer buy more hardware to ensure scability than spend time to maintain a bloated over-optimized code. That said, Symfony is really a pleasure to code with :-)

  10. Pingback: PHPDeveloper.org

  11. pmjones Post author

    @Niko — The benchmarks do not use the database layer at all. To my knowledge, Symfony does not even load the Propel classes, and it certainly does not connect to the database. So changing the database layer will not change the performance noted here.

    You said, “And anyway, I always prefer buy more hardware to ensure scability than spend time to maintain a bloated over-optimized code.” Two things:

    1. You can buy less hardware if the software is more responsive. Granted, hardware costs are generally less expensive than developer time, but if your developers are working for free (as is often the case in the open-source world) then you might not have enough money to pay for extra servers, either. Either way, knowing the top limits of responsiveness is good information to have when making resource decisions.

    2. Given the above benchmarks, Symfony doesn’t seem to be optimized very much. Even if it’s optimized for a particular case, I don’t think it can get any faster (relative to the other frameworks) than what I’ve noted here, unless you change how Symfony itself works under-the-hood.

  12. NiCoS

    Hi,

    I’m not sure such a benchmark make sense. As the conclusion of such a benchmark would be to give up any frameworks and go on with procedural code.

    A simple “” would beat them all.

    Performance are a issue for framework but not the only one. I think a real usefull benchmark should also take into account parameters like time required to develop an app, required skills, documentation, etc.

    A framework can be very fast and provide quite nothing from scratch where as a more complete framework can be slower but provide you all you need and reduce time for developpement. Personnaly I think I would choose a framework more for what it provides me (compare to what I need) than how fast it is.

    Using symfony for building some entreprise application make sense whereas using another one does not or would cost more money to customers.

  13. pmjones Post author

    @NiCoS —

    You said, “I’m not sure such a benchmark make sense. As the conclusion of such a benchmark would be to give up any frameworks and go on with procedural code.”

    I disagree; or rather, I might agree *if speed was the only deciding factor.* Instead, speed is one of many factors.

    In addition, we must compare (as much as we can) “like with like”. The goal here is not to compare the speed of “hello world” implementations; it is to compare the speed of controller and view implementations in various frameworks.

    You said, “I think a real usefull benchmark should also take into account parameters like time required to develop an app, required skills, documentation, etc.” I agree that it would be *a* useful report, but it is not the *only* useful report. Speed is an important factor when making resource decisions. Also, it is extraordinarily difficult to control for variables in the kind of experiment you outline; I look forward to seeing your plan for how to perform and analyze such an experiment.

    Finally, you say: “Using symfony for building some entreprise application make sense whereas using another one does not or would cost more money to customers.” I’m sorry, but you have no proof of this assertion, nor any reasonable plan to quantify the costs in a way that controls for the variables involved. I’ll grant that you may *feel* more productive, but then, one’s feelings are not proof. It seems to me that the frameworks listed here have very similar qualities (with the possible exception of Zend), *and* are faster than Symfony at execution time.

  14. Travis Swicegood

    Two things:

    1. Have you considered creating a tarball of your experimental code? It would be nice to be able to reproduce this test in different environment. Maybe a Google Code repo with externals to all of these projects so creating patches would be easy. In any form, having this all in one place would make it easy to drop in additional frameworks to see how they perform by comparison.

    2. I think NiCoS is on to something. Granted, your baseline does provide interesting information about the absolute speed of a framework, but very few people are going to use it at that level. I think a simple app that loads a database table and displays 10 – 20 items in a grid/list type manner would be a worthwhile experiment. Even the fastest base framework can start sucking wind if the DB layer is bloated.

    Obviously, in order to take full advantage of each of the frameworks, you would need someone with experience in each framework to do them all justice, but you seem to have garnered responses from people familiar with all of these frameworks (minus Cake). I imagine some ground rules could be laid out and you could have a few working list displays within a week.

  15. Felix Geisendörfer

    NiCoS: I think you are missing the point. Choosing the right tool for the right task is a process where you have to consider many factors, and as far as choosing a good framework goes, speed is one of them. If all you develop are small sites that get less than 10-15k unique visitors a month, all of the above frameworks should be fast enough to handle them on a shared host easily, and you should really not look into performance optimization at all. If however you know (not hope) that the site you are building will get a lot more traffic, then benchmarks as provided by Paul can be *very* helpful to make a smart decision. Oh, and please don’t forget the developers of the listed frameworks; I know that they tend to pay close attention to benchmarks like this, and most likely the Symfony folks will look for bottlenecks in their code in the future ; ).

  16. pmjones Post author

    @Travis —

    > 1. Have you considered creating a tarball of your experimental code?

    I say at the very end, “If you are one of the lead developers on the frameworks benchmarked here and feel your project is unfairly represented, please contact me personally, and I’ll forward a tarball of the entire benchmark system (including the framework code as tested) so you can see where I may have gone wrong.”

    > In any form, having this all in one place would make it
    > easy to drop in additional frameworks to see how they perform by comparison.

    I agree; I already have it in one place, but it’s never easy to “drop in” a new framework into the benchmark scaffold — even when it’s a new version of an existing framework. You underestimate the amount of work involved in adding more people to the project; cf. Brooks’ Law.

    Having said that, I am considering a public read-only repository, but I don’t know if/when/where I will make it available.

    > 2. I think NiCoS is on to something. Granted, your baseline does provide
    > interesting information for the absolute speed of a framework, but very few
    > people are going to use it at that level.

    Of course they’re not, and I say as much. The point is to know the top limit of the framework logic, so you don’t think you can optimize your application any more than is allowed by the framework itself.

    > Obviously, in order to take full advantage of each of the frameworks, you
    > would need someone with experience with each of the framework to do all of
    > them justice, but you seem to have garnered responses from people familiar
    > with all of these frameworks (minus Cake). I imagine some ground rules could
    > be laid out and you could have a few working list displays within a week.

    Your imagination is overly optimistic about the time and effort involved, although I laud your desire to assign me more labor for your own curiosity.

    In addition, I have been in communication with Zend leads, Cake leads, and CI experts — just not via blog comments.

  17. pmjones Post author

    @Joe:

    > You can turn off layout in symfony by changing ‘has_layout’ in
    > myproject/apps/myapp/config/view.yml.

    Thanks for this, I’ll try it out. My guess is it will improve responsiveness, but not enough to change Symfony’s ranking: Symfony’s 67 req/sec is only 59% of Cake’s 114 req/sec, so it would need roughly a 70% improvement to match the next-fastest framework. I guess we’ll see.

  18. NiKo

    +1 for the source code of test apps.

    BTW, did you deactivate the web debug toolbar and dev-level logs in your Symfony test app? They consume a *huge* amount of processing time…

  19. Mariano Iglesias

    @NiKo: I think the whole idea of this benchmark as stated originally was to start a very basic Hello World application from scratch on each framework and test initial responsiveness. I bet there are several ways that each framework could be optimized to perform these tests better (such as your symfony suggestions) but that’s not the idea. It wouldn’t be fair to start “disabling” things on Symfony to make it run faster but leave the other frameworks as-is.

    It is nice to see how CakePHP is improving performance on each version. Going from 85.76 reqs/sec on CakePHP 1.1.10 to 114.39 reqs/sec on 1.1.12 is an incredible improvement.

    Thanks for putting up this report Paul.

  20. Abus

    Seen on all page footers of solarphp.com:

    “Copyright © 2005-2007, *Paul M. Jones* and other contributors.”

    Do I need to say more about the objectivity of these benchmarks, especially the Solar ones?

  21. pmjones Post author

    @Abus,

    I was careful to give full disclosure in the report’s Introduction …

    > Full disclosure: I am the lead developer of one of the frameworks
    > under consideration (Solar), and I have contributed to the
    > development of another (Zend Framework). I have attempted not
    > to let this bias my judgment, and I outline my methodology in great
    > detail to set readers at ease concerning this.

    Aside from this, do you see anything in the methodology or execution that would tend to favor one framework over another?

  22. Gregory

    Well, I’m using one of these frameworks at work (which means we care pretty much about development costs) and it’s not the fastest. I have developed many websites with it, from very small sites with a thousand page views per month to much bigger ones with millions of page views per month.

    In no case was the maximum speed of the framework an issue. The smallest websites don’t have enough visitors to justify much optimization; they’re very fast provided hundreds of visitors aren’t trying to access them at the same time (which never happens anyway). And I know from experience that they could be optimized in a few days.

    The point is, before thinking about optimisation I consider using the framework I’m most comfortable with. IMHO, the goal of these frameworks is to provide a lot of features that make life easier, so we can spend our time adding new features rather than rewriting everything from scratch because it would be faster. There will always be ways to optimise page load time later.

    Then maybe PHP is not the right choice if we want a framework with good speed. But after all, aren’t we using it because it’s what we’re most comfortable with for now?

  23. noel

    I really feel that these hello world benchmarks are not good real-world test cases. The framework that performs the best in a completely stripped-down implementation may perform the worst when you start really using its features. Symfony, for example, provides a huge amount of functionality, and it doesn’t look like you disabled a lot of internal processing such as logging and whatnot. By the way, to disable the layout for a page in symfony, use the view.yml file and set hasLayout: off, which is very clearly described in their documentation. I think you can also do it directly in the action with $this->setLayout(false) or something quite similar.

    One of my gripes with php vs other language benchmarks is that they always test the simplest situations in which perl, python, etc. don’t require any extra libs to be loaded. PHP autoloads a lot of functionality which is a definite drawback in a small test case. But when you fire up a full fledged web based application, you will very likely find that php is not so bad in comparison when you start loading up 10 or more lib packages with perl or python, especially DBIs.

    The same thing applies to a comparison of frameworks within PHP. To get a test that really means something, develop the same full app in each one, an app that really makes use of what they all have to offer. Sure, that’s asking a lot, but until someone goes to that trouble, I don’t think we are really that close to the truth here.

    I also applaud how forthcoming you are about all this; it’s very noble of you. But this feels like pseudo-science to me, in that people are going to make major decisions based on what little criteria they can find, and you present this data in a very concrete way when it’s still quite ambiguous. Regardless, it’s a start in the right direction.

    Reply
  24. Rob

    I am using one of these frameworks at work as well.

    One of the factors that restricted my choices was that we only run PHP4 on our servers. Of these four frameworks, CakePHP is the only one that will run on both PHP4 and PHP5 servers. They will undoubtedly have had to sacrifice some performance to achieve this (implementing singletons in PHP4 is definitely slower than in PHP5).

    Reply
  25. Luis

    Thank you for taking the time to do these benchmarks. I do think they’re useful if used correctly. They are honest and clear, and not designed to prove that A is faster than B.

    One question: why do you think there is such inconsistency between your previous tests and these? Can the different hardware alone explain, for example, that while Symfony 0.6.3 was 53% faster than Cake 1.1.10 in the old tests, in these new ones Cake 1.1.10 is 63% faster than Symfony 0.6.3? I would think the sessions problem had an impact on Cake in the previous test, but since Solar didn’t have problems, I can’t understand it.

    Reply
  26. pmjones Post author

    @Luis — That is an excellent question, and it’s been nagging at me.

    I’m afraid I do not have a good answer for why there is such a disparity between the “G4 Mac Mini desktop” relative results and the “Intel EC2 server” relative results, other than “they’re different platforms.”

    As you suggest, I think it can be chalked up to the combination of I/O system, memory handling, and processor differences, as well as the difference in operating systems. They are *vastly* different environments, and the EC2 instance is a far more powerful system for serving.

    The possibility for differences from platform variations is why I wanted to run the previous series along with the new series (i.e., the old versions plus the new versions) so that we could compare “like with like”. In addition, I have marked the old entry as “superseded”, and although I think the methodology is still valid, the relative numbers are more suspect.

    I cannot say for certain that the platform differences are the cause, but it seems reasonable to think so. It is possible that I did something different each time, but I can’t think of what it might have been. I am happy to hear more argument and discussion on this so I can discover any errors on my part.

    Lesson learned: benchmark on the system most resembling the target environment, not on what happens to be convenient.

    Reply
  27. Luis

    Yes, I guess the different platform (hardware plus OS) is the only explanation, since the programs tested and the method used were the same. It’s still quite amazing that two PHP programs that do something similar perform so differently on the two platforms, but I completely agree that this last test is the one that should be taken as the most realistic.

    Thanks for your thoughts.

    Reply
  28. Lukas

    Paul: I don’t think you are being honest here. If all you are interested in is the theoretical maximum, so that developers know when they can’t hope for more, then I wonder why you are comparing at all! In some other comments you then say that symfony does not seem optimized at all. This brings us back to my original comment: the question is how much each of these frameworks does for you out of the box, whether you can disable things you don’t want done, how much of the default stuff you use, etc.

    Again, as you realized yourself, more mature, full-featured frameworks tend to be slower, even though they have also had more time to optimize things. Whether that is just feature creep or actual productivity improvement is another story.

    Reply
  29. pmjones Post author

    @Lukas — accusations of “dishonesty” are beneath you, and beneath dignifying with a response.

    Reply
  30. Lukas

    Well, maybe you are not consciously being dishonest. But at least subconsciously, I think you are pushing an agenda. Anyway, I stated clearly how I came to this observation, and you are obviously free to disagree with both the observation and the conclusion. “Dishonest” was probably too harsh a word; I was not trying to imply that you are trying to misinform the public.

    Anyway, you have provided information, and several people (including you) have stated their views about the relevance of this data. Let’s leave it at that.

    Reply
  31. pmjones Post author

    @Lukas — So now I’m unaware of when I’m lying; that’s *much* better. Make poisonous accusations on your own blog, not in my comments.

    Reply
  32. nate

    @Lukas: Paul has been nothing short of completely forthcoming about the approach and methodology of these benchmarks, and has taken great pains to involve me and other framework developers in the process, to ensure that each framework receives equal treatment and is fairly represented.

    Paul has also been 100% up-front about the intent of the benchmarks, and the context in which they should be evaluated. He has also provided all the information and resources necessary for anyone, anywhere to reproduce them independently.

    As anyone who does these things knows, they take a lot of time to set up, get right, and measure. But, as always, no good deed goes unpunished. Hopefully the trolls won’t deter future readers who could benefit from this information.

    @Lukas and the rest of the trolls: go back to your caves. If you decide to come out again, do the rest of us a favor and try to pay attention.

    Reply
  33. Lukas

    I guess I am just always very wary of benchmarks. I think the word “dishonest” was not appropriate in a public forum like this, as it probably comes across much more harshly than I intended. Sometimes I forget how words come across when you cannot look into each other’s faces. English is also not my primary language, so sometimes I miss minor nuances.

    The point I was trying to make was only that, given the stated intent, it’s not necessary to talk about comparing at all. The actual results will certainly be useful to people using any of the benchmarked frameworks, in order to figure out how far away they are from their theoretical limits.

    So thanks for providing the data. I will be sure to lower my paranoia parser when looking at benchmarks, and I will also be sure to choose potentially insulting words more carefully in the future. Like I said, feel free to delete any words you find insulting. As you rightly point out, it’s your blog.

    Reply
  34. pmjones Post author

    @Lukas —

    I don’t delete comments simply for being insulting to me personally (although I do reserve the right to delete inflammatory or loathsome posts).

    It is my blog, but I also offer it as a place of public commentary. It is important that all related comments be retained as part of the history of the article.

    Reply
  35. Pingback: Paul M. Jones » Blog Archive » Benchmark Project Code Available

  36. Mariano Iglesias

    @Lukas: “I guess I am just always very wary of benchmarks”

    How often does this happen in the IT world… People who hate Windows go into Windows forums to rant about how bad Windows is… People who feel wary about benchmarks start a discussion on a benchmark-related post…

    I’m all for freedom of speech and all, but this is just wasting everyone else’s time. If you don’t believe in benchmarks, or if you believe they are biased, then just post a message on your own blog; don’t fill this blog with multiple comments that make people like me (who are interested in what people have to say about the results, not in how bad they feel benchmarks are) lose valuable time.

    I believe your time is as valuable as mine, so let’s save both.

    Reply
  37. Rodrigo Moraes

    Paul, don’t take this the wrong way, but I hope Solar development will never be concerned with performance in the first place. I know it is not, but I mean, I hope that these benchmarks don’t influence the development in a bad way. There are some areas in the framework that need improvements, and they will have a performance cost; I refer indirectly to the routing mechanism, which I hope will become more flexible in the future. That said, these benchmarks are an impressive and very interesting piece of work.

    Reply
  38. pmjones Post author

    @Rodrigo:

    > I hope that these benchmarks don’t influence the development in a bad way.

    Have no fear. :-) Speed is *a* good thing, but it is not the *only* good thing. If we can have convenience *and* speed, all the better.

    Reply
  39. Bob

    You said “if you cache, it goes faster,” and you don’t want to use any of the frameworks’ cache systems, to be sure you compare what is comparable.

    There is one peculiarity of symfony: the very first request on a website creates an optimized version of the configuration, to speed things up and avoid parsing the conf files on every request. So if you want to measure the true inner speed of symfony, you should not measure the time elapsed between the first request and the 100th, but the time elapsed between the second request and the 101st.

    Try it; you will see a big difference, since the first request is usually the slowest. I’m curious to see whether you come to the same conclusions, because my personal experience is that symfony is really fast for what it does. I can’t speak for the other frameworks, though, since symfony is perfectly fine for me.

    Reply
  40. momendo

    This test needs to move on to testing with a database connection: do one read, one write, and one delete using MySQL. That would really open up the testing. Symfony uses YAML, which I believe is slower.

    Reply
  41. pmjones Post author

    Hi Bob —

    If it were true that the time spent caching the configs affected the speed significantly, we would expect to see a dramatic difference between the first run and the following runs (two through five).

    No such difference appears; in fact, the first run is sometimes faster than subsequent runs.

    Therefore, I find it hard to believe that Symfony’s relative slowness is due to parsing the config file on the first of at least 20000 requests (67 req/sec, times 60 sec/min, times 5 one-minute runs).

    Reply
  42. Bob

    Paul,

    I mean that the first request with an empty cache (i.e., after deleting the contents of the cache/ folder or calling the symfony cc command) is always slower than the following ones in symfony. Once the cache has been generated, there is normally no significant difference between pages.

    So if your test is with something in the cache/ folder, then it means you already made the first request and the cache is already generated.

    But if you tell me that the first request with an empty cache is actually faster than the following ones, that’s just the weirdest thing I’ve ever heard about symfony. On every platform where I have tested it, the first request in the production environment is about 10 times slower than subsequent ones.

    If it’s actually the case, then we should investigate further…

    Reply
  43. pmjones Post author

    Bob —

    Maybe I’m misunderstanding your point; allow me to re-state.

    The first request of the first run behaves as you note; that is, it parses the config files and caches them. The remaining requests (at least 19,999 of them) use the cached values. Even if the first request is 10x slower, that one aberration will have very little effect on the average of 20,000 requests.
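    The amortization argument above is easy to check with a quick back-of-the-envelope calculation (a sketch; the 67 req/sec steady-state figure and the 20,000-request total are taken from the comments above, and the 10x first-request penalty is Bob's estimate):

```python
# Effect of a single 10x-slow request on the average over 20,000 requests.
# Assumed figures from the discussion: 67 req/sec steady state.
baseline = 1 / 67                          # seconds per request at 67 req/sec
total = 19_999 * baseline + 10 * baseline  # first request is 10x slower
effective_rate = 20_000 / total
print(f"{effective_rate:.2f} req/sec")     # ~66.97 req/sec, vs. the 67 baseline
```

    Even a 10x penalty on the first request costs well under one-tenth of one percent of overall throughput, which is why one slow config-parsing request cannot explain a 41-point gap.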

    So: are you saying that Symfony’s parsing of the config files on that first request is slow enough to make 20,000 requests 41 percentage points slower than Cake? If so, I suggest you download the test suite (it’s linked at the end of the post) and attempt to set things up yourself to see exactly how slow the parsing really is.

    Reply
  44. Bob

    Paul,

    I had already gotten your point. The average speed over 20,000 requests shouldn’t change much, even if the first request is much slower. When I wrote my first comment, I hadn’t counted the actual number of requests you made in your benchmark, and I thought that out of 100 requests the difference would be noticeable. I understood that when you first answered me.

    Then I thought there was something strange in your tests, because you wrote that subsequent requests were sometimes slower than the first one. If they aren’t, then there is no need to worry about it…

    Reply
  45. z_malloc

    I am curious about the accuracy of the test results, not because of configuration and versioning, but rather because they are being performed on a virtualized system. The very nature of virtualization dictates that system resources are always in a state of flux. I do not have intimate knowledge of how EC2 is configured, but I have my doubts that each client gets their own physical machine.

    If they don’t, there is no way you could reliably depend on the results being accurate, as resources would be in contention while performing the benchmarks.

    Reply
  46. Tom

    think your tests are off… i seem to get WAY faster benchmarks for cakephp. maybe your computer is too slow, 1.75gb ram and 1.7ghz processor is a few years behind. I would suspect all numbers in this case to be increased but you make everything sound like they are all slow…just from general usage of these frameworks, i’ve personally seen faster results. — course theoretically with better equipment everything should simply get faster respectively but who knows, i’m not sure if i’m convinced or if this report would convince me to investigate some of the other frameworks. thank you though for your opinions. it did get me to look into some things.

    Reply
  47. pmjones Post author

    @Tom:

    > think your tests are off… i seem to get WAY faster benchmarks for cakephp.

    The test is a comparative one; as you note later, absolute figures will be
    different on different hardware, but compared to each other, my bet is that
    the results on different hardware will still be similar. One never knows
    until one runs all the tests on the same target machine.

    The real question is, did you run the benches for *all* the frameworks? If
    not, you have missed part of the point of the article.

    > maybe your computer is too slow, 1.75gb ram and 1.7ghz processor is a
    > few years behind.

    It’s an Amazon EC2 instance; try telling them that. ;-)

    > thank you though for your opinions. it did get me to look into some things.

    I’m glad you found the article useful.

    For good or ill, the numeric results are not my “opinions” — although I’ll
    grant that the methodology might be changed to suit anyone’s opinion on how to
    test a framework.

    Reply
  48. Adam Cassel

    Thank you, Paul, for the work you’ve done, and will likely now be “required” to continue :-) setting up the benchmark.

    Let me state my bias up front: in the interest of communal growth, learning, and support, I try to be aware of, and not to project onto, another person’s “agenda” beyond what they state themselves, and to judge how their full disclosure, chosen methodology, and results stack up against their stated positions and opinions versus the data-driven results. I appreciate, and think it demonstrates character, that you are open to opinions and suggestions about methodology, as well as to discussion of practical, demonstrable, and repeatable experience, which (again, my personal bias) is where engineering conversations and collegial activity best flourish. This underscores the point you make in your original work, and patiently in several follow-on replies.

    I also want to note that, as a recent convert to Open Source in general, and to PHP/MySQL/Linux in particular (my background is on the big-iron, data-center-driven, giant corporate Windows IT side of the house), you have allowed for a nice and active space on benchmarking to grow here. It has informed my learning curve significantly.

    As some of you may know, back when .NET and Java/J2EE were battling each other face to face in a very public fashion, Sun released a reference application called the Pet Store blueprint: http://java.sun.com/blueprints/code/ and http://java.sun.com/developer/releases/petstore/ . Microsoft responded by implementing that blueprint app in .NET, terming it the Microsoft .NET Pet Shop: http://msdn2.microsoft.com/en-us/library/ms978492.aspx and http://www.hilbert.dk/thomas/pub/dev/j2eedotnetbench.pdf . Quite a bit of very informative engineering data, as well as prescriptive best practices for benchmarking, came out of this gathering of forces around the Pet Store/Shop reference app. Agendas aside for a moment, there is powerful and useful information to be found there.

    I’m wondering if implementing a “controlled” small subset of the Pet Store/Shop reference apps in the mentioned frameworks, e.g. the read/write component of the database piece (leaving aside for now the more transactional DB-focused parts), would be worth considering as a way to move forward the great work and the lively conversation you’ve started here. I would be interested in donating my “labor” (and not donating yours on your behalf!) to such an effort. Thank you for an illuminating morning here in MA.

    Reply
  49. Adam Cassel

    Adam here again: I neglected to leave the link for the .NET Pet Shop v3: http://msdn2.microsoft.com/en-us/library/ms954623.aspx . This is a great read from the methodology and “strict architecture requirement for benchmarking” perspectives. I hope this subject continues to garner interest in the PHP community.

    I will also leave this link, which points to a “bake-off” of Open Source stacks on Linux as well as on Windows: http://www.eweek.com/article2/0,1895,1983364,00.asp . It is an excellent piece, and as objective as one could hope for from an industry magazine; very impressive and useful knowledge.

    As for full disclosure: as I said in my prior post, I have a long history of MSFT-focused work, but a few months ago I took the plunge and went all in with LAMP. It behooves me to remember that integrity (for me, simply the alignment of one’s stated and personal principles with one’s actions) requires that one must, in the final analysis, put one’s money where one’s mouth is.

    And to pre-emptively ward off any confusion about where I am coming from: as an engineer, the stack I work on today, and the tools I use to get that work done, are not my religion, nor even my political persuasion. We should all take care to remember that being inclusive of divergent opinions and positions is the hallmark not only of innovation, but of an Open Society ( http://en.wikipedia.org/wiki/The_Open_Society_and_Its_Enemies ), and, I dare say, of the Open Source movement as well.

    Reply
  50. Manu

    It’s OK, but there’s nothing here for a Symfony beginner…

    Reply
  51. kabir rakholiya

    Well sir,

    I am very new to the web programming world, and I have tried hard to
    configure my Windows XP machine for a symfony project but was not able
    to do it. Can you guide me? My email is available with this comment.

    This blog is really a very useful blog; simply put, it is a big river
    of knowledge.

    Reply
  52. Pingback: Paul M. Jones » Blog Archive » A Bit About Benchmarks

  53. Pingback: O meu blog » O homem enervou-se...

  54. Pingback: Loud Baking » Blog Archive » Choosing a development framework

  55. Brian Reich

    I’d love to see this benchmark re-run on the current version of the Zend Framework, which I believe is at 1.1. I think its MVC structure has changed significantly since 0.2 and 0.6. It’s a lot more powerful, but it seems to have inherited the bloat that generally comes with that.

    In addition: were your servers running any sort of PHP opcode cache, such as APC? Again, another interesting aspect to factor into benchmarking : )

    Great article!

    Reply
  56. geo

    I guess the only problem here is PHP itself and its OOP implementation. I ran a few tests on CakePHP; without the framework I got 1800 req/sec.
    I started from index.php and placed die() after various parts of the code, including required files.
    I noticed the throughput dropped after the first require() of a big file, even though there was no code in that file other than function declarations. A bigger drop came after another require of a file containing just a class declaration. After a few such includes, the throughput decreased to under 500 req/sec.
    Once the engine actually started doing something, the throughput was around 100 req/sec. In the end I got 40 req/sec. :(
    I think PHP suffers from file I/O operations, especially on big files. I tried APC and eAccelerator, but the difference is not so encouraging.
    I also tried including one of the big files into the no-framework app. The same result: throughput dropped to 700 req/sec. With APC I get around 1000 req/sec. I think I’ll give up on optimization and try to scale up instead.

    Reply
  57. Pingback: Paul M. Jones » Blog Archive » Memory Leaks With Objects in PHP 5

  58. Pingback: Toby de Havilland » PHP Framework Benchmarking Results

  59. pulponair

    Paul, nice overview. Don’t you think it is time for another “benchmarking round”? I personally would like to see how, for example, Cake 1.2 and the latest releases of all the other frameworks have changed with regard to performance during the last 8 months.

    Reply
  60. Pingback: PHP Framework 性能PK at IcyRiver::Enjoy works, Enjoy life!

  61. Thanks

    Thanks for this great article! I was so curious about performance, and now I know. I’m lucky I have been using Solar for a while : )

    Thanks

    Reply
  62. Pingback: Paul M. Jones » Blog Archive » Speaking at PHP Works 2007

  63. Ronald

    Nice test! How about a 2008 update? It would be interesting to see whether newer versions mean faster script execution, or perhaps just more unused stuff loaded into memory.

    Reply
  64. kevin d

    +1 for the 2008 update. Much has happened since these tests, and I’ve seen nothing nearly as comprehensive. I’m currently teetering between the quality of code in Solar and the sizable community of Cake. In my rudimentary benchmarks, Solar embarrassed Cake, though I’d like some more recent, reliable data to consider.

    Not that I want to make work for you, but if you find the time, I’d be greatly appreciative.

    Thanks!
    Kev

    Reply
  65. Akash Mehta

    Let’s just put this into perspective, though: ignoring the fact that the app does not yet do anything, the symfony example can handle 60 requests per second. That’s over *five million* per day.
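    Akash's arithmetic holds up; a quick sanity check at a sustained 60 requests per second:

```python
# 60 requests per second, sustained for one day (60 * 60 * 24 seconds)
per_day = 60 * 60 * 60 * 24
print(f"{per_day:,} requests/day")  # 5,184,000
```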

    Reply
  66. Harold

    Has anyone done any “benchmarks” lately? It would be interesting to know how a “hello world” type of test compares to a “real world” test (e.g. with database calls).

    @kevin: Have you tried agavi? I have yet to decide whether to go for solar or agavi. I haven’t really used them on a project yet but based on what’s out there and the docs I have some minor nitpicks.

    Reply
  67. Aldemar

    Would you run the test again, including the new versions? =D

    By the way, I’m using APC instead of XCache, but for better performance I always open the page once or twice first, so that the files get cached without fragmentation; when I run the ab test without loading the page beforehand, APC gets a lot of fragmentation in its cache.

    Aldemar

    Reply
  68. Tom

    Coming back to this site later, now being on EC2 myself, this is even more valuable… thanks again… but I still want to see a Cake 1.2 test =( And I’m surprised Zend is faster; it’s one of the most screwed-up frameworks on this planet, but that’s primarily due to how it’s maintained, all the contributions, etc. I would also be interested in Code Igniter results.

    Reply
  69. Pingback: Paul M. Jones » Blog Archive » BREAD, not CRUD

  70. Pingback: Paul M. Jones » Blog Archive » Labor Day Benchmarks

  71. Pingback: Paul M. Jones » Blog Archive » … But Some Suck Less Than Others

  72. Erick

    Such nonsense in these tests. Symfony should be wrapped up and its “developers” sent home. Zend is a bulky, steaming pile of poo now. If one must use PHP, use a lightweight, elegant system like CodeIgniter, or just use Django with Python.

    Reply
  73. Pingback: TJ Wallace» Blog Archive » Frameworks and Web-Servers Benchmarked

  74. Pingback: Framework PHP, oui mais ! « Me, Myself And I

  75. Pingback: Paul M. Jones » Blog Archive » A Siege On Benchmarks

  76. Pingback: Paul M. Jones » Blog Archive » php|works 2007 Teaser: Framework Benchmarks!

  78. Pingback: Paul M. Jones » Blog Archive » The Future of Zend Framework is Solar

  79. Pingback: Cake vs Solar vs Symfony vs Zend Framework « ?????

  80. Pingback: Hello World?????? « BEAR Blog

  81. Pingback: Benchmarking Web Applications and Frameworks | Pomelicot

  82. Pingback: ¿Es lento PHP? | Programación en Internet

  83. Pingback: ¿Cómo competir con esto? | Programación en Internet
