Atlas.Orm 2.0 Is Now Stable

I am very happy to announce that Atlas, a data-mapper for your persistence layer in PHP, is now stable for production use! There are no changes, other than documentation updates, since the beta release two weeks ago.

You can get Atlas from Packagist via Composer by adding …

    "require": {
        "atlas/orm": "~2.0"
    }

… to your composer.json file.
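
Or, equivalently, run this at the command line:

    composer require atlas/orm:~2.0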

The updated documentation site is at atlasphp.io (with both 1.x and 2.x documentation).

Submit issues and pull requests as you see fit!

A Few Right Ways, But Infinitely More Wrong Ways

A response to the saying: “There’s no one ‘right’ way to do things. There are different ways of doing something that are ‘right’. So stop criticizing my chosen way of doing things — you cannot prove that it is wrong.”

For any question, there is a certain number of right answers, but an infinite number of wrong ones.

Likewise, there may be more than one right way, but that number is small in comparison to the infinite number of wrong ways.

Each right way is ephemeral and contingent, and has its own tradeoffs.

Each right way is dependent on your current understanding as applied to your current circumstances.

As your understanding changes with experience, and as your circumstances change over time, the way that is thought to be right will also change.

Sometimes that means realizing that your earlier understanding was wrong.

The novice says: “My new idea cannot be wrong, because there is no one right way.”

The master asks: “Is it more likely that I have a new way that is better, or a new way that is worse?”

The novice demands “proof” that their way is wrong.

The master asks which ways are “better” and which are “worse”, and picks the best one for the situation.

Sometimes the best way is still “bad”, but better than all the others; the master knows this, and does not defend it as “right.”

Atlas.Orm 2.0.0-beta1 Released

I am very happy to announce that I have just released 2.0.0-beta1 of Atlas, a data mapper for your SQL persistence model. You can get it via Composer as atlas/orm. This new major version requires PHP 7.1 and uses typehints, but best of all, it is backwards compatible with existing 1.x generated Atlas classes!

I.

The original motivation for starting the 2.x series was an edge-case bug. Atlas lets you open and manage transactions across multiple connections simultaneously; different table classes can point to different database connections. Atlas also lets you save an entire Record and all of its related records with a single call to persist() on a Mapper or Transaction.

However, in 1.x, a Transaction::persist() call does not work properly when the related records are mapped through connections other than the main Record’s. This is because the Transaction begins on the main Record’s connection, but has no access to the connections for the related records, and so cannot track them.
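
In code, the problematic 1.x pattern looks roughly like this (a sketch only; the mapper names and the second connection are hypothetical):

<?php
// Assume ThreadMapper uses the default connection, while AuthorMapper
// points to a second database connection (hypothetical setup).
$thread = $atlas->fetchRecord(ThreadMapper::CLASS, $thread_id, [
    'author',
]);

$thread->title = 'New Title';
$thread->author->name = 'New Name';

// 1.x: the Transaction begins on the ThreadMapper connection only;
// the AuthorMapper connection is never tracked, so the author write
// is not protected by the transaction.
$transaction = $atlas->newTransaction();
$transaction->persist($thread);
$success = $transaction->exec();
?>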

Admittedly, this is an uncommon operation, but it ought to be supported properly. The only way to fix it was to break backwards compatibility on the Table and Transaction classes, both in their constructors and in their internal operations, by introducing a new ConnectionManager for them to use for connection and transaction management.

II.

As long as backwards compatibility breaks were going to happen anyway, that created the opportunity to upgrade the package to use PHP 7.1 and typehints. But even with that in mind …

  • You do not need to modify or regenerate any classes generated from 1.x (although if you have overridden class methods in custom classes, you may need to modify that code to add typehints).

  • Atlas 2.x continues to use Aura.Sql and Aura.SqlQuery 2.x, so you do not need to change any queries from Atlas 1.x.

  • You do not need to change any calls to AtlasContainer for setup.

So, the majority of existing code using Atlas 1.x should not have to change at all after upgrading to 2.x.

III.

There are some minor but breaking changes to the return values of some Atlas calls; these are a result of using typehints, especially nullable types.

First, the following methods now return null (instead of false) when they fail:

  • Atlas::fetchRecord()
  • Atlas::fetchRecordBy()
  • Mapper::fetchRecord()
  • Mapper::fetchRecordBy()
  • MapperSelect::fetchRecord()
  • RecordSet::getOneBy()
  • RecordSet::removeOneBy()
  • Table::fetchRow()
  • Table::updateRowPerform()
  • TableSelect::fetchOne()
  • TableSelect::fetchRow()

Checking for loosely false-ish return values will continue to work in 2.x as it did in 1.x, but if you were checking strictly for false you now need to check strictly for null – or switch to loose checking. (Note that values for a missing related record field are still false, not null. That is, a related field value of null still indicates “there was no attempt to fetch a related record,” while false still indicates “there was an attempt to fetch a related record, but it did not exist.”)
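
For example, with a hypothetical ThreadMapper:

<?php
$thread = $atlas->fetchRecord(ThreadMapper::CLASS, $thread_id);

// 1.x style strict check: never true in 2.x, since failures now
// return null instead of false
if ($thread === false) {
    // ...
}

// 2.x style strict check
if ($thread === null) {
    // no such record
}

// loose check: works the same in both 1.x and 2.x
if (! $thread) {
    // no such record
}
?>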

Second, the following methods will now always return a RecordSet, even when no records are found. (Previously, they would return an empty array when no records were found.)

  • Atlas::fetchRecordSet()
  • Atlas::fetchRecordSetBy()
  • Mapper::fetchRecordSet()
  • Mapper::fetchRecordSetBy()
  • MapperSelect::fetchRecordSet()

Whereas previously you would check for an empty array or loosely false-ish value as the return value, you should now call isEmpty() on the returned RecordSet.
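
Again with a hypothetical ThreadMapper, the check changes like so:

<?php
$threads = $atlas->fetchRecordSet(ThreadMapper::CLASS, $thread_ids);

// 1.x style: an empty array was loosely false, but a RecordSet object
// is always truthy, so this check no longer works in 2.x
if (! $threads) {
    // ...
}

// 2.x style
if ($threads->isEmpty()) {
    // no records found
}
?>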

That is the extent of user-facing backwards compatibility breaks.

IV.

There is one major new piece in 2.x: the table-level ConnectionManager. It is in charge of transaction management for table operations, and allows easy setting of read and write connections to be used for specific tables if desired.

It also allows for on-the-fly replacement of “read” connections with “write” connections. You can specify that this should…

  • never happen (the default)
  • always happen (which is useful for GET-after-POST situations where master-slave replication lag is high), or
  • happen only when a table has started writing back to the database (which is useful for synchronizing reads with writes while in a transaction)
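
As a rough sketch, configuring those modes might look like the following; note that the setReadFromWrite() method and the mode names here are assumptions for illustration, not the documented API, so consult the in-package docs for the real calls:

<?php
// Hypothetical sketch only: method and mode names are assumptions.
$connectionManager->setReadFromWrite('NEVER');         // the default
$connectionManager->setReadFromWrite('ALWAYS');        // GET-after-POST under replication lag
$connectionManager->setReadFromWrite('WHILE_WRITING'); // sync reads with writes in a transaction
?>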

Other than that, you should not have to deal with the ConnectionManager directly, but it’s good to know what’s going on under the hood.

V.

Although this is a major release, the user-facing backwards-compatibility changes are very limited, so it should be an easy migration from 1.x. There is very little new functionality, so I’m releasing this as a beta instead of an alpha. I have yet to update the official documentation site, but the in-package docs are fully up-to-date.

I can think of no reason to expect significant API changes between now and a stable release, which should arrive in the next couple of weeks if there are no substantial bug reports.

Thanks for reading, and I hope that Atlas can help you out on your next project!

“Before” (not “Beyond”) Design Patterns

(N.b.: This has been languishing at the bottom of my blog heap for years; time to get it into the sun.)

The 2013 article Beyond Design Patterns takes the approach of reducing all design patterns to a single pattern: “Abstracting Communication Between ‘Components’.” The author concludes, in part:

Instead of focusing on design patterns, I would suggest focusing on understanding how communication between objects and components happens. How does an object “decide” what to do? How does it communicate that intention to other objects?

Are design patterns useful? Absolutely. But I’ll assert that once you understand OOP and object communication, the patterns will “fall out” of the code you write. They come from writing OOP.

This strikes me as misapplied reduction. Sure, it’s true that, at a lower level or scope, it might be fair to say that “the pattern is ‘communication.’” But it is also true that, at a somewhat higher level or scope, it is fair to ask about communication between “which things” in “what way.” So the article might better be titled not “Beyond Design Patterns” but “Beneath Design Patterns” or “Before Design Patterns”. The concepts illustrated are not consequences of or improvements on design patterns; they are prior to design patterns.

The analogy that came to my mind was one of molecules and atoms. An imaginary article on chemistry titled “Beyond Molecules” might thus conclude …

Instead of focusing on molecules, I would suggest focusing on understanding how interaction between atoms happens. How does an atom “decide” what to do? How does it communicate that intention to other atoms?

Are molecules useful? Absolutely. But I’ll assert that once you understand atoms and atomic interaction, the molecules will “fall out” of the formulas you write.

… which is true enough as far as it goes, but is also revealed as a rather superficial and mundane observation when presented this way. Atoms are not “beyond” molecules.

So: molecules are a proper unit of understanding at their level or scope. Likewise, design patterns are a proper unit of understanding at their own level. Reducing them further does not show you anything “beyond” – it only shows you “because.”

A “Systems” Addendum To Semantic Versioning

tl;dr: When using semantic versioning, consider not only changes to the public API, but also changes to system requirements.


Semantic Versioning concentrates on the public API signature as the determiner of when to change version numbers. If the public API changes in a backwards-incompatible way, then you have to bump the major version number. When I upgrade a dependency to a minor or patch version, I am hypothetically assured that the upgrade will complete successfully, and that I will not have to change anything about my existing system or codebase.

However, I have come to think that while SemVer’s concentration on the public API is a necessary indicator that an upgrade requires no other changes, it is not a sufficient indicator. Here’s a real-world upgrade scenario:

  1. Package 1.0.0 is released, dependent on features available in Language 4.0.0.

  2. Package 1.1.0 is released, with a public API identical to 1.0.0, but is now dependent on new features available only in Language 4.1.0.

SemVer is honored here by both the Package and the Language. Neither has introduced backwards-incompatible public API changes: the Package API is unchanged, and the Language API adds only new backwards-compatible features.

But now the Package internals depend on the new features of the Language. As such, it is not possible for the Package 1.0.0 consumers to upgrade to Package 1.1.0 without also upgrading to Language 4.1.0. If they try to upgrade, the system will break, even though SemVer indicates it should be safe to upgrade.

It looks like there is more to backwards compatibility than just the public API.

II.

Some will argue that when the package requirements are properly specified in a package manifest, the package manager prevents a break from occurring in the above scenario. The package manager, reading the manifest, will note that Language 4.1.0 is not installed, and refuse to upgrade Package from 1.0.0 to 1.1.0.

However, this is beside the point that I am trying to make. Yes, the package manager prevents breaking installations based on a manifest. But that prevention does not mean that the changes introduced in Package 1.1.0 are backwards-compatible with a system running 1.0.0. The 1.1.0 changes require a system modification because of a requirements incompatibility.
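
For instance, mapping the generic “Language” onto PHP purely for illustration, the Package 1.1.0 manifest might declare …

    "require": {
        "php": ">=7.1.0"
    }

… and the package manager will then refuse to install Package 1.1.0 on a PHP 7.0 system, even though the 1.0.0-to-1.1.0 version change promises a backwards-compatible upgrade.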

So it is true that the package manager provides a defense against breaks, but it is also true that the version number of the package does not indicate a backwards incompatibility with the existing system. It is that indication of incompatibility (among other things) that makes SemVer valuable.

III.

I opine that requiring a change in the public environment into which a package is installed is just as major an incompatibility as introducing a breaking change to the public API of the package. To cover that case, I offer the following as a draft addendum to the SemVer spec:

  • If the package consumer has to change a publicly-available system resource to upgrade a package, then the package upgrade is not backwards-compatible with the existing system, and the package SHOULD receive a major version bump.

Using “SHOULD” makes this rule somewhat less strict than the MUST of a major version bump when changing the package API. It’s possible that Language 4.0.0 is no longer supported or installed on any of the target systems, in which case it might be reasonable for a Package minor version to require a change to the Language version. But you do need a good technical or practical reason (not a marketing reason 😉) to avoid the major version bump when changing the public system requirements for the package.

(You may also wish to review Romantic Versioning for an additional take on Semantic Versioning.)

Hacking, Refactoring, Rewriting, and Technical Debt

(Not so much “lessons” here, as “observations and recollections.”)

I.

From ESR, How To Learn Hacking:

  • Hacking is done on open source. Today, hacking skills are the individual micro-level of what is called “open source development” at the social macro-level. [2] A programmer working in the hacking style expects and readily uses peer review of source code by others to supplement and amplify his or her individual ability.
  • Hacking is lightweight and exploratory. Rigid procedures and elaborate a-priori specifications have no place in hacking; instead, the tendency is try-it-and-find-out with a rapid release tempo.
  • Hacking places a high value on modularity and reuse. In the hacking style, you try hard never to write a piece of code that can only be used once. You bias towards making general tools or libraries that can be specialized into what you want by freezing some arguments/variables or supplying a context.
  • Hacking favors scrap-and-rebuild over patch-and-extend. An essential part of hacking is ruthlessly throwing away code that has become overcomplicated or crufty, no matter how much time you have invested in it.

This strikes me as the opposite of, or at best orthogonal to, the practice of refactoring a large and long-existing codebase, especially that last point. “Scrap and rebuild” is essentially “rewrite from scratch,” and that is almost always a losing proposition in terms of time and budget.

But then, the description also says hacking is “lightweight, exploratory, modular” — so perhaps it is possible to hack at components within a larger or critical system, and then refactor the system using that result. Cf. Hacking and Refactoring, also from ESR.

II.

I used to work for a consulting company with One Big Client (and a few smaller ones). This One Big Client generated a staggering amount of money; its business model was such that even a few minutes of downtime would result in large losses (technically “lost revenue” but the effect is the same).

The consultancy’s director of development for the One Big Client didn’t “believe” in technical debt. Or rather, he thought technical debt was a non-issue. He would write, or demand writing, some of the most horrific code, as long as it was written *quickly*, because the time constraints for the One Big Client were so overwhelming. There was no refactoring; he and his team would iterate fast, then wipe out the entire codebase and start all over again on a regular basis. (By “regular” I mean every 3-6 months.) Any technical debt that was incurred was tiny in comparison to the monetary returns — although it did take a toll on the programmers themselves.

Given ESR above, that’s clearly a “hacking” approach to a critical codebase. Also, it’s clear that *sometimes* a full-on rewrite might be the right thing to do, provided the right context. But I’m betting the vast majority of developers (I’m talking “five nines” here) are not operating in the context of that One Big Client and that particular consultancy. These were “skilled developers knowingly writing low-quality code” — not to be confused with “*un*skilled developers *un*knowingly writing low-quality code.” (Cf. Mehdi Khalili: “On Bad Code.”)

How Terrible Code Gets Written By Perfectly Sane People

From Christian Maioli Mackeprang, the following:

Legacy code can be nasty, but I’ve been programming for 15 years and only a couple of times had I seen something like this. The authors had created their own framework, and it was a perfect storm of anti-patterns. …

And yet, it was not the code’s dismal quality that piqued my interest and led me to write this article. What I discovered after some months working there, was that the authors were actually an experienced group of senior developers with good technical skills. What could lead a team of competent developers to produce and actually deliver something like this?

I don’t necessarily buy all of these, but they’re worth thinking about:

  • Giving excessive importance to estimates: “Your developers might choose to over-promise instead, and then skip important work such as thinking about architectural problems, or how to automate tasks, in order to meet an unrealistic deadline.”

  • Giving no importance to project knowledge: “If you’re in a large project and there are modules for which you have no expert, no-one to ask about, that’s a big red flag.”

  • Focusing on poor metrics such as “issues closed” or “commits per day”: “When a measure becomes a target, it ceases to be a good measure. (Goodhart’s law)”

  • Assuming that good process fixes bad people: “[Fix your hiring.] Talent makes up for any other inefficiency in your team.”

  • Ignoring proven practices such as code reviews and unit testing: “Refraining from using tools such as modern IDEs, version control and code inspection will set you back tremendously.”

  • Hiring developers with no “people” skills: “It doesn’t matter that the other guy might be ten thousand miles away. One Skype call can turn a long coding marathon into a five-minute fix.”

Conclusion? “What you can’t assume is that just because you’ve signed up to apply Agile or some other tool, that nothing else matters and things will sort themselves out.”

Slim and Action-Domain-Responder

I’ve had a warm place in my heart for Slim for a long time, and especially so since recognizing the Action-Domain-Responder pattern. In this post, I’ll show how to refactor the Slim tutorial application to ADR.

One nice thing about Slim (and most other HTTP user interface frameworks) is that they are already “action” oriented. That is, their routers do not presume a controller class with many action methods. Instead, they presume an action closure or a single-action invokable class. So the Action part of Action-Domain-Responder already exists for Slim. All that is needed is to pull the extraneous bits out of the Actions, more clearly separating the Action behaviors from the Domain behaviors and the Responder behaviors.

I.

Let’s begin by extracting the Domain logic. In the original tutorial, the Actions use two data-source mappers directly, and embed some business logic as well. We can create a Service Layer class called TicketService and move those operations from the Actions into the Domain. Doing so gives us this class:

<?php
class TicketService
{
    protected $ticket_mapper;
    protected $component_mapper;

    public function __construct(
        TicketMapper $ticket_mapper,
        ComponentMapper $component_mapper
    ) {
        $this->ticket_mapper = $ticket_mapper;
        $this->component_mapper = $component_mapper;
    }

    public function getTickets()
    {
        return $this->ticket_mapper->getTickets();
    }

    public function getComponents()
    {
        return $this->component_mapper->getComponents();
    }

    public function getTicketById($ticket_id)
    {
        $ticket_id = (int) $ticket_id;
        return $this->ticket_mapper->getTicketById($ticket_id);
    }

    public function createTicket($data)
    {
        $component_id = (int) $data['component'];
        $component = $this->component_mapper->getComponentById($component_id);

        $ticket_data = [];
        $ticket_data['title'] = filter_var(
            $data['title'],
            FILTER_SANITIZE_STRING
        );
        $ticket_data['description'] = filter_var(
            $data['description'],
            FILTER_SANITIZE_STRING
        );
        $ticket_data['component'] = $component->getName();

        $ticket = new TicketEntity($ticket_data);
        $this->ticket_mapper->save($ticket);
        return $ticket;
    }
}
?>

We create a container object for it in index.php like so:

<?php
$container['ticket_service'] = function ($c) {
    return new TicketService(
        new TicketMapper($c['db']),
        new ComponentMapper($c['db'])
    );
};
?>

And now the Actions can use the TicketService instead of performing domain logic directly:

<?php
$app->get('/tickets', function (Request $request, Response $response) {
    $this->logger->addInfo("Ticket list");
    $tickets = $this->ticket_service->getTickets();
    $response = $this->view->render(
        $response,
        "tickets.phtml",
        ["tickets" => $tickets, "router" => $this->router]
    );
    return $response;
});

$app->get('/ticket/new', function (Request $request, Response $response) {
    $components = $this->ticket_service->getComponents();
    $response = $this->view->render(
        $response,
        "ticketadd.phtml",
        ["components" => $components]
    );
    return $response;
});

$app->post('/ticket/new', function (Request $request, Response $response) {
    $data = $request->getParsedBody();
    $this->ticket_service->createTicket($data);
    $response = $response->withRedirect("/tickets");
    return $response;
});

$app->get('/ticket/{id}', function (Request $request, Response $response, $args) {
    $ticket = $this->ticket_service->getTicketById($args['id']);
    $response = $this->view->render(
        $response,
        "ticketdetail.phtml",
        ["ticket" => $ticket]
    );
    return $response;
})->setName('ticket-detail');
?>

One benefit here is that we can now test the domain activities separately from the actions. We can begin to do something more like integration testing, even unit testing, instead of end-to-end system testing.
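
For example, here is a minimal PHPUnit-style sketch (assuming PHPUnit is available; the mocked return value is purely illustrative) that exercises TicketService without touching a database:

<?php
use PHPUnit\Framework\TestCase;

class TicketServiceTest extends TestCase
{
    public function testGetTickets()
    {
        // mock the mappers so no database connection is needed
        $ticket_mapper = $this->createMock(TicketMapper::class);
        $ticket_mapper
            ->method('getTickets')
            ->willReturn(['fake ticket']);
        $component_mapper = $this->createMock(ComponentMapper::class);

        $service = new TicketService($ticket_mapper, $component_mapper);
        $this->assertSame(['fake ticket'], $service->getTickets());
    }
}
?>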

II.

In the case of the tutorial application, the presentation work is so straightforward as to not require a separate Responder for each action. A relaxed variation of a Responder layer is perfectly suitable in this simple case, one where each Action uses a different method on a common Responder.

Extracting the presentation work to a separate Responder, so that response-building is completely removed from the Action, looks like this:

<?php

use Psr\Http\Message\ResponseInterface as Response;
use Slim\Views\PhpRenderer;

class TicketResponder
{
    protected $view;

    public function __construct(PhpRenderer $view)
    {
        $this->view = $view;
    }

    public function index(Response $response, array $data)
    {
        return $this->view->render(
            $response,
            "tickets.phtml",
            $data
        );
    }

    public function detail(Response $response, array $data)
    {
        return $this->view->render(
            $response,
            "ticketdetail.phtml",
            $data
        );
    }

    public function add(Response $response, array $data)
    {
        return $this->view->render(
            $response,
            "ticketadd.phtml",
            $data
        );
    }

    public function create(Response $response)
    {
        return $response->withRedirect("/tickets");
    }
}
?>

We can then add the TicketResponder object to the container in index.php:

<?php
$container['ticket_responder'] = function ($c) {
    return new TicketResponder($c['view']);
};
?>

And finally we can refer to the Responder, instead of just the template system, in the Actions:

<?php
$app->get('/tickets', function (Request $request, Response $response) {
    $this->logger->addInfo("Ticket list");
    $tickets = $this->ticket_service->getTickets();
    return $this->ticket_responder->index(
        $response,
        ["tickets" => $tickets, "router" => $this->router]
    );
});

$app->get('/ticket/new', function (Request $request, Response $response) {
    $components = $this->ticket_service->getComponents();
    return $this->ticket_responder->add(
        $response,
        ["components" => $components]
    );
});

$app->post('/ticket/new', function (Request $request, Response $response) {
    $data = $request->getParsedBody();
    $this->ticket_service->createTicket($data);
    return $this->ticket_responder->create($response);
});

$app->get('/ticket/{id}', function (Request $request, Response $response, $args) {
    $ticket = $this->ticket_service->getTicketById($args['id']);
    return $this->ticket_responder->detail(
        $response,
        ["ticket" => $ticket]
    );
})->setName('ticket-detail');
?>

Now we can test the response-building work separately from the domain work.

Some notes:

Putting all the response-building in a single class with multiple methods, especially for simple cases like this tutorial, is fine to start with. For ADR, it is not strictly necessary to have one Responder for each Action. What is necessary is to extract the response-building concerns out of the Action.

But as the presentation logic complexity increases (content-type negotiation? status headers? etc.), and as dependencies become different for each kind of response being built, you will want to have a Responder for each Action.

Alternatively, you might stick with a single Responder, but reduce its interface to a single method. In that case, you may find that using a Domain Payload (instead of “naked” domain results) has some significant benefits.
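
A sketch of that single-method variation follows; the TicketPayload class here is hypothetical, not part of the tutorial code, and a real Domain Payload implementation would likely be more general:

<?php
use Psr\Http\Message\ResponseInterface as Response;
use Slim\Views\PhpRenderer;

// Hypothetical Domain Payload: a status plus the domain output.
class TicketPayload
{
    const LISTED  = 'LISTED';
    const FOUND   = 'FOUND';
    const NEWABLE = 'NEWABLE';
    const CREATED = 'CREATED';

    protected $status;
    protected $output;

    public function __construct($status, array $output = [])
    {
        $this->status = $status;
        $this->output = $output;
    }

    public function getStatus()
    {
        return $this->status;
    }

    public function getOutput()
    {
        return $this->output;
    }
}

class TicketResponder
{
    protected $view;

    // map payload statuses to templates; the Domain never sees these
    protected $templates = [
        TicketPayload::LISTED  => 'tickets.phtml',
        TicketPayload::FOUND   => 'ticketdetail.phtml',
        TicketPayload::NEWABLE => 'ticketadd.phtml',
    ];

    public function __construct(PhpRenderer $view)
    {
        $this->view = $view;
    }

    // the single entry point: the payload status, not the calling
    // Action, determines how the response gets built
    public function respond(Response $response, TicketPayload $payload)
    {
        if ($payload->getStatus() == TicketPayload::CREATED) {
            return $response->withRedirect("/tickets");
        }

        return $this->view->render(
            $response,
            $this->templates[$payload->getStatus()],
            $payload->getOutput()
        );
    }
}
?>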

III.

At this point, the Slim tutorial application has been converted to ADR. We have separated the domain logic to a TicketService, and the presentation logic to a TicketResponder. And it’s easy to see how each Action does pretty much the same thing (sketched as an invokable class after this list):

  • Marshals input and passes it into the Domain
  • Gets back a result from the Domain and passes it to the Responder
  • Invokes the Responder so it can build and return the Response
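
To see those three steps as a single-action invokable class (a hypothetical sketch; the tutorial itself keeps the closures shown above):

<?php
use Psr\Http\Message\ServerRequestInterface as Request;
use Psr\Http\Message\ResponseInterface as Response;

class TicketDetailAction
{
    protected $ticket_service;
    protected $ticket_responder;

    public function __construct(
        TicketService $ticket_service,
        TicketResponder $ticket_responder
    ) {
        $this->ticket_service = $ticket_service;
        $this->ticket_responder = $ticket_responder;
    }

    public function __invoke(Request $request, Response $response, array $args)
    {
        // 1. marshal input and pass it into the Domain
        $ticket = $this->ticket_service->getTicketById($args['id']);

        // 2. and 3. pass the result to the Responder and return its Response
        return $this->ticket_responder->detail($response, ["ticket" => $ticket]);
    }
}
?>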

Now, for a simple case like this, using ADR (or even webbishy MVC) might seem like overkill. But simple cases become complex quickly, and this simple case shows how the ADR separation-of-concerns can be applied as a Slim-based application increases in complexity.

Why MVC doesn’t fit the web

[MVC is] a particular way to break up the responsibilities of parts of a graphical user interface application. One of the prototypical examples is a CAD application: models are the objects being drawn, in the abstract: models of mechanical parts, architectural elevations, whatever the subject of the particular application and use is. The “Views” are windows, rendering a particular view of that object. There might be several views of a three-dimensional part from different angles while the user is working. What’s left is the controller, which is a central place to collect actions the user is performing: key input, the mouse clicks, commands entered.

The responsibility goes something like “controller updates model, model signals that it’s been updated, view re-renders”.

This leaves the model relatively unencumbered by the design of whatever system it’s being displayed on, and lets the part of the software revolving around the concepts the model involves stay relatively pure in that domain. Measurements of parts in millimeters, not pixels; cylinders and cogs, rather than lines and z-buffers for display.

The View stays unidirectional: it gets the signal to update, it reads the state from the model and displays the updated view.

Even the controller is pretty disciplined: it takes input and makes it into definite commands and updates to the models.

Now if you’re wondering how this fits into a web server, you’re probably wondering the same thing I wondered for a long time. The pattern doesn’t fit.

Read the rest at Why MVC doesn’t fit the web.

The “Micro” Framework As “User Interface” Framework

(The following is more exploratory than prescriptive. My thoughts on this topic are incomplete and in-progress. Please try to treat it accordingly.)


tl;dr: “Micro” frameworks are better described as “user interface” frameworks; perhaps there should be corollary “infrastructure” frameworks; consider using two frameworks and/or two containers, one for the user interface and a separate one for the core application.

I.

When we talk about “full stack” frameworks, we mean something that incorporates tools for every part of a server-side application:

  • incoming-request handling through a front controller or router
  • dispatching routes to a page controller or action (typically a class constructed through a DI container)
  • database, email, logging, queuing, sessions, HTTP client, etc.; these are used in a service, model, controller, or action to support business logic
  • outgoing-response creation and sending, often combined with a template system

Examples in PHP include Cake, CodeIgniter, Fuel, Kohana, Laravel, Opulence, Symfony, Yii, Zend Framework, and too many others to count.

When we talk about “micro” frameworks, we mean something that concentrates primarily on the request-handling and response-building parts of a server-side application, and leaves everything else out:

  • incoming-request handling through a front controller or router
  • dispatching routes to a page controller or action (typically a closure that receives a service locator, though sometimes a class constructed through a DI container)
  • outgoing-response creation and sending, often combined with a template system

Slim is the canonical PHP example here; others include Equip, Lumen, Silex, Radar, Zend Expressive, and (again) too many others to count.

II.

I have asserted elsewhere that, in an over-the-network HTTP-oriented server-side application, the View (of Model-View-Controller) is not the template. The View is the entire HTTP response: not just the HTML or the body, but the status line, the headers, all cookies, and so on. That is, the full HTTP response is the presentation; it is the output of the server-side application.

If the HTTP response is the output, then what is the user input into the server-side application? It is not the form being submitted by the user, per se. It is not a key-press or button-click or mouse-movement. The input is the entire HTTP request, as sent by the client to the server.

That means the user interface for a server-side application is not “the HTML being viewed on the screen.” The user interface is the HTTP request and response.

Now, for the big jump:

If the user interface is the request (as input), and the response (as output), that means micro-frameworks are not so much “micro” frameworks, as they are “user interface” frameworks.

Consider that for a moment before continuing.

III.

If it is true that “micro” frameworks are more accurately described as “user interface” frameworks, it opens some interesting avenues of thought.

For one: it means that full-stack frameworks combine user interface concerns with other concerns (notably but not exclusively infrastructure concerns).

For another: perhaps in addition to user interface (“micro”) frameworks, it would be useful to have something like a separate “infrastructure” framework. It might not even be a framework proper; it might only be a container for shared services that have no relation to any user interface concerns. (I can imagine extracting such a toolkit collection from any of the full-stack frameworks, to be offered on its own.)

IV.

Speaking of containers:

It is often the case, even with user interface (“micro”) frameworks, that developers will store their infrastructure and services objects in the framework-provided container. Or, that they will build “providers” or “configuration” for the framework-provided container, so that the container can create and retain the infrastructure and services objects.

But if it is true that the “micro” framework is a “user interface” system, why would we manage infrastructure and domain objects through a user interface container? (This is a variation on “Why is your application/domain/business logic in your user interface controller?”)

Perhaps it would be better for the infrastructure and/or domain objects to be managed through a container of their own. Then that completely separate container can be injected into any user interface container. The benefit here is that the infrastructure and/or domain objects are now fully separated from the user interface layer, and can be used with any user interface framework, even one that uses a completely different container for its own user interface purposes. (This is a variation on “composition to reduce coupling.”)
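
As a rough sketch of that idea, using Pimple-style containers and the TicketService classes from the Slim post above (the wiring itself is hypothetical):

<?php
use Pimple\Container;

// 1. A container for domain and infrastructure services only; it
//    knows nothing about HTTP, routing, or templates.
$domain = new Container();
$domain['pdo'] = function () {
    return new PDO('sqlite::memory:'); // hypothetical connection
};
$domain['ticket_service'] = function ($c) {
    return new TicketService(
        new TicketMapper($c['pdo']),
        new ComponentMapper($c['pdo'])
    );
};

// 2. The user interface container receives the whole domain container
//    as a single composed service.
$ui = new Container();
$ui['domain'] = function () use ($domain) {
    return $domain;
};

// A user interface action can then reach through that one boundary:
// $tickets = $ui['domain']['ticket_service']->getTickets();
?>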

V.

To conclude:

It might be worth considering the use of two frameworks in server-side applications, each with its own container: one for the user interface concerns, and a separate one for infrastructure/domain concerns. (The latter might be composed into any user interface framework, whether one for the web or one for the command line.)

And, given that lots of application work starts mostly as infrastructure coordination, with little domain logic, having a second framework separated from the user interface makes it easier to put a boundary around the infrastructure-based application code, separating it from the user interface code. The work on each can proceed in parallel; the user interface coders can mock the results from the application layer, and build everything out on their own path, while the application coders don’t need to worry too much about presentation concerns (whether HTTP or CLI based).


Unlike some other people, I am not my code, or my books, or my blog posts. Given the tentative nature of the above essay, review and criticism (especially substantial criticism) are vital. It won’t hurt me in the least bit; indeed, I encourage it.

Thanks for reading.