Considering Typehints As Communication

Typehints help communicate across time and space, to people who may never meet you or who might not be able to interrogate you about your code, so those people can understand how you expect the code to work.

Adding typehints is a succinct, more-complete form of communication than not adding them. (It is rare, perhaps impossible, for all communication to be fully complete all the time.)

Further, you don’t know in advance which parts of the codebase are going to last for a long time, and which are going to be replaced in relatively short order. It’s probably better to add the typehints when you know what they are, rather than to wait and see if you’ll “need” them later.

Typehints can be considered low-cost mistake-proofing against misunderstanding in an obvious place (i.e., where they are used), without having to look elsewhere (“just read the tests!” [groan]).
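
As a minimal sketch of the difference (the function and parameter names here are invented for illustration):

<?php
declare(strict_types=1);

// Without typehints, a later reader must guess what $items and $rate
// are, and what comes back.
function subtotal($items, $rate)
{
    // ...
}

// With typehints, the expectations travel with the code itself: an
// array of item prices and a float tax rate go in, a float comes out.
function subtotalWithHints(array $items, float $rate): float
{
    $sum = 0.0;
    foreach ($items as $price) {
        $sum += $price;
    }
    return $sum * (1.0 + $rate);
}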

Solving The “Widget Problem” In ADR

The “widget problem” arises when several panels or content areas on an HTML page draw from different data sources. You might have a main content area, then a calendar off to the side, with perhaps a list of recent news items or blog posts, a todo or reminder widget, and maybe other information panels. The problem is that each has its own data source, and may not be displayed in every circumstance; perhaps some widgets are shown only to certain users based on their preferences, or only under certain conditions.

So how, in Action-Domain-Responder, do we get the “right” data for the set of widgets that are actually going to be displayed? (We’ll presume here that the entire page is being rendered server-side, for delivery as a whole to the client.)

The answer is “the same as with anything else” – we just have more kinds of data to get from the domain. The domain has all the data needed, and the knowledge necessary to figure out which data elements to return.

Let’s start with the Action, which is intentionally very spare: it only collects input, calls the Domain with that input, then invokes the Responder:

<?php
class PageAction
{
    private $domain;
    private $responder;

    public function __construct(
        PageService $domain,
        PageResponder $responder
    ) {
        $this->domain = $domain;
        $this->responder = $responder;
    }

    public function __invoke(HttpRequest $request)
    {
        $payload = $this->domain->fetchPageData(
            $request->getAttribute('sessionId'),
            $request->getAttribute('pageName')
        );
        return $this->responder->respond($request, $payload);
    }
}

The domain work is where the heavy lifting happens. The example below returns the domain objects and data wrapped in a Domain Payload object.

<?php
class PageService
{
    // presume $userService, $sessionService, and $database
    // dependencies are injected via constructor

    public function fetchPageData($sessionId, $pageName)
    {
        $session = $this->sessionService->resume($sessionId);
        $user = $this->userService->fetch($session->userId);

        // the main page data
        $mainData = $this->fetchMainData($pageName);
        if (! $mainData) {
            return new Payload('NOT_FOUND');
        }

        // an array of widgets to show
        $widgets = [];

        // different users might prefer to see different widgets
        foreach ($user->getWidgetsToShow() as $widgetName) {
            $method = "fetch{$widgetName}Data";
            $widgets[$widgetName] = $this->$method();
        }

        $this->sessionService->commit($session);

        return new Payload('FOUND', [
            'user' => $user,
            'page_name' => $pageName,
            'main_data' => $mainData,
            'widgets' => $widgets
        ]);
    }

    protected function fetchMainData($pageName)
    {
        return $this->database->fetchRow(
            "SELECT * FROM pages WHERE page_name = ? LIMIT 1",
            $pageName
        );
    }

    protected function fetchTodoData() { ... }

    protected function fetchRemindersData() { ... }

    protected function fetchUpcomingEventsData() { ... }

    protected function fetchCalendarData() { ... }

}

Finally, the Responder work becomes as straightforward as: “Is there Todo data to present? Then render the Todo widget via a Todo helper using the Todo data.” It could be as simple as this:

<?php
class PageResponder
{
    // ...

    public function respond(HttpRequest $request, Payload $payload)
    {
        if ($payload->getStatus() == 'NOT_FOUND') {
            return new Response(404);
        }

        $output = $payload->getOutput();

        $html = '';
        $html .= $this->renderHeader($request, $output['page_name']);
        $html .= $this->renderNav($request);
        $html .= $this->renderMain($request, $output['main_data']);

        foreach ($output['widgets'] as $widgetName => $widgetData) {
            $method = "render{$widgetName}Html";
            $html .= $this->$method($request, $widgetData);
        }

        return new Response(200, $html);
    }

    protected function renderHeader($request, $pageName) { ... }

    protected function renderNav($request) { ... }

    protected function renderMain($request, $mainData) { ... }

    protected function renderTodoHtml($request, $widgetData) { ... }

    protected function renderRemindersHtml($request, $widgetData) { ... }

    protected function renderUpcomingEventsHtml($request, $widgetData) { ... }

    protected function renderCalendarHtml($request, $widgetData) { ... }
}

One alternative here is for client-side JavaScript to make one additional call per widget or panel to retrieve widget-specific data, then render that data on the client. The server-side work becomes less complex (one action per widget, transforming the data to JSON instead of HTML), but the client-side work becomes more complex, and you have more HTTP calls back and forth to build the page.
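
In ADR terms, each server-side widget action in that alternative might look something like this sketch (the class and service names are invented for illustration):

<?php
class TodoWidgetAction
{
    private $domain;

    public function __construct(TodoService $domain)
    {
        $this->domain = $domain;
    }

    public function __invoke(HttpRequest $request)
    {
        $payload = $this->domain->fetchTodoData(
            $request->getAttribute('sessionId')
        );
        // Deliver JSON for the client to render, instead of HTML.
        return new Response(200, json_encode($payload->getOutput()));
    }
}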

Avoid Dependency Injection

At least, avoid it when building DDD Aggregates:

Dependency injection of a Repository or a Domain Service into an Aggregate should generally be viewed as harmful. The motivation may be to look up a dependent object instance from inside the Aggregate. The dependent object could be another Aggregate, or a number of them. … Preferably, dependent objects are looked up before an Aggregate command method is invoked, and passed into it.

… Take great care not to add unnecessary overhead that could be easily avoided by using other design principles, such as looking up dependencies before an Aggregate command method is invoked, and passing them into it.

This is only meant to warn against injecting Repositories and Domain Services into Aggregate instances. Of course, dependency injection is still quite suitable for many other design situations. For example, it could be quite useful to inject Repository and Domain Service references into Application Services.

— “Implementing Domain-Driven Design”, Vaughn Vernon, p 387.


On a related note, regarding where Entity validation logic goes, we have this …

Validation is a separate concern and should be the responsibility of a validation class, not a domain object.

— ibid., p 208

… and this …

Embedding validation logic inside an Entity gives it too many responsibilities. It already has the responsibility to address domain behavior as it maintains its state.

— ibid., p 212

… but then we see this:

How do clients ensure that Entity validation occurs? And where does validation processing begin? One way places a validate() method on all Entities that require validation. … Any Entity can safely have its validate() method invoked. …

However, should Entities actually validate themselves? Having its own validate() method doesn’t mean the Entity itself performs validation. Yet, it does allow the Entity to determine what validates it, relieving clients from that concern:

public class Warble extends Entity {
    ...
    @Override
    public void validate(ValidationNotificationHandler aHandler) {
        (new WarbleValidator(this, aHandler)).validate();
    }
}

… The Entity needs to know nothing about how it is validated, only that it can be validated. The separate Validator subclass also allows the validation process to change at a different pace from the Entity and enables complex validations to be thoroughly tested.

— ibid., p 214-215

It seems like a small step after that to inject a fully-constructed validation object into the Entity at construction time, and have the validate() method call that, instead of creating a new validation object inside the Entity.
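
A minimal PHP sketch of that step, carrying over the book’s class names; the validator’s call signature here is an assumption for illustration:

<?php
class Warble
{
    private $validator;

    // The fully-constructed validator arrives at construction time,
    // instead of being created inside validate() itself.
    public function __construct(WarbleValidator $validator)
    {
        $this->validator = $validator;
    }

    public function validate(ValidationNotificationHandler $handler)
    {
        // The Entity still knows nothing about *how* it is validated.
        $this->validator->validate($this, $handler);
    }
}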

Choose Dependency Injection — If You Can

Some people say, “You don’t need to use dependency injection for everything. Sometimes dependency injection is not the best choice.”

It occurs to me that the people who say this are the ones who can’t use it for everything. They say “choose what’s best for your situation”, but their situation precludes the use of dependency injection in the first place.

Anyone who says “X is not always the best choice”, but does not have X as an available option, is being disingenuous. They are not choosing against X based on an examination of the tradeoffs involved. Instead, they are making a virtue out of necessity, then posing as virtuous for not having better choices available.

Dependency injection is, by default and until proven otherwise, the best choice — when you have that choice available to you.

If that choice is not available to you, if you cannot construct an object using any form of dependency injection (constructor injection, setter injection, etc.), then you need to consider if the code in question has been designed poorly.

Atlas.Orm 2.0 Is Now Stable

I am very happy to announce that Atlas, a data-mapper for your persistence layer in PHP, is now stable for production use! There are no changes, other than documentation updates, since the beta release two weeks ago.

You can get Atlas from Packagist via Composer by adding …

    "require": {
        "atlas/orm": "~2.0"
    }

… to your composer.json file.

The updated documentation site is at atlasphp.io (with both 1.x and 2.x documentation).

Submit issues and pull requests as you see fit!

A Few Right Ways, But Infinitely More Wrong Ways

A response to the saying: “There’s no one ‘right’ way to do things. There are different ways of doing something that are ‘right’. So stop criticizing my chosen way of doing things — you cannot prove that it is wrong.”

For any question, there is a certain number of right answers, but an infinite number of wrong ones.

Likewise, there may be more than one right way, but that number is small in comparison to the infinite number of wrong ways.

Each right way is ephemeral and contingent, and has its own tradeoffs.

Each right way is dependent on your current understanding as applied to your current circumstances.

As your understanding changes with experience, and as your circumstances change over time, the way that is thought to be right will also change.

Sometimes that means realizing that your earlier understanding was wrong.

The novice says: “My new idea cannot be wrong, because there is no one right way.”

The master asks: “Is it more likely that I have a new way that is better, or a new way that is worse?”

The novice demands “proof” that their way is wrong.

The master asks which ways are “better” and which are “worse”, and picks the best one for the situation.

Sometimes the best way is still “bad”, but better than all the others; the master knows this, and does not defend it as “right.”

Atlas.Orm 2.0.0-beta1 Released

I am very happy to announce that I have just released 2.0.0-beta1 of Atlas, a data mapper for your SQL persistence model. You can get it via Composer as atlas/orm. This new major version requires PHP 7.1 and uses typehints, but best of all, it is backwards compatible with existing 1.x generated Atlas classes!

I.

The original motivation for starting the 2.x series was an edge-case bug. Atlas lets you open and manage transactions across multiple connections simultaneously; different table classes can point to different database connections. Atlas also lets you save an entire Record and all of its related fields with a single call to the persist() Mapper or Transaction method.

However, in 1.x, a Transaction::persist() call does not work properly when related records are mapped to connections other than the main Record’s. This is because the Transaction begins on the main Record’s connection, but does not have access to the connections for related records, and so cannot track them.

Admittedly, this is an uncommon operation, but it ought to be supported properly. The only way to fix it was to break backwards compatibility on the Table and Transaction classes, both on their constructors and on their internal operations, by introducing a new ConnectionManager for them to use for connection and transaction management.

II.

As long as backwards compatibility breaks were going to happen anyway, that created the opportunity to upgrade the package to use PHP 7.1 and typehints. But even with that in mind …

  • You do not need to modify or regenerate any classes generated from 1.x (although if you have overridden class methods in custom classes, you may need to modify that code to add typehints).

  • Atlas 2.x continues to use Aura.Sql and Aura.SqlQuery 2.x, so you do not need to change any queries from Atlas 1.x.

  • You do not need to change any calls to AtlasContainer for setup.

So, the majority of existing code using Atlas 1.x should not have to change at all after upgrading to 2.x.

III.

There are some minor but breaking changes to the return values of some Atlas calls; these are a result of using typehints, especially nullable types.

First, the following methods now return null (instead of false) when they fail:

  • Atlas::fetchRecord()
  • Atlas::fetchRecordBy()
  • Mapper::fetchRecord()
  • Mapper::fetchRecordBy()
  • MapperSelect::fetchRecord()
  • RecordSet::getOneBy()
  • RecordSet::removeOneBy()
  • Table::fetchRow()
  • Table::updateRowPerform()
  • TableSelect::fetchOne()
  • TableSelect::fetchRow()

Checking for loosely false-ish return values will continue to work in 2.x as it did in 1.x, but if you were checking strictly for false you now need to check strictly for null – or switch to loose checking. (Note that values for a missing related record field are still false, not null. That is, a related field value of null still indicates “there was no attempt to fetch a related record,” while false still indicates “there was an attempt to fetch a related record, but it did not exist.”)
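
For example, a strict check from 1.x needs only the constant changed; the mapper class name here is hypothetical:

<?php
// Atlas 2.x: fetchRecord() returns null, not false, on failure.
$record = $atlas->fetchRecord(ThreadMapper::CLASS, $threadId);

if ($record === null) { // was `=== false` under 1.x
    // handle the missing record
}

// Loose checking continues to work unchanged in both versions:
if (! $record) {
    // handle the missing record
}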

Second, the following methods will now always return a RecordSet, even when no records are found. (Previously, they would return an empty array when no records were found.)

  • Atlas::fetchRecordSet()
  • Atlas::fetchRecordSetBy()
  • Mapper::fetchRecordSet()
  • Mapper::fetchRecordSetBy()
  • MapperSelect::fetchRecordSet()

Whereas previously you would check for an empty array or loosely false-ish value as the return value, you should now call isEmpty() on the returned RecordSet.
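
For example (again with a hypothetical mapper class):

<?php
$threads = $atlas->fetchRecordSet(ThreadMapper::CLASS, $threadIds);

// 1.x returned an empty array when nothing was found; 2.x returns an
// empty RecordSet instead, so ask the RecordSet directly:
if ($threads->isEmpty()) {
    // no records found
}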

That is the extent of user-facing backwards compatibility breaks.

IV.

There is one major new piece in 2.x: the table-level ConnectionManager. It is in charge of transaction management for table operations, and allows easy setting of read and write connections to be used for specific tables if desired.

It also allows for on-the-fly replacement of “read” connections with “write” connections. You can specify that this should…

  • never happen (the default)
  • always happen (which is useful in GET-after-POST situations when latency is high for master-slave replication), or
  • happen only when a table has started writing back to the database (which is useful for synchronizing reads with writes while in a transaction)

Other than that, you should not have to deal with the ConnectionManager directly, but it’s good to know what’s going on under the hood.
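
To give a feel for the configuration, here is a rough sketch; the method and constant names below are assumptions for illustration, not the confirmed API, so consult the package documentation before relying on them:

<?php
// Hypothetical sketch of choosing a read-to-write replacement policy.
$connectionManager->setReadFromWrite('NEVER');         // the default
$connectionManager->setReadFromWrite('ALWAYS');        // e.g. GET-after-POST
$connectionManager->setReadFromWrite('WHILE_WRITING'); // sync reads during a transaction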

V.

Although this is a major release, the user-facing backwards-compatibility changes are very limited, so it should be an easy migration from 1.x. There is very little new functionality, so I’m releasing this as a beta instead of an alpha. I have yet to update the official documentation site, but the in-package docs are fully up-to-date.

I can think of no reason to expect significant API changes between now and a stable release, which should arrive in the next couple of weeks if there are no substantial bug reports.

Thanks for reading, and I hope that Atlas can help you out on your next project!

Quality: Program vs Product

I.

Why is it that programmers and their employers have different attitudes toward the quality of a project? Thinking of myself as a programmer, I have sometimes formulated it like this:

  • The programmers who do the work are usually the ones who care more about “quality.” Why?

    • They have a reputation to maintain. Low quality for their kind of work is bad for reputation among their peers. (Note that their peers are not necessarily their employers.)

    • They understand they may be working on the same project later; higher quality means easier work later, although at the expense of (harder? more?) work now.

  • The employers who are paying for the work care much less about “quality.” Why?

    • The reputation of the payer is not dependent on how the work is done, only that the work is done, and in a way that can be presented to customers. Note that the customers are mostly not the programmer’s peers.

    • They have a desire to pay as little as possible in return for as much as possible. “Quality” generally is more costly (both in time and in finances) in earlier stages, when resources are generally lowest or least certain to be forthcoming.

    • As a corollary, since the people paying for the work are not doing the work, it is easier for them to dismiss concerns about “quality”. Resources conserved earlier (both time and money) mean greater resources available later.

These points are summed up well by Redditor JamesTheHaxor, who says: “Truth is, nobody cares except for us passionate programmers. We can judge a persons skill level based on their code quality. Most clients can’t. They judge a persons skill level by how fast they can get something done for the cheapest price that works to spec. There’s plenty of shoddy coders to fill that market. They always undercut me on freelance sites.”

II.

The problem is that the term “quality” means different things to different people. In the programmer/employer situation, there are two different definitions of “quality” at play (or two different things that are being paid attention to).

  • The programmer’s “quality” relates to what he sees and works with regularly and is responsible for over time (i.e., the code itself: the “program”).

  • The employer’s “quality” relates to what he and the customers see and work with regularly and are responsible for over time (i.e., what is produced by running the code: the “product”).

“Quality of program” is not the same thing as “quality of product.” They have different impacts at different times in the project, and have different levels of visibility to different people involved in the project. They do not exist independently of each other, and feed back on each other; change one kind of quality, and the other kind will likewise change.

I think this disconnect also applies to various non-software crafts and trades. You hear about carpenters, plumbers, painters, etc. complaining that they get undercut on prices by low-cost labor who don’t have the same level of quality. And yet the customers who choose the lower-cost option are satisfied with the quality level, given their resource constraints. The programmer-craftsman laments the low quality of the work, but the payer-customer just wants something fixed quickly at a low cost.

Dismissing quality concerns of either kind early on may cause breaks and stoppage when the product is more visible or closer to deadline, thus leading to greater stress and strain to get work done under increasing public scrutiny. The programmer blames the lack of code quality for the troubles, and the employer laments the programmer’s inability to work as quickly as he did earlier in the project.

What’s interesting to me is that the programmer has some idea about the product quality (he has to use the product in some fashion while building it), but the manager/employer/payer has almost no idea about the code quality (they are probably not writing any code). So it’s probably up to the programmer to understand the consequences of program quality in terms of product quality.

“Before” (not “Beyond”) Design Patterns

(N.b.: This has been languishing at the bottom of my blog heap for years; time to get it into the sun.)

The 2013 article Beyond Design Patterns takes the approach of reducing all design patterns to a single pattern: “Abstracting Communication Between ‘Components’.” The author concludes, in part:

Instead of focusing on design patterns, I would suggest focusing on understanding how communication between objects and components happens. How does an object “decide” what to do? How does it communicate that intention to other objects?

Are design patterns useful? Absolutely. But I’ll assert that once you understand OOP and object communication, the patterns will “fall out” of the code you write. They come from writing OOP.

This strikes me as misapplied reduction. Sure, it’s true that, at a lower level or scope, it might be fair to say that “the pattern is ‘communication.’” But it is also true that, at a somewhat higher level or scope, communication between “which things” in “what way” is a fair description. So the article might better be titled not “Beyond Design Patterns” but “Beneath Design Patterns” or “Before Design Patterns”. The concepts illustrated are not consequences of or improvements on design patterns; they are prior to design patterns.

The analogy that came to my mind was one of molecules and atoms. An imaginary article on chemistry titled “Beyond Molecules” might thus conclude …

Instead of focusing on molecules, I would suggest focusing on understanding how interaction between atoms happens. How does an atom “decide” what to do? How does it communicate that intention to other atoms?

Are molecules useful? Absolutely. But I’ll assert that once you understand atoms and atomic interaction, the molecules will “fall out” of the formulas you write.

… which is true enough for as far as it goes, but it is also revealed as a rather superficial and mundane observation when presented this way. Atoms are not “beyond” molecules.

So: molecules are a proper unit of understanding at their level or scope. Likewise, design patterns are a proper unit of understanding at their own level. Reducing them further does not show you anything “beyond” – it only shows you “because.”

A “Systems” Addendum To Semantic Versioning

tl;dr: When using semantic versioning, consider not only changes to the public API, but also changes to system requirements.


Semantic Versioning concentrates on the public API signature as the determiner of when to change version numbers. If the public API changes in a backwards-incompatible way, then you have to bump the major version number. When I upgrade a dependency to a minor or patch version, I am hypothetically assured that the upgrade will complete successfully, and that I will not have to change anything about my existing system or codebase.

However, I have come to think that while SemVer’s concentration on the public API is a necessary indicator of the upgradability without other changes, it is not a sufficient indicator. Here’s a real-world upgrade scenario:

  1. Package 1.0.0 is released, dependent on features available in Language 4.0.0.

  2. Package 1.1.0 is released, with a public API identical to 1.0.0, but is now dependent on new features available only in Language 4.1.0.

SemVer is honored here by both the Package and the Language. Neither has introduced backwards-incompatible public API changes: the Package API is unchanged, and the Language API adds new backwards-compatible features.

But now the Package internals depend on the new features of the Language. As such, it is not possible for the Package 1.0.0 consumers to upgrade to Package 1.1.0 without also upgrading to Language 4.1.0. If they try to upgrade, the system will break, even though SemVer indicates it should be safe to upgrade.
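
Concretely, if the Language is PHP, the only manifest change from Package 1.0.0 to 1.1.0 might be the platform requirement (the constraint shown is invented for illustration):

    "require": {
        "php": ">=7.1"
    }

… where 1.0.0 had required only `"php": ">=7.0"`. The Package’s own public API is untouched, but the system underneath it must change.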

It looks like there is more to backwards compatibility than just the public API.

II.

Some will argue that when the package requirements are properly specified in a package manifest, the package manager will prevent a break from occurring in the above scenario. The package manager, reading the manifest, will note that Language 4.1.0 is not installed, and refuse to upgrade Package from 1.0.0 to 1.1.0.

However, this is beside the point that I am trying to make. Yes, the package manager prevents breaking installations based on a manifest. But that prevention does not mean that the changes introduced in Package 1.1.0 are backwards-compatible with a system running 1.0.0. The 1.1.0 changes require a system modification because of a requirements incompatibility.

So it is true that the package manager provides a defense against breaks, but it is also true that the version number of the package does not indicate there is a backwards incompatibility with the existing system. It is the indication of incompatibility (among other things) that makes SemVer valuable.

III.

I opine that requiring a change in the public environment into which a package is installed is just as major an incompatibility as introducing a breaking change to the public API of the package. To cover that case, I offer the following as a draft addendum to the SemVer spec:

  • If the package consumer has to change a publicly-available system resource to upgrade a package, then the package upgrade is not backwards-compatible with the existing system, and the package SHOULD receive a major version bump.

Using “SHOULD” makes this rule somewhat less strict than the MUST of a major version bump when changing the package API. It’s possible that Language 4.0.0 is no longer supported or installed on any of the target systems, in which case it might be reasonable for a Package minor version to require a change to the Language version. But you do need a good technical or practical reason (not a marketing reason 😉) to avoid the major version bump when changing the public system requirements for the package.

(You may also wish to review Romantic Versioning for an additional take on Semantic Versioning.)