Paul M. Jones

Don't listen to the crowd; they say "jump."

Quality: Program vs Product

I.

Why is it that programmers and their employers have different attitudes toward the quality of a project? Thinking of myself as a programmer, I have sometimes formulated it like this:

  • The programmers who do the work are usually the ones who care more about "quality." Why?

    • They have a reputation to maintain. Low-quality work is bad for their reputation among their peers. (Note that their peers are not necessarily their employers.)

    • They understand they may be working on the same project later; higher quality means easier work later, although at the expense of (harder? more?) work now.

  • The employers who are paying for the work care much less about "quality." Why?

    • The reputation of the payer is not dependent on how the work is done, only that the work is done, and in a way that can be presented to customers. Note that the customers are mostly not the programmer’s peers.

    • They have a desire to pay as little as possible in return for as much as possible. “Quality” generally is more costly (both in time and in finances) in earlier stages, when resources are generally lowest or least certain to be forthcoming.

    • As a corollary, since the people paying for the work are not doing the work, it is easier for them to dismiss concerns about “quality”. Resources conserved earlier (both in time and money) mean greater resources available later.

These points are summed up well by Redditor JamesTheHaxor, who says: “Truth is, nobody cares except for us passionate programmers. We can judge a persons skill level based on their code quality. Most clients can’t. They judge a persons skill level by how fast they can get something done for the cheapest price that works to spec. There’s plenty of shoddy coders to fill that market. They always undercut me on freelance sites.”

II.

The problem is that the term "quality" means different things to different people. In the programmer/employer situation, there are two different definitions of “quality” at play (or two different things that are being paid attention to).

  • The programmer’s “quality” relates to what he sees and works with regularly and is responsible for over time (i.e., the code itself: the “program”).

  • The employer’s “quality” relates to what he and the customers see and work with regularly and are responsible for over time (i.e., what is produced by running the code: the “product”).

“Quality of program” is not the same thing as “quality of product.” They have different impacts at different times in the project, and have different levels of visibility to different people involved in the project. They do not exist independently of each other, and feed back on each other; change one kind of quality, and the other kind will likewise change.

I think this disconnect also applies to various non-software crafts and trades. You hear about carpenters, plumbers, painters, etc. complaining that they get undercut on prices by low-cost labor who don’t have the same level of quality. And yet the customers who choose the lower-cost option are satisfied with the quality level, given their resource constraints. The programmer-craftsman laments the low quality of the work, but the payer-customer just wants something fixed quickly at a low cost.

Dismissing quality concerns of either kind early on may cause breaks and stoppage when the product is more visible or closer to deadline, thus leading to greater stress and strain to get work done under increasing public scrutiny. The programmer blames the lack of code quality for the troubles, and the employer laments the programmer’s inability to work as quickly as he did earlier in the project.

What’s interesting to me is that the programmer has some idea about the product quality (he has to use the product in some fashion while building it), but the manager/employer/payer has almost no idea about the code quality (they are probably not writing any code). So it’s probably up to the programmer to understand the consequences of program quality in terms of product quality.


So Much For All That Global Warming

The world has warmed more slowly than had been forecast by computer models, which were "on the hot side" and overstated the impact of emissions, a new study has found. Its projections suggest that the world has a better chance than previously claimed of meeting the goal set by the Paris agreement on climate change to limit warming to 1.5C above pre-industrial levels.

Source: We were wrong -- worst effects of climate change can be avoided, say experts | News | The Times & The Sunday Times


"Before" (not "Beyond") Design Patterns

(N.b.: This has been languishing at the bottom of my blog heap for years; time to get it into the sun.)

The 2013 article Beyond Design Patterns takes the approach of reducing all design patterns to a single pattern: “Abstracting Communication Between ‘Components’.” The author concludes, in part:

Instead of focusing on design patterns, I would suggest focusing on understanding how communication between objects and components happens. How does an object “decide” what to do? How does it communicate that intention to other objects?

Are design patterns useful? Absolutely. But I’ll assert that once you understand OOP and object communication, the patterns will “fall out” of the code you write. They come from writing OOP.

This strikes me as misapplied reduction. Sure, it’s true that, at a lower level or scope, it might be fair to say that “the pattern is ‘communication.’” But it is also true that, at a somewhat higher level or scope, it is fair to ask which things communicate, and in what way. So the article might better be titled not “Beyond Design Patterns” but “Beneath Design Patterns” or “Before Design Patterns”. The concepts illustrated are not consequences of or improvements on design patterns; they are prior to design patterns.
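
To make the distinction concrete, here is a minimal sketch (the class names are mine, not the article's): at the lower scope this is nothing but objects communicating, and only when you name which things communicate in what way does the arrangement become recognizable as, say, the Observer pattern.

    class Subject:
        """The 'which things': an object that others subscribe to."""

        def __init__(self):
            self._listeners = []

        def attach(self, listener):
            self._listeners.append(listener)

        def notify(self, event):
            # The 'what way': broadcast the event to every subscriber.
            for listener in self._listeners:
                listener.update(event)

    class Logger:
        """One subscriber among possibly many."""

        def update(self, event):
            print(f"saw event: {event}")

    subject = Subject()
    subject.attach(Logger())
    subject.notify("something happened")  # prints: saw event: something happened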

The analogy that came to my mind was one of molecules and atoms. An imaginary article on chemistry titled “Beyond Molecules” might thus conclude ...

Instead of focusing on molecules, I would suggest focusing on understanding how interaction between atoms happens. How does an atom “decide” what to do? How does it communicate that intention to other atoms?

Are molecules useful? Absolutely. But I’ll assert that once you understand atoms and atomic interaction, the molecules will “fall out” of the formulas you write.

... which is true enough as far as it goes, but it is also revealed as a rather superficial and mundane observation when presented this way. Atoms are not "beyond" molecules.

So: molecules are a proper unit of understanding at their level or scope. Likewise, design patterns are a proper unit of understanding at their own level. Reducing them further does not show you anything “beyond” – it only shows you “because.”


A "Systems" Addendum To Semantic Versioning

tl;dr: When using semantic versioning, consider not only changes to the public API, but also changes to system requirements.

I.

Semantic Versioning concentrates on the public API signature as the determiner of when to change version numbers. If the public API changes in a backwards-incompatible way, then you have to bump the major version number. When I upgrade a dependency to a minor or patch version, I am hypothetically assured that the upgrade will complete successfully, and that I will not have to change anything about my existing system or codebase.

However, I have come to think that while SemVer’s concentration on the public API is a necessary indicator of the upgradability without other changes, it is not a sufficient indicator. Here’s a real-world upgrade scenario:

  1. Package 1.0.0 is released, dependent on features available in Language 4.0.0.

  2. Package 1.1.0 is released, with a public API identical to 1.0.0, but is now dependent on new features available only in Language 4.1.0.

SemVer is honored here by both the Package and the Language. Neither has introduced backwards-incompatible public API changes: the Package API is unchanged, and the Language API adds new backwards-compatible features.

But now the Package internals depend on the new features of the Language. As such, it is not possible for the Package 1.0.0 consumers to upgrade to Package 1.1.0 without also upgrading to Language 4.1.0. If they try to upgrade, the system will break, even though SemVer indicates it should be safe to upgrade.
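
To make the scenario concrete, here is a minimal sketch in Python packaging terms, with the Python runtime standing in for "Language"; the package name and the version numbers are hypothetical:

    from setuptools import setup

    # Package 1.0.0 declared: python_requires=">=4.0"
    # Package 1.1.0 keeps the public API identical, but its internals
    # now use features introduced only in Language 4.1, so the declared
    # requirement tightens without any major version bump:
    setup(
        name="package",
        version="1.1.0",
        python_requires=">=4.1",  # consumers still on Language 4.0 cannot upgrade
    )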

It looks like there is more to backwards compatibility than just the public API.

II.

Some will argue that when the package requirements are properly specified in a package manifest, the package manager prevents a break from occurring in the above scenario. The package manager, reading the manifest, will note that Language 4.1.0 is not installed, and refuse to upgrade Package from 1.0.0 to 1.1.0.
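
That mechanism is simple enough to sketch; here is a toy version of the manifest check, not any real package manager's API:

    def can_upgrade(installed_language: tuple, required_language: tuple) -> bool:
        """Allow an upgrade only if the installed Language version
        satisfies the new package version's declared requirement."""
        return installed_language >= required_language

    # Language 4.0.0 is installed; Package 1.1.0 requires Language 4.1.0.
    print(can_upgrade((4, 0, 0), (4, 1, 0)))  # False: refuse the upgrade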

However, this is beside the point that I am trying to make. Yes, the package manager prevents breaking installations based on a manifest. But that prevention does not mean that the changes introduced in Package 1.1.0 are backwards-compatible with a system running 1.0.0. The 1.1.0 changes require a system modification because of a requirements incompatibility.

So it is true that the package manager provides a defense against breaks, but it is also true that the version number of the package does not indicate there is a backwards incompatibility with the existing system. It is the indication of incompatibility (among other things) that makes SemVer valuable.

III.

I opine that requiring a change in the public environment into which a package is installed is just as major an incompatibility as introducing a breaking change to the public API of the package. To cover that case, I offer the following as a draft addendum to the SemVer spec:

  • If the package consumer has to change a publicly-available system resource to upgrade a package, then the package upgrade is not backwards-compatible with the existing system, and the package SHOULD receive a major version bump.

Using “SHOULD” makes this rule somewhat less strict than the MUST of a major version bump when changing the package API. It’s possible that Language 4.0.0 is no longer supported or installed on any of the target systems, in which case it might be reasonable that a Package minor version could require a change to the Language version. But you do need to have a good technical or practical reason (not a marketing reason ;-) to avoid the major version bump when changing the public system requirements for the package.

(You may also wish to review Romantic Versioning for an additional take on Semantic Versioning.)


Hacking, Refactoring, Rewriting, and Technical Debt

(Not so much "lessons" here, as "observations and recollections.")

I.

From ESR, How To Learn Hacking:

  • Hacking is done on open source. Today, hacking skills are the individual micro-level of what is called “open source development” at the social macro-level. [2] A programmer working in the hacking style expects and readily uses peer review of source code by others to supplement and amplify his or her individual ability.
  • Hacking is lightweight and exploratory. Rigid procedures and elaborate a-priori specifications have no place in hacking; instead, the tendency is try-it-and-find-out with a rapid release tempo.
  • Hacking places a high value on modularity and reuse. In the hacking style, you try hard never to write a piece of code that can only be used once. You bias towards making general tools or libraries that can be specialized into what you want by freezing some arguments/variables or supplying a context.
  • Hacking favors scrap-and-rebuild over patch-and-extend. An essential part of hacking is ruthlessly throwing away code that has become overcomplicated or crufty, no matter how much time you have invested in it.

This strikes me as the opposite of, or at best orthogonal to, the practice of refactoring a large and long-existing codebase, especially that last. "Scrap and rebuild" is essentially "rewrite from scratch" and that is almost always a losing proposition in terms of time and budget.

But then, the description also says hacking is "lightweight, exploratory, modular" -- so perhaps it is possible to hack at components within a larger or critical system, and then refactor the system using that result. Cf. Hacking and Refactoring, also from ESR.
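
As an aside, the "freezing some arguments" that ESR mentions is essentially partial application; a minimal sketch in Python, with names of my own invention:

    from functools import partial

    def render(template: str, greeting: str, name: str) -> str:
        """A general tool: fill a template with a greeting and a name."""
        return template.format(greeting=greeting, name=name)

    # Specialize the general tool by freezing its first two arguments.
    greet = partial(render, "{greeting}, {name}!", "Hello")

    print(greet("world"))  # Hello, world!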

II.

I used to work for a consulting company with One Big Client (and a few smaller ones). This One Big Client generated a staggering amount of money; its business model was such that even a few minutes of downtime would result in large losses (technically "lost revenue" but the effect is the same).

The consultancy's director of development for the One Big Client didn't "believe" in technical debt. Or rather, he thought technical debt was a non-issue. He would write, or demand writing, some of the most horrific code, as long as it was written *quickly*, because the time constraints for the One Big Client were so overwhelming. There was no refactoring; he and his team would iterate fast, then wipe out the entire codebase and start all over again on a regular basis. (By "regular" I mean every 3-6 months.) Any technical debt that was incurred was tiny in comparison to the monetary returns -- although it did take a toll on the programmers themselves.

Given ESR above, that's clearly a "hacking" approach to a critical codebase. Also, it's clear that *sometimes* a full-on rewrite might be the right thing to do, provided the right context. But I'm betting the vast majority of developers (I'm talking "five nines" here) are not operating in the context of that One Big Client and that particular consultancy. These were "skilled developers knowingly writing low-quality code" -- not to be confused with "*un*skilled developers *un*knowingly writing low-quality code." (Cf. Mehdi Khalili: "On Bad Code.")


How Terrible Code Gets Written By Perfectly Sane People

From Christian Maioli Mackeprang, the following:

Legacy code can be nasty, but I’ve been programming for 15 years and only a couple of times had I seen something like this. The authors had created their own framework, and it was a perfect storm of anti-patterns. …

And yet, it was not the code’s dismal quality that piqued my interest and led me to write this article. What I discovered after some months working there, was that the authors were actually an experienced group of senior developers with good technical skills. What could lead a team of competent developers to produce and actually deliver something like this?

I don’t necessarily buy all of these, but they’re worth thinking about:

  • Giving excessive importance to estimates: “Your developers might choose to over-promise instead, and then skip important work such as thinking about architectural problems, or how to automate tasks, in order to meet an unrealistic deadline.”

  • Giving no importance to project knowledge: “If you’re in a large project and there are modules for which you have no expert, no-one to ask about, that’s a big red flag.”

  • Focusing on poor metrics such as “issues closed” or “commits per day”: “When a measure becomes a target, it ceases to be a good measure. (Goodhart’s law)”

  • Assuming that good process fixes bad people: “[Fix your hiring.] Talent makes up for any other inefficiency in your team.”

  • Ignoring proven practices such as code reviews and unit testing: “Refraining from using tools such as modern IDEs, version control and code inspection will set you back tremendously.”

  • Hiring developers with no “people” skills: “It doesn’t matter that the other guy might be ten thousand miles away. One Skype call can turn a long coding marathon into a five-minute fix.”

Conclusion? “What you can’t assume is that just because you’ve signed up to apply Agile or some other tool, that nothing else matters and things will sort themselves out.”



Should the race go to the swift? Or should there be no race?

The race is not to the swift or the battle to the strong, nor does food come to the wise or wealth to the brilliant or favor to the learned; but time and chance happen to them all.

http://biblehub.com/ecclesiastes/9-11.htm

Some read this, and think:

  • The race should be to the swift.
  • The battle should be to the strong.
  • The wise should receive food.
  • The brilliant should be rich.
  • The learned should draw favor.

But others think:

  • There should be no swift and no sluggish; there should be no race.
  • There should be no strong and no weak; there should be no battle.
  • There should be no wise and no foolish; food should be given to all.
  • There should be no brilliant and no dull; all should be rich.
  • There should be no learned and no ignorant; all should draw favor.

The first is a cry against the vicissitudes of life. The second is a cry against reality itself.


Are people with very high IQs generally happy?

When someone is excluded from a group, ousted or ostracized, it makes him or her unhappy. Loneliness and exclusion are the most common reasons for anomie - losing one's reason for life and the will to live - and for suicide.

And an overly high IQ is the most certain way to get excluded and ousted. The reason is the communication range: it makes you different, and you do not fit in.

The concept of communication range was established by Leta Hollingworth. It is +/- 2 standard deviations (roughly 30 points) up or down from one's own IQ. It denotes the range where meaningful interaction (communication, discussion, conversation, and socializing) is possible. If the IQ difference between two persons is more than 30 points, communication breaks down: the higher-IQ person will look like an incomprehensible nerd and the lower-IQ person like a moronic dullard, and they will not find anything in common.

+/- 30 points does not sound like much, but once the IQ is past 135, the downsides become apparent. When someone has a perfectly mediocre IQ (100, the Caucasian average), his communication range is from IQ 70 to IQ 130, which covers some 98% of the whole population. But when it is 135, the range is from 105 to 165, which is approximately 36% of the population. And it gets worse: if it is 162, your whole meaningful set of human interactions is restricted to Mensa-qualifying people only (2% of the whole population). Good luck finding friends, acquaintances, colleagues - or a spouse.

And it gets worse.

When the average IQ of a group is lower than the lower end of your communication range, the group will see you as a hostile outsider. They will do anything to bully you out of their presence. They will ostracize, excommunicate and oust you amongst themselves.

Source: Susanna Viljanen's answer to Are people with very high IQs generally happy? - Quora
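
The quoted percentages are easy to check, assuming the conventional model of IQ as normally distributed with mean 100 and standard deviation 15 (on which "roughly 30 points" is exactly two standard deviations). Under that model the 70-130 range covers about 95% of the population rather than 98%, but the other figures hold up:

    from math import erf, sqrt

    def iq_fraction(lo: float, hi: float, mean: float = 100.0, sd: float = 15.0) -> float:
        """Fraction of a normal(mean, sd) IQ distribution falling in [lo, hi]."""
        def cdf(x: float) -> float:
            return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))
        return cdf(hi) - cdf(lo)

    # Communication range: one's own IQ +/- 30 points.
    print(iq_fraction(70, 130))   # ~0.954 for IQ 100 (quoted as "some 98%")
    print(iq_fraction(105, 165))  # ~0.369 for IQ 135 (the quoted ~36%)
    print(iq_fraction(132, 192))  # ~0.016 for IQ 162 (roughly Mensa's top 2%)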


Original MVC Resources from Reenskaug

This is from Trygve Reenskaug: MVC: Xerox PARC 1978-79. Some quotes:

The essential purpose of MVC is to bridge the gap between the human user's mental model and the digital model that exists in the computer. The ideal MVC solution supports the user illusion of seeing and manipulating the domain information directly. The structure is useful if the user needs to see the same model element simultaneously in different contexts and/or from different viewpoints.

...

MVC was conceived as a general solution to the problem of users controlling a large and complex data set. The hardest part was to hit upon good names for the different architectural components. Model-View-Editor was the first set.

After long discussions, particularly with Adele Goldberg, we ended with the terms Model-View-Controller.

...

The MVC problem has more facets than I realized in 1979. I started working on a pattern language to disentangle the different aspects; the last draft was dated August 20, 2003. The plan was that it should be improved by a group of authors, not just the current single one. Unfortunately, the project died at this point.