PHP ssh2.sftp opendir/readdir fix

This bug (https://bugs.php.net/bug.php?id=73597) in the PECL ssh2 extension bit us yesterday, so this post is a public service announcement that will (hopefully) save you from writing your own workaround like I almost did.

Problem: PHP 5.6.28 (and apparently 7.0.13) introduced a security fix to URL parsing that caused the string interpolation of the $sftp resource handle to no longer be recognized as a valid URL. In turn, that causes opendir(), readdir(), etc. to fail when you use an $sftp resource in the path string, after an upgrade to one of those PHP versions.

Solution: Instead of using "ssh2.sftp://$sftp" as a stream path, convert $sftp to an integer like so: "ssh2.sftp://" . intval($sftp) . "/". Then it will work just fine.
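For illustration, here is a runnable sketch of the difference. It uses a plain stream resource as a stand-in for the $sftp resource, since ssh2_sftp() itself needs a live server; the interpolation behavior is the same for any resource.

```php
<?php
// Stand-in for the $sftp resource returned by ssh2_sftp().
$res = fopen('php://memory', 'r');

// Interpolating the resource yields "Resource id #N", which the
// post-fix URL parser rejects inside an ssh2.sftp:// path:
$interpolated = "ssh2.sftp://$res/";

// intval() yields just the numeric resource id, which parses fine:
$cast = "ssh2.sftp://" . intval($res) . "/";

echo $interpolated, PHP_EOL; // e.g. ssh2.sftp://Resource id #5/
echo $cast, PHP_EOL;         // e.g. ssh2.sftp://5/
```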

Thanks to the people who fixed the URL parsing security flaw, thanks to the people who wrote the PECL ssh2 extension, and thanks to the people who provided the fix.


Conserving On The Wrong Resource

(This is a blog post I’ve had in a “write-me!” folder for years; I am now publishing it in its unfinished form to get it off my mind.)

Programmers are acutely aware of their limited resources: CPU, memory, storage, requests-per-second, screen space, line length, and so on. If programmers are aware of a resource constraint, they conserve on it; sometimes they call this “optimizing.”

Are you aware of a loop? Then you try to “optimize” it, or conserve on it, e.g. by not calling count() in the loop condition.

Are you aware of the number of lines? You might try to conserve on the number of lines, perhaps by removing blank lines (especially in docblocks).

Are you aware of the size of an array, or the number of objects in the system, and so on? You try to conserve on those, to reduce their number.

But this is sometimes a trap, because you conserve or optimize on a low-value resource you are aware of, at the cost of not conserving on a high-value resource you are not aware of. For example, optimizing a for loop by not calling count() in its condition will have little impact if the loop is executing queries against an SQL database.
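A sketch of the trap (the $db connection in the commented-out portion is hypothetical):

```php
<?php
// Micro-optimization: hoist count() out of the loop condition,
// saving roughly one function call per iteration.
$items = range(1, 10000);

$n = count($items);
$sum = 0;
for ($i = 0; $i < $n; $i++) {
    $sum += $items[$i];
}

// But if each iteration does something expensive -- say, one query
// per iteration against a hypothetical $db connection ...
//
// for ($i = 0; $i < $n; $i++) {
//     $db->query('INSERT INTO log (value) VALUES (?)', [$items[$i]]);
// }
//
// ... then the count() savings are noise. Conserving on the
// high-value resource here means batching the queries, not
// hoisting count().
```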

Lesson: conserve on high-value resources that you may not be aware of, not low-value ones that are obvious but of little consequence.


The PHP 7 “Request” Extension

tl;dr: The new request extension provides server-side request and response objects for PHP 7. Use it as a replacement object for request superglobals and response functions. An equivalent PHP 5 version is available in userland as pmjones/request.


You’re tired of dealing with the $_GET, $_POST, etc. superglobals in your PHP 7 application. You wish $_FILES was easier to deal with. You’d prefer to wrap them all in an object to pass around to your class methods, so they’d be easier to test. And as long as they’re all in an object, it might be nice to have parsed representations of the various incoming headers. Preferably, you’d like for it to be read-only, so that the various libraries you use can’t modify superglobal state out from under you.

Likewise, seeing the “Cannot modify header information – headers already sent” warning is getting on your nerves. You’d like to get away from using header(), setcookie(), and the rest. You know they’re hard to inspect, and they’re hard to test. What would be great is to have all that response work wrapped in another object that you can pass around, and inspect or modify it before sending the response back to the client.

You could maybe adopt a framework, but why do that for your custom project? Just a pair of server-side request and response objects would make your life so much easier. Why can’t there be a set of internal PHP classes for that?

Well, now there is. You can install the request extension from John Boehr and myself to get ServerRequest and ServerResponse objects as if PHP itself provided them.

ServerRequest

After you install the extension, you can issue $request = new ServerRequest(), and then:

Instead of ...                          ... use:
--------------------------------------- ---------------------------------------
$_COOKIE                                $request->cookie
$_ENV                                   $request->env
$_GET                                   $request->get
$_FILES                                 $request->files
$_POST                                  $request->post
$_SERVER                                $request->server
$_SERVER['REQUEST_METHOD']              $request->method
$_SERVER['HTTP_HEADER_NAME']            $request->headers['header-name']
file_get_contents('php://input')        $request->content
$_SERVER['HTTP_CONTENT_LENGTH']         $request->contentLength
$_SERVER['HTTP_CONTENT_MD5']            $request->contentMd5
$_SERVER['PHP_AUTH_PW']                 $request->authPw
$_SERVER['PHP_AUTH_TYPE']               $request->authType
$_SERVER['PHP_AUTH_USER']               $request->authUser

Likewise:

Instead of parsing ...                  ... use:
--------------------------------------- ---------------------------------------
isset($_SERVER['key'])                  $request->server['key'] ?? 'default'
  ? $_SERVER['key']                     (good for all superglobals)
  : 'default';
$_FILES to look more like $_POST        $request->uploads
$_SERVER['HTTP_CONTENT_TYPE']           $request->contentType and
                                        $request->contentCharset
$_SERVER['HTTP_ACCEPT']                 $request->accept
$_SERVER['HTTP_ACCEPT_CHARSET']         $request->acceptCharset
$_SERVER['HTTP_ACCEPT_ENCODING']        $request->acceptEncoding
$_SERVER['HTTP_ACCEPT_LANGUAGE']        $request->acceptLanguage
$_SERVER['HTTP_X_HTTP_METHOD_OVERRIDE'] $request->method
$_SERVER['PHP_AUTH_DIGEST']             $request->authDigest
$_SERVER['HTTP_X_REQUESTED_WITH']       $request->xhr
  == 'XMLHttpRequest'

Those properties are all read-only, so there’s no chance of them being changed without you knowing. (There are some true immutables for application-related values as well; see the documentation.)

ServerResponse

For responses, you can issue $response = new ServerResponse(), and then:

Instead of ...                          ... use:
--------------------------------------- ---------------------------------------
header('Foo: bar', true);               $response->setHeader('Foo', 'bar');
header('Foo: bar', false);              $response->addHeader('Foo', 'bar');
setcookie('foo', 'bar');                $response->setCookie('foo', 'bar');
setrawcookie('foo', 'bar');             $response->setRawCookie('foo', 'bar');
echo $content;                          $response->setContent($content);

You can inspect the $response object, and then call $response->send() to
send it to the client.

Working with JSON?

// instead of ...
header('Content-Type: application/json');
echo json_encode($value);

// ... use:
$response->setContentJson($value);

Sending a $file for download?

// instead of ...
header('Content-Type: application/octet-stream');
header('Content-Transfer-Encoding: binary');
header(
    'Content-Disposition: attachment;filename="'
    . rawurlencode(basename($file)) . '"'
);
$fh = fopen($file, 'rb');
fpassthru($fh);

// use:
$fh = fopen($file, 'rb');
$response->setContentDownload($fh, basename($file));

Building a complex header? Pass an array instead of a string:

$response->setHeader('Cache-Control', [
    'public',
    'max-age' => '123',
    's-maxage' => '456',
    'no-cache',
]); // Cache-Control: public, max-age=123, s-maxage=456, no-cache

$response->setHeader('X-Whatever', [
    'foo',
    'bar' => [
        'baz' => 'dib',
        'zim',
        'gir' => 'irk',
    ],
    'qux' => 'quux',
]); // X-Whatever: foo, bar;baz=dib;zim;gir=irk, qux=quux

Find Out More

You can read the rest of the documentation at https://gitlab.com/pmjones/ext-request to discover more convenient functionality. And if you’re stuck on PHP 5.x for now, the extension has a userland version installable via Composer as pmjones/request.

Try out the request extension today, because a pair of server-side request and response objects will make your life a lot easier.


Definition of “Done” in a web project

– Source code meets our coding standards.

– High enough level of unit test coverage for routes, action methods and controllers.

– High enough level of unit test coverage for business logic and repositories.

– High enough level of automated UI and integration test coverage.

– Code has been peer reviewed.

– Code must be completely checked in to the source control system and the build and all the automated tests should be green.

– UI looks nice and works on different resolutions on major browsers and browser editions.

– UI fulfills the accessibility requirements.

– UI works with and without javascript enabled.

– End user help/documentation/tooltips are done.

– Any auditing/tracing code is added and the output is useful and readable.

– Security permission checks have been implemented and validated via automated tests.

– Automated database migration scripts are provided and tested.

– Sample data needed to test the feature is scripted, if required.

– Users have tested the feature and are happy with it.

I’ll nitpick and note that “MVC” is not an application architecture, but whatever. Source: Definition of Done in an MVC project


Avoiding Quasi-Immutable Objects in PHP

tl;dr: Immutability in PHP is most practical when the object properties are scalars or nulls. Using streams, objects, or arrays as properties makes it very difficult, sometimes impossible, to preserve immutability.


One of the tactics in Domain Driven Design is to use Value Objects. A Value Object has no identifier attached to it; only the combination of the values of its properties gives it any identification. If you change any of the properties in any way, the modification must return an entirely new instance of the Value Object.

This kind of behavior means the Value Object is “immutable.” That is, the particular instance is not allowed to change, though you can get back a new instance with modified values. The code for an immutable object looks something like this:

<?php
class ImmutableFoo
{
    protected $bar;

    public function __construct($bar)
    {
        $this->bar = $bar;
    }

    public function getBar()
    {
        return $this->bar;
    }

    public function withBar($newBar)
    {
        $clone = clone $this;
        $clone->bar = $newBar;
        return $clone;
    }
}
?>

(Note how $bar is accessible only through a method, not as a public property.)

When you create an ImmutableFoo instance, you cannot change the value of $bar after instantiation. Instead, you can only get back a new instance with the new value of $bar by calling withBar():

<?php
$foo = new ImmutableFoo('a');
$newFoo = $foo->withBar('b');

echo $foo->getBar(); // 'a'
echo $newFoo->getBar(); // 'b'
var_dump($foo === $newFoo); // (bool) false
?>

With this approach, you are guaranteed that one place in the code cannot change the $foo object at a distance from any other place in the code. Anything that ever gets that instance of $foo knows that its properties will always be the same no matter what.

The immutability approach can be powerful in Domain Driven Design and elsewhere. It works very easily in PHP with scalar values and nulls. That’s because PHP returns those by copy, not by reference.

However, enforcing immutability in PHP is difficult when the immutable object properties are non-scalar (i.e., when they are streams, objects, or arrays). With non-scalars, your object might seem immutable at first, but mutability reveals itself later. These objects will be “quasi-”, not truly, immutable.

Streams as Immutable Properties

If a stream or similar resource has been opened in a writable (or appendable) mode, and is used as an immutable property, it should be obvious that object immutability is not preserved. For example:

<?php
file_put_contents('/tmp/bar.txt', 'baz');

$foo = new ImmutableFoo(fopen('/tmp/bar.txt', 'r+'));
$bar = $foo->getBar();
fpassthru($bar); // 'baz'

rewind($bar);
fwrite($bar, 'dib');
rewind($bar);

fpassthru($foo->getBar()); // 'dib'
?>

As you can see, the effective property value has changed, meaning immutability has been compromised.

One way around this might be to make sure that immutable objects themselves check that stream resources are always-and-only in read-only mode. However, even that is not a certain solution, because the resource pointer might be moved by reading operations in different parts of the application code. In turn, that means reading from the stream may yield different results at different times, making the value appear mutable.

As such, it appears that only “read-only” streams can be used as immutable properties, and then only if the immutable object restores the stream, its pointers, and all of its meta-data to their initial state every time the stream is accessed.
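As a sketch of that restore-on-access idea, here is a hypothetical class (not from the extension or any library) that records the stream position at construction time and restores it on every read; it uses php://memory as a stand-in for a read-only stream:

```php
<?php
// Hypothetical: restore the stream pointer on every access, so that
// reads elsewhere in the application cannot change what callers see.
class ImmutableStreamFoo
{
    protected $bar;
    protected $pos;

    public function __construct($bar)
    {
        $this->bar = $bar;
        $this->pos = ftell($bar);
    }

    public function getBar()
    {
        // put the pointer back where it was at construction time
        fseek($this->bar, $this->pos);
        return $this->bar;
    }
}

$stream = fopen('php://memory', 'r+');
fwrite($stream, 'baz');
rewind($stream);

$foo = new ImmutableStreamFoo($stream);
$a = fread($foo->getBar(), 3); // reading moves the pointer ...
$b = fread($foo->getBar(), 3); // ... but getBar() restores it
echo $a, ' ', $b; // baz baz
```

Note this still does not restore other stream state (e.g. metadata), so it is a mitigation, not a guarantee.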

Objects as Immutable Properties

Because PHP returns objects as references, rather than as copies, using an object as a property value compromises the immutability of the parent object. For example:

$foo = new ImmutableFoo((object) ['baz' => 'dib']);
$bar = $foo->getBar();
echo $bar->baz; // 'dib'

$bar->baz = 'zim';
echo $foo->getBar()->baz; // 'zim'

As you can see, the value of $bar has changed in the $foo instance. Any other code using $foo will see those changes as well. This means immutability has not been preserved.

One way around this is to make sure that all objects used as immutable properties are themselves immutable.

Another way around this is to make sure that getter methods clone any object properties they return. However, it will have to be a recursively deep clone, covering all of the cloned object’s properties (and all of their properties, etc.). That’s to make sure that all object properties down the line are also cloned; otherwise, immutability is again compromised at some point.
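A sketch of that clone-on-read approach, as a hypothetical class; note that it only recurses into object properties, so arrays of objects would still leak mutability:

```php
<?php
// Hypothetical: deep-clone on the way in and on the way out, so
// callers never hold a reference to the internal object.
class ImmutableObjectFoo
{
    protected $bar;

    public function __construct($bar)
    {
        $this->bar = $this->deepClone($bar);
    }

    public function getBar()
    {
        return $this->deepClone($this->bar);
    }

    protected function deepClone($obj)
    {
        $clone = clone $obj;
        foreach (get_object_vars($clone) as $prop => $value) {
            if (is_object($value)) {
                $clone->$prop = $this->deepClone($value);
            }
        }
        return $clone;
    }
}

$foo = new ImmutableObjectFoo((object) ['baz' => 'dib']);
$bar = $foo->getBar();
$bar->baz = 'zim';           // modifies only the returned clone
echo $foo->getBar()->baz;    // still 'dib'
```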

Arrays as Immutable Properties

Unlike objects, PHP returns arrays as copies by default. However, if an immutable object property is an array, mutable objects in that array compromise the parent object’s immutability. For example:

$foo = new ImmutableFoo([
    0 => (object) ['baz' => 'dib'],
]);

$bar = $foo->getBar();
echo $bar[0]->baz; // 'dib'

$bar[0]->baz = 'zim';
echo $foo->getBar()[0]->baz; // 'zim'

Because the array holds an object, and because PHP returns objects by reference, the contents of the array have now changed. That means $foo has effectively changed as well. Again, immutability has not been preserved.

Likewise, if the array holds a reference to a stream resource, we see the problems described about streams above.

The only way around this is for the immutable object to recursively scan through array properties to make sure that they contain only immutable values. This is probably not practical in most situations, which means that arrays are probably not suitable as immutable values.

Settable Undefined Public Properties

Finally, PHP allows you to set values on undefined properties, as if they were public. This means it is possible to add mutable properties to an immutable object:

$foo = new ImmutableFoo('bar');

// there is no $zim property, so PHP
// creates it as if it were public

$foo->zim = 'gir';
echo $foo->zim; // 'gir'

$foo->zim = 'irk';
echo $foo->zim; // 'irk'

Immutability of the object is once again compromised. The only way around this is to implement __set() to prevent setting of undefined properties.

Further, it might be wise to implement __unset() to warn that properties cannot be unset.
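A minimal sketch of those guards, as a hypothetical variation on the ImmutableFoo class above:

```php
<?php
// Hypothetical: reject writes to (and unsets of) undefined
// properties so immutability cannot be bypassed from outside.
class StrictImmutableFoo
{
    protected $bar;

    public function __construct($bar)
    {
        $this->bar = $bar;
    }

    public function getBar()
    {
        return $this->bar;
    }

    public function __set($name, $value)
    {
        throw new RuntimeException(
            get_class($this) . " is immutable; cannot set \${$name}"
        );
    }

    public function __unset($name)
    {
        throw new RuntimeException(
            get_class($this) . " is immutable; cannot unset \${$name}"
        );
    }
}

$foo = new StrictImmutableFoo('bar');
$error = '';
try {
    $foo->zim = 'gir'; // triggers __set(), which throws
} catch (RuntimeException $e) {
    $error = $e->getMessage();
}
echo $error; // StrictImmutableFoo is immutable; cannot set $zim
```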

Conclusion

If you want to build a truly immutable object in PHP, it appears the best approach is the following:

  • Default to using only scalars and nulls as properties.
  • Avoid streams as properties; if a property must be a stream, make sure that it is read-only, and that its state is restored each time it is used.
  • Avoid objects as properties; if a property must be an object, make sure that object is itself immutable.
  • Especially avoid arrays as properties; use only with extreme caution and care.
  • Implement __set() to disallow setting of undefined properties.
  • Possibly implement __unset() to warn that the object is immutable.

Overall, it seems like immutability is easiest with only scalars and nulls. Anything else, and you have a lot more opportunity for error.

Remember, though, there’s nothing wrong with fully- or partially-mutable objects, as long as they are advertised as such. What you want to avoid are quasi-immutable objects: ones that advertise, but do not deliver, true immutability.

(For some further reading, check out At What Point Do Immutable Classes Become A Burden?)


The Fallacies of Enterprise Computing

The Fallacies of Enterprise Computing is good throughout. It’s hard to excerpt; here only some of the highlights:

  • After building IT systems for more than sixty years, one would think we as an industry would have learned that “newer is not always better”. Unfortunately, this is a highly youth-centric industry, and the young have this tendency to assume that anything new to them is also new to everybody else. And if it’s new, it’s exciting, and if it’s exciting, it must be good, right? And therefore, we must throw away all the old, and replace it with the new.
  • How is “the cloud” any different from “the mainframe”, albeit much, much faster and with much, much greater storage?
  • [B]y using relational database constraints, the database can act as an automatic enforcer of business rules, such as the one that requires that names be no longer than 40 characters. Any violation of that rule will result in an error from the database. Right here, right now, we have a violation of the “centralized business logic” rule.
  • despite the fact that many enterprise IT departments are building microservices, they then undo all that good work by then implicitly creating dependencies between the microservices with no mitigating strategy to deal with one or more of those microservices being down or out. This means that instead of explicit dependencies (which might force the department or developers to deal with the problem explicitly), developers will lose track of this possibility until it actually happens in Production—which usually doesn’t end well for anybody.
  • The enterprise is a constantly shifting, constantly changing environment. Just when you think you’ve finished something, the business experts come back with some new requirements or some changes to what you’ve done already. … [A]nything that gets built here should (dare I say “must”) be built with an eye towards constant-modification and incessant updates.
  • The cloud has nothing magical in it that makes things scale automagically, secure them, or even make them vastly more manageable than they were before. You can derive great benefits from the cloud, but in most cases you have to meet the cloud halfway—which then means that the vendor didn’t make the problem go away, they just re-cast the problem in terms that make it easier for them to sell you things.
  • No matter what the vendor/influencer tries to tell you, no matter how desirable it is to believe, there is no such thing as a “universal enterprise architecture”; not MVC, not n-tier, not client-server, not microservices, not REST, not containers, and not whatever-comes-next.

FIG Follies, Part 3

This is the third of three posts I intend to make regarding the condition and actions of the FIG, and what they reveal.

The Future

[By way of presenting some context: I was the driving force behind PSRs 1 and 2; I conceived and then led PSR-4; I was a primary author on some of the early bylaws, including the voting protocol and the sponsorship model. I am a founding member of the FIG, and one of the two longest continually-serving members (along with Nate Abele). My perspective on the FIG spans its lifetime of 7+ years. It is that perspective which informs these posts.]

Regarding the rivalrous visions for the FIG, I see only two ways forward.

The first is that somehow the tension between the two visions is permanently resolved. It is this ongoing implicit tension that is the cause of recurring conflict. There is no common vision between all FIG members, and this causes resentment and distrust; sharing a common vision will provide a level of shared values and views.

I think the only way to relax that tension in the group is for one of the rival visions to supersede the other in an unmistakable and permanent way. That means one set of vision-holders would have to explicitly renounce its own vision in favor of the other. This could be accomplished by voice (i.e., by stating so on the mailing list) or by exit (i.e., by leaving the FIG).

Even though that would leave the FIG (per se) intact, I opine that sort of capitulation is unlikely from either set of vision-holders. After such protracted contention, the holders of each vision seem unlikely to withdraw, and even more unlikely to adopt the competing vision for FIG. (As a defender against the introduction of the newer “grand” vision, I certainly don’t plan to abandon the “founding” vision voluntarily.)

The second is that the holders of each vision all surrender simultaneously, and disband the FIG.

Disband The FIG?

There is already some level of interest for this idea, both from the wider community and within FIG itself, especially in relation to the FIG 3.0 proposal. From the thread Alternative to FIG 3.0 – Is it time to call FIG complete? (you should read the whole thing) we have these points from Joe Ferguson in two different messages:

As someone who feels like FIG 3.0 is a step in the right direction, and can empathize with those against it, I’d like to offer an observation:

Maybe it’s time to call FIG complete. FIG accomplished it’s primary goals (Bringing frameworks together to build interoperable packages / standards).

My comment is that maybe its time PHP get a standards body, led by the people who want to see FIG 3.0 happen. Let your work in FIG stand on it’s own. PSRs aren’t going anywhere. Start a Standards Body Group that adopts the work you’ve done here and picks up all the currently in progress PSRs and continues working on them. Close FIG down since it’s work as a Framework Interoperability Group has been completed.

This would be a perfect time to start fresh, get your bearings, and sort your stuff out with all the lessons learned from FIG. I would rather see this than people oppose FIG 3.0 and it being shoehorned in.

If you take away absolutely nothing else from my comments: FIG 3.0 is such a shift that it warrants the consideration of a new organization with leadership based on the people doing the work to make things happen in FIG.

I feel a sort of resignation toward this idea. I don’t really enjoy it, but something like it has to happen, and it’s probably for the best.

It certainly would eliminate the tension between the competing visions for the FIG, as there would be no FIG to contest over.

Benefits

Let’s say, for a moment, that FIG dissolves itself, and the FIG 3.0 plan architected by Larry Garfield and Michael Cullum is realized as a new organization with a new name. What would be the benefits?

  • As a brand-new organization, it can define its vision and mission without in-group rivalry.

  • It can curate its membership, bringing in only the people and projects that adhere to its vision (and keeping out those who do not).

  • It can establish any organizational structure (hierarchical or otherwise) from the outset, without having to worry about precedent or prior expectations.

  • It can have any code of conduct it wants as part of its foundational structure.

  • Whatever negative baggage is perceived as being part of the FIG is dropped.

I’m sure there are other benefits as well.

Drawbacks

There is one major drawback that I’ve heard voiced regarding a “disband the FIG and start a new group” approach. It is that new standards groups have a hard time getting off the ground. As one example, the PHP-CDS project does not seem to have gained much traction.

My guess is that an organization that forms after the FIG disbands would not have that trouble, especially given that FIG would no longer exist.

Note that some member projects such as Composer, Drupal, PEAR, phpBB, and Zend Framework have shown explicit support for the FIG 3.0 re-organization plan. I imagine that those projects alone would grant legitimacy to any new non-FIG group in which they were members.

Furthermore, as the architects of the FIG 3.0 plan, Larry Garfield is the Drupal representative to FIG, and Michael Cullum (while not a voting member) is directly involved with phpBB. The new group is guaranteed to have at least those two members from the start.

I have heard other drawbacks as well, but they seem primarily administrative in nature.

Administrative Issues

What happens to the accepted PSRs?

Accepted PSRs remain in perpetuity as the true product of the FIG, for any and all to adopt or ignore as they see fit. They were built to be immutable, and they will live on for as long as anyone cares to refer to them. Any new group that forms after the FIG would be free to adopt, or ignore, the finished PSRs (though it would not be free to claim them as their own).

What happens to the in-process PSRs?

There are at least two options here.

  1. In-process PSRs might convert to *-interop projects, a la container-interop and async-interop. Indeed, Woody “Shadowhand” Gilk has had the foresight and initiative to create an http-interop group already.
  2. If the migration out of FIG produces a new group, e.g. a FIG 3.0 style organization, that other group might adopt them in some fashion, perhaps by making those PSR owners initial members in the new group.

What about new PSRs?

After disbanding, no new PSRs would be admitted in any state.

What about the FIG website, Twitter account, Github account, etc. ?

I think this is the toughest problem with disbanding. It will be necessary to prevent the misappropriation of the FIG or PSR names after the group disperses, and at the same time make sure the various artifacts of the FIG (i.e., the PSRs) continue to be publicly available.

To that end, I suggest appointing an archivist (perhaps more than one) to receive ownership of the Github, Twitter, Google Mail, etc. accounts, as well as the FIG domain names. The job of the archivist is to hold the FIG name and accounts in trust and guard them from ever being used again. The archivist(s) might also merge errata and insubstantial changes on existing accepted PSRs, but I expect the activity on those should be minimal.

I assert that the archivist should be some nominally neutral party, one who has no designs on the FIG name or brand, and who has not participated in the production of PSRs or bylaws.

Finally, I would suggest that no follow-on group be allowed to claim to be the “successor” to the FIG, or somehow “approved” by FIG, but that’s a difficult thing to prevent. Members exiting the FIG would have to be on their honor for that one.

Conclusion

There is a tension in the FIG between two rival visions: a “founding” vision (mostly implicit through the history of the group) and a “grand” vision (explicitly described in the FIG 3.0 proposal). This tension reveals itself through things like the vote to remove me involuntarily, as well as ongoing conflict over internal processes, the direction and audience of PSRs, and other bureaucratic maneuvers.

The only ways to relax that tension are for one vision to supersede the other permanently and explicitly, or for the FIG to disband so there is no organization to contend over. Whereas I feel the former is unlikely to occur, the latter becomes the better course.

Disbanding leaves the products of the FIG intact (i.e., the PSRs). It also frees the holders of the “grand” vision to form a group of their own in their own way, to achieve their own ends without a competing group in place, and without compelling the holders of the “founding” vision to accede to a vision they do not share. The administrative issues of disbanding are minor in comparison.


Fig Follies, Part 2

This is the second of three posts I intend to make regarding the condition and actions of the FIG, and what they reveal.

The Present

The complaint against me (and the subsequent show trial and vote) are only a symptom of an underlying cause. The true cause behind the complaint, as well as many other events, is that there is a rivalry of visions regarding the FIG.

One vision holds that the FIG should remain true to its originating mission: focus on member projects, find existing commonalities among them through research and discussion, and codify those commonalities as PSRs. It is a more bottom-up approach, and is represented mostly implicitly, in the minds and actions of those who hold it. (Some artifacts of this vision remain on the FIG website, but I realize now they are not explicit enough.) I will call this the “founding” vision.

Another vision holds that the FIG should change its mission and broaden its scope to the entirety of PHP land, and in doing so accept the role that some people feel it should have: that of an overarching PHP Standards Group for all PHP coders, member projects or not. It is a more top-down approach, and is represented explicitly by the FIG 3.0 proposal. I will call this the “grand” vision.

These two visions are mutually exclusive. They are rivalrous.

This rivalry of visions might not matter, if the FIG had not already gained a level of perceived authority to many people in PHP land. It has earned a level of legitimacy through wide adoption of PSRs 0, 1, 2, 3, and 4 (and to some extent 6 and 7). That authority and legitimacy are valuable commodities, hard to come by among PHP developers (who are notoriously independent-minded). That means the FIG is perceived as a high-value property, which makes it worth being rivalrous over.

Those holding the “founding” vision, under which the vast majority of accepted PSRs have been published, believe the high value of the FIG derives from that “founding” vision and mission. Those holding the “grand” vision wish to use that value to launch what is effectively a new organization, without any achievements of its own, and claim the value built under the “founding” vision as its own.

To me, as a holder of the “founding” vision, the “grand” vision is an expression of conceit, of hubris. I think the holders of the “grand” vision believe they are entitled somehow to capture the successes earned by the “founding” vision, and claim those successes as their own. They have not earned those successes through the vision they espouse.

Until the conflict between these visions has been resolved, the contention within the FIG will continue, because neither set of vision-holders wants to relinquish the FIG territory.

I see only two ways out of this dilemma. I will present them not tomorrow, but the day after.


FIG Follies, Part 1

This is the first of three posts I intend to make regarding the condition and actions of the FIG, and what they reveal.

The Past

The show trial and subsequent vote to remove me have concluded, and I remain: the complainants were defeated, 15 to 9.

Now that the vote is done, I can assert openly that this was a “clearing the decks” operation. It was intended (in large part) to remove the most-vocal opponent to the FIG 3.0 proposal by Larry Garfield and Michael Cullum, and to prepare the way for implementing the Contributor Covenant (or some other SJW-inspired code of conduct). I predicted that conversations about both would resume very soon after the vote no matter which way it went, and that looks to have been prescient.

The complainants, and their secretarial collaborator, wanted a vote (not mediation) from the outset. I guess they figured it would be a slam-dunk to have me removed. What they didn’t expect was that roughly half of the participants would be either against my removal, or against the complainants themselves.

So instead of a slam-dunk, they had actual resistance on their hands. That’s why the secretarial collaborator dragged it out past the 2-week point, so there could be some chance of rallying support for the “removal” side. Little support was raised that was not shortly pushed back against.

Then the complainants realized they had no options other than a vote, which they now thought they might lose. This is why they revived the idea of “alternative resolutions”. But they themselves presented no alternatives other than “shut up” and “go away”.

Even at the end, to keep their actions and their bias hidden, the secretaries suggested (to me personally) making the vote private, on authority they have not been granted.

Remember: the secretaries, in particular Michael Cullum, overstepped their bounds once again to enable this drama.

Even so, I must caution against reading too much into the results of the vote. The voters did not approve of me per se, so much as they disapproved of the complainants, the complaint itself, or the act of throwing someone out. It is not so much a vindication for me personally, as a repudiation of the complainants.

This is now all in the past, and a permanent part of the FIG. Tomorrow I will talk about the current state of the FIG.


Exporting Globals in PHP

I am currently modernizing a legacy PHP application for a client. (The codebase was written earlier this year, in fact; new code can be “legacy” from the outset.) The original developer pulled a dirty trick with global that I had not seen before, and I thought I had seen everything.

Legacy codebases often use global to import a variable into the local scope of a function. For example, they might drag in a database connection:

<?php
// define $db in a config file somewhere
$db = new DatabaseConnection(...);

// this function uses the $db connection via global
function fetch_user_by_id($id)
{
    global $db;
    return $db->fetchAssoc("SELECT * FROM users WHERE id = ?", $id);
}
?>

I see that kind of thing all the time in legacy PHP. However, what I have not seen before is a function exporting a global.

Take a look at the following code. If the $bar variable is not already defined in the global scope, PHP will define it in the global scope for you automatically when you call foo().

<?php
error_reporting(E_ALL);

function foo()
{
    global $bar;
    $bar++;
}

// $bar is not defined yet, so PHP will show an
// "undefined variable" notice
echo $bar . PHP_EOL;

// calling foo() defines $bar in the global scope,
// and increments it
foo();

// $bar is now available in the global scope, having
// been exported from function foo()
echo $bar . PHP_EOL;

The legacy developer did that because he wanted to keep the variable initialization outside of the global scope for some reason, even though he used the variable in the global scope elsewhere.

It is exceptionally difficult to track down where an exported global is coming from when refactoring a legacy application. If you must write legacy code using globals, initialize them in the global scope. Better yet, don’t use globals at all: pass values as function arguments, or use dependency injection techniques.
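To make that alternative concrete, here is a minimal sketch of the same user lookup without globals. The DatabaseConnection class and fetchAssoc() method are illustrative stand-ins (a stub is defined here just so the sketch is self-contained); the point is that the connection is either passed as an argument or injected once via the constructor, so its origin is always visible at the call site.

```php
<?php
// Illustrative stand-in for whatever connection class the
// codebase actually uses; it just echoes the id back.
class DatabaseConnection
{
    public function fetchAssoc($sql, $id)
    {
        // stand-in for a real query, just for this sketch
        return ['id' => $id, 'name' => 'example'];
    }
}

// Option 1: pass the connection as a function argument.
function fetch_user_by_id(DatabaseConnection $db, $id)
{
    return $db->fetchAssoc("SELECT * FROM users WHERE id = ?", $id);
}

// Option 2: inject the connection once, via the constructor,
// instead of reaching into the global scope on every call.
class UserGateway
{
    private $db;

    public function __construct(DatabaseConnection $db)
    {
        $this->db = $db;
    }

    public function fetchById($id)
    {
        return $this->db->fetchAssoc("SELECT * FROM users WHERE id = ?", $id);
    }
}

$db = new DatabaseConnection();
$gateway = new UserGateway($db);
echo $gateway->fetchById(1)['id'] . PHP_EOL; // prints 1
```

Either way, anyone refactoring later can see exactly where the connection comes from, instead of hunting through the codebase for the function that exported it.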
