YaWiki 0.21 beta released

This is a security-fix release; all users are strongly encouraged to upgrade to the new version. You can get it from yawiki.com. The change notes are:

* Security Fix: In the default template set, added a paranoid number of htmlspecialchars() calls to help prevent cross-site scripting attacks; it should only matter in the one specific template, but you never know. (A sketch of the escaping pattern follows these change notes.)

* Schema Change: Added a column to yawiki_areas. Run the “docs/MIGRATE_020_021” SQL code against your database.

* Added file “changes.php” for quick change listings (thanks Del!)

* Area administrators can now clear page locks via the area_pages.php script (thanks Del!)

* The top-level navigation elements are now always populated, even for pages not on the AreaMap

* Added file “referrals.php” to show external referrals
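
For those wondering what the security fix looks like in practice, the pattern is simply to pass every dynamic value through htmlspecialchars() at output time. Here is a minimal sketch; the variable names and template fragment are illustrative, not YaWiki’s actual template code:

    <?php
    // Illustrative pattern only, not YaWiki's actual template code.
    // Escape every user-influenced value at output time so that any
    // injected markup (<script> tags, event handlers, etc.) renders inert.

    $page_title = isset($_GET['page']) ? $_GET['page'] : 'HomePage';

    // ENT_QUOTES escapes single and double quotes as well, which matters
    // when the value ends up inside an HTML attribute.
    $esc_title = htmlspecialchars($page_title, ENT_QUOTES);
    ?>
    <title><?php echo $esc_title; ?></title>
    <h1><?php echo $esc_title; ?></h1>

Escaping at output time, rather than at input time, is why a “paranoid number” of calls is the right approach: each template takes responsibility for making its own output safe.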


And now for a few words spoken more from anger and frustration than from anything else:

I understand that there are bad guys (“black hats”) out there who want to hijack sites and/or prove their m@d l33t ski11z at picking apart other people’s software. Black hats don’t give notice in advance that they are “testing” or “probing” a live site. Black hats don’t give notice after-the-fact, either, unless it’s for monetary gain.

Recently, some YaWiki-based sites, specifically those with open comments enabled, have come under attack by someone testing for cross-site scripting vulnerabilities. I’m pretty sure this person thinks he is a good guy, or a white hat, because most of the testing consists of variations on <test_xss> strings. However, he hasn’t notified any of the site owners in advance that he is testing their sites, and he certainly hasn’t notified *me* of any possible flaws in the software. This places him squarely in the “black hat” category. White hats give advance notice; even better, white hats only test their own systems, not those belonging to other people (unless invited to do so).

The kicker is that I have reason to believe this person is a well-known PHP developer (at least in certain circles). If it is in fact this person, his behavior is at best profoundly unprofessional, and at worst unethical; he should be ostracized from the community until he apologizes for his actions.

Update: It’s not who I thought, thank goodness; it would have been quite a blow. However, I have one other suspect; I hope it’s not that person either, because it would be an even bigger blow. Regardless, the attacker should at least contact me and let me know what he’s found.

Update: I think Pierre-Alain has somewhat missed my point in his blog entry about this. My contention is that, if you probe a site (that is not yours) in this way, you’re not part of the solution, you’re part of the problem. Black hats are under no obligation to notify the subjects of their “experiments”, which is part of why they’re bad guys. White hats obligate themselves to a higher standard; that’s part of why they’re good guys. Simple courtesy among community members is a good goal to aim for, and telling people what you’re doing is part of that courtesy.


9 thoughts on “YaWiki 0.21 beta released”

  1. This “well-known PHP developer” is known by all (serious) PHP developers, actually. You’ve no idea what you’re talking about, I guess.

  2. Dammit, I’d be curious to hear your thoughts on who this person is; I think I know as well.

    They attacked my old Cerebral Cortex wiki; the new one was never touched. It also seems that none of the “attacks” were successful.

    I love YaWiki, but I hate upgrading, as I’ve heavily customized it (though the customizations are not live yet; I’ll mail you when they are, and you’ll like it 🙂).

    – Davey

  3. I’ve been observing some unfriendly behavior within the PHP community lately. Here is another example:

    http://benramsey.com/2005/07/06/atlanta-php-july-meeting/#comments

    While I don’t agree that it’s necessarily malicious to do some research without prior warning, I think that any research conducted should be benign, and I think the researcher has an obligation to disclose any discoveries. In the above link, research was conducted that uncovered some security vulnerabilities, but the maintainer of the target site is being taunted rather than informed.

  4. Paul, I’d also be curious to know what you believe would be an ethical protocol for research in general. For example, if I visit a site and try a benign XSS attack while there, and I discover that the site has proper filtering and escaping in place, have I done something unethical?

    Another question is whether the maintainer of a site has the right to refuse such tests when advance notice is given. Is notice alone sufficient, or is a request for permission necessary? As a user of an application, I am a likely victim of XSS, so I feel that I should have a right to know whether the application is vulnerable.

    Anyway, I’m not trying to spark a debate; I’m genuinely curious to hear your opinion. The ethical issues surrounding security research are frequently on my mind.

  5. “Benign attack”: it sounds like one of those double-speak terms the Pentagon comes up with.

    Look at it this way: let’s say the owner of a bank forgot to lock the front door after closing. Does his/her oversight mean it’s ethical for me to enter that bank during the night, even if I don’t take anything?

    Is it ethical to walk around town after hours rattling bank doors just because I can?

    What would the police say while they are hauling me down to lockup?

    Failing at an unethical action does not negate the unethical-ness (so to speak) of the attempt. That’s why we have attempted murder, etc.

    Don’t f&%k with people or their stuff (regardless of intention) unless you have permission first.

  6. Chris, it wasn’t a taunt. It was just a badly worded poke to get Ben to contact me about problems on that site.

    In Paul’s case I didn’t touch his site. I scanned twiki.pear.php.net, had no idea that Paul was involved, and contacted the PEAR folks. There are also multiple scanners out in the wild now, and I have seen at least two cases of people pretending to be me in blog comments, so be a bit careful about attributing things to me that aren’t necessarily so. For example, the second comment with my name on it on Ben’s site was not me.

    My scans are completely benign. The defacement on the twiki site was not done by my scanner. I go to great lengths to not follow links that actually write to the backend if at all possible, and I do try to contact the people behind the sites as quickly as I can. The whole point of this is to raise some awareness and get people to think more about input filtering.

  7. C Drozdowski, do you understand what XSS is? Equating a non-destructive check for XSS holes with entering a bank and roaming around doesn’t really work. At most you could compare it to a port scan, where you check to see which ports are open on a server, but even that is a bit more severe, since the simple act of hitting a port could be damaging to a server. With an XSS scan you simply send a regular request containing something like a <script> tag to the server and see what the server sends back to you. If you get the tag back unfiltered, chances are good that malicious user input could do something nasty, either directly or, much more likely, indirectly through some sort of spoofing attack where you trick users into visiting a special link that embeds malicious tags. (A sketch of this check appears at the end of this comment.)

    So, the benign XSS check is the very first step in figuring out how to attack a server, but in and of itself I wouldn’t call it an attack.

    There are a number of automated tools out there that scan the web every day. Search engines have their crawlers, which scan your site without permission. Some of them include link checkers and can inform you if you have broken links. Companies like Netcraft poke your server for its web server version and server modules. Getting the software versions installed on a server is also one of the first steps needed in order to attack it, which, in my opinion, is not very different from an XSS scan. And I don’t think Netcraft informs webmasters if it finds a vulnerable version of the web server or web server modules, and they definitely don’t ask for permission before scanning your site.

    Overall I agree with Chris that this is an interesting question of what is fair game and what isn’t. I think it has to boil down to intent: compare a bad guy scanning the web and then publishing a list of vulnerable servers based on the server versions with Netcraft scanning the web in order to provide useful research data (which they sell, by the way). Likewise, compare scanning for XSS holes for research purposes, to learn how to improve something like PHP to do a better job of automatic filtering, with scanning for XSS and then posting or selling lists of vulnerable sites. I think most would agree that the Netcraft version scan is OK, and it follows that a benign XSS scan where the results are used responsibly is OK as well.
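
    Here is a minimal sketch of the reflection check described above; the URL, parameter name, and marker string are all invented for illustration:

        <?php
        // Minimal sketch of a benign reflection check; the URL,
        // parameter name, and marker string are invented for illustration.

        $marker = '<xss_probe_' . mt_rand() . '>';
        $url    = 'http://example.com/wiki.php?page=' . urlencode($marker);

        // Send one ordinary GET request with the marker in a query parameter.
        $body = file_get_contents($url);

        if (strpos($body, $marker) !== false) {
            // Marker came back with its angle brackets intact: the parameter
            // is echoed without escaping, so the page likely has an XSS hole.
            echo "unescaped reflection found\n";
        } elseif (strpos($body, htmlspecialchars($marker)) !== false) {
            // Marker came back entity-encoded; output filtering is in place.
            echo "output is being escaped\n";
        } else {
            echo "marker not reflected at all\n";
        }
        ?>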

  8. Rasmus, I am glad to hear that I was mistaken. 🙂

    Actually, although I did not assume that those posts were from you, I did assume that they were from the same person. That’s why I interpreted it as someone taunting the ATLPHP guys.

    C Drozdowski, I agree with Rasmus that your analogy doesn’t apply very well at all. In fact, it reminds me of the analogies that Slashdot folks like to come up with. 🙂

  9. “Look at it this way: let’s say the owner of a bank forgot to lock the front door after closing. Does his/her oversight mean it’s ethical for me to enter that bank during the night, even if I don’t take anything?

    Is it ethical to walk around town after hours rattling bank doors just because I can?”

    Look at it THIS way: I lend my friend my lawnmower. There have been a number of lawnmower thefts in our neighborhood, recently. I ask my friend to make sure that my lawnmower is sufficiently secured; he assures me that it’s locked in his shed. During my evening walk, I notice that his shed door seems slightly ajar. Is it ethical for me to walk up and take a closer look, to be sure that MY lawnmower is safe?

    The real question is: should we be allowed to check if OUR OWN DATA (session/cookies/etc.) is safe, or should we be forced to blindly trust a particular software’s developers/maintainers? Note, before you answer, that the scan in this case was not malicious and it DID turn up a vulnerability.

    As far as Chris/Rasmus’ conversation about procedure goes…
    Rasmus: does your scanner request and obey robots.txt? I think it most certainly should (a rough sketch of such a check follows below). I know it would reduce the number of pages you could scan, but it would give content publishers a way to explicitly refuse scanning. Let’s face it: very few people would actually implement this. Plus, I suspect that anyone you’re actually interested in helping would receive the reports with utmost thanks.

    S
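
    A rough sketch of the robots.txt courtesy check suggested above; the user-agent token “xss-scanner” and the host/path are invented for illustration, and a real implementation would need to handle robots.txt grouping rules more carefully:

        <?php
        // Rough sketch of a robots.txt courtesy check; the user-agent
        // token "xss-scanner" and the host/path below are illustrative.

        function scan_allowed($host, $path)
        {
            $lines = @file('http://' . $host . '/robots.txt');
            if ($lines === false) {
                return true; // no robots.txt to obey
            }
            $applies = false;
            foreach ($lines as $line) {
                $line = trim(preg_replace('/#.*$/', '', $line)); // strip comments
                if (preg_match('/^User-agent:\s*(.+)$/i', $line, $m)) {
                    $agent   = trim($m[1]);
                    $applies = ($agent == '*' || $agent == 'xss-scanner');
                } elseif ($applies && preg_match('/^Disallow:\s*(.*)$/i', $line, $m)) {
                    $deny = trim($m[1]);
                    // An empty Disallow allows everything; otherwise match by prefix.
                    if ($deny != '' && strpos($path, $deny) === 0) {
                        return false;
                    }
                }
            }
            return true;
        }

        // Skip any page the site has explicitly opted out of.
        if (scan_allowed('example.com', '/wiki/index.php')) {
            // ... run the benign probe against this page ...
        }
        ?>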
