Regarding an earlier post about XSS testing, Chris Shiflett made a couple of good comments asking about the nature of "an ethical protocol for research" when it comes to testing for security. I want to address them, but it will take quite a long post. So you've been warned. :-)

Shiflett's comments were these:

July 8th, 2005 at 18:05

I've been observing some unfriendly behavior within the PHP community lately. Here is another example:

http://benramsey.com/2005/07/06/atlanta-php-july-meeting/#comments

While I don't agree that it's necessarily malicious to do some research without prior warning, I think that any research conducted should be benign, and I think the researcher has an obligation to disclose any discoveries. In the above link, research was conducted that uncovered some security vulnerabilities, but the maintainer of the target site is being taunted rather than informed.

...

July 8th, 2005 at 18:14

Paul, I'd also be curious to know what you believe would be an ethical protocol for research in general. For example, if I visit a site and try a benign XSS attack while there, and I discover that the site has proper filtering and escaping in place, have I done something unethical?

Another question would be whether the maintainer of a site has the right to refuse such tests in the case that advance notice is given. Is it just a notice that is necessary or a request for permission? As a user of an application, I am a likely victim of XSS, so I feel that I should have a right to know whether the application is vulnerable.

Anyway, I'm not trying to spark a debate but am genuinely curious to see your opinion. The ethical issues surrounding security research are topics that frequent my mind.

Other people, including Rasmus Lerdorf himself, had plenty to say as well; I'll try to address the full range of commentary, but if I miss someone, it's a simple oversight, not a snub.

To answer Shiflett's comments thoroughly, I did a little research of my own. I figured the best approach would be to relate network testing in general to web application testing in particular; by starting with general network testing, we can see how it applies to the subset of the network concerned with web applications. With that in mind, I spent about three hours searching Google and collating the results.

The Google results were enlightening but not comprehensive. My main searches were on the following keyword phrases:

  • network security scanning practices
  • network security risk assessment
  • vulnerability scanning policy
  • penetration testing web application
  • ethical hacking web applications

Definition of Terms

The most important thing I got from the searching was a good definition set for the kinds of testing (what Shiflett loosely calls "research" above). It appears that there are three general disciplines for this kind of thing. The first, "risk assessment," appears to be a way of determining, for insurance and continuity purposes, where the risks are in your organization (including the non-technical portions), figuring out how dangerous they are, and estimating what a breach will cost you. It seems to be largely non-technical by itself, so I'm going to ignore it for purposes of this discussion.

The other two disciplines or categories, though, are very technical, and as such they apply to our discussion. The clearest definitions come from http://www.jjbsec.com/ethical-hacking.html and are as follows:

Vulnerability assessment involves attempting to find all vulnerabilities in a network. It is considered "wide and shallow," in that you generally spend a great deal of time and resources checking many machines for many possible vulnerabilities.

...

Penetration testing more closely simulates an actual attack on your site. Considered "narrow and deep," it may focus on only one or a few vulnerable systems, but focuses much more time and resources on those systems.

Vulnerability assessment is what we traditionally think of as network scanning: port scans, checks for unpatched operating systems, and that kind of thing. Such scans are non-specific in nature and attempt to find a broad swath of vulnerabilities.
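
To make "wide and shallow" concrete, here is a minimal PHP sketch of the simplest kind of port check. The hostname is a hypothetical placeholder; per the framework below, a check like this is only ethical against machines you own or have approval to assess.

    <?php
    // Minimal "wide and shallow" check: probe a handful of well-known
    // ports on one host. Hypothetical hostname; only scan machines you
    // own or have approval to assess.
    $host  = 'host.example.com';
    $ports = array(21 => 'ftp', 22 => 'ssh', 25 => 'smtp', 80 => 'http', 443 => 'https');

    foreach ($ports as $port => $service) {
        $conn = @fsockopen($host, $port, $errno, $errstr, 2.0);
        if ($conn !== false) {
            echo "$host:$port ($service) is open\n";
            fclose($conn);
        } else {
            echo "$host:$port ($service) is closed or filtered\n";
        }
    }
    ?>

A real vulnerability assessment tool checks many machines and many more services than this, but the shape is the same: a broad sweep rather than a focused attack.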

Penetration testing, on the other hand, is very specific. I think it more accurately represents the kind of "research" Shiflett is talking about above; it is certainly the kind of testing that was leveled at YaWiki-based sites last week, which led to my earlier post. For these reasons, I will not use Shiflett's term "research" for the rest of this article; I will instead use the terms "vulnerability assessment" and "penetration testing."

Another term I came across was "ethical hacking" (as evidenced by the website URL for the above definitions). Clearly, there are other people out there attempting to define a framework of ethics for "hacking" or testing of computers, networks, and applications. It turns out that this made my further research much easier; there is even at least one "ethical hacking" certification test.

Finally, I discovered a wide range of network security policies, mostly from educational institutions. Two general themes were apparent: (1) if you are on their network, you agree to allow your system to be scanned; and (2) scans would be initiated with 24 hours' notice and would come from specified servers, unless there was an immediate, ongoing threat to the stability of the network itself. The reason given for advance notification was so that system administrators would know they were not under attack but were being subjected to a routine assessment.

(The policies themselves are among the sources available on request; see the postscript.)

Ethical Hacking

In addition to the three hours of Googling, I spent fifteen minutes at Borders, and those fifteen minutes were like striking gold. I found five obviously relevant books, and I'll quote two of them. They make it very clear what counts as "ethical" behavior with respect to vulnerability assessment and penetration testing (their shorthand for both is "hacking").

Hacking for Dummies states on page 15, "Approval for ethical hacking is essential." It restates and amplifies the point on page 29: "Getting approval for ethical hacking is critical."

The official course material from the "Ethical Hacking" certification, Exam 312-50 Preparation, states in module 1, page 11:

Their [ethical hackers'] job is to evaluate the security of targets of evaluation and update the organization regarding the vulnerabilities of the discovered, and appropriate recommendations to mitigate the same.

OK, so it's poorly worded, but it's pretty clear. For hacking to be considered within ethical bounds, you have to not only evaluate the target, but also tell the organization (1) what vulnerabilities you discovered, and (2) how to fix them.

The "Ethical Hacking" preparation materials further state in module 1 page 31, and I am paraphrasing here, that "a formal contract to evaluate the target is required."

Well, that pretty much sums it up. If you want to do vulnerability assessment or penetration testing, you need the approval of your target. If you don't have approval, it's not ethical.

Discussion of Consequences

Now that we have an accurate definition of the ethics of hacking, we can get back to the situation at hand, and talk about it in terms of penetration testing and vulnerability assessment.

Notification

If you are testing machines in your own organization, why would you need to notify the targets of your assessment? We're all on the same team, right? Well, let's look at the following two posts, each entered as a value in a comment field. One is a cross-site scripting penetration test from a friendly assessor; the other is the first in a series of attempts by an enemy to find weaknesses to exploit.

Post 1: <test_xss />
Post 2: <test_xss />

Again, you have received no notification of the testing event. Which one is merely the first in a series of attempts to exploit your server, and which is "benign"? There's simply no way to tell.

Attackers, of course, will give you no notification; that's why they're "bad guys." A good guy, or someone on your team who is assessing the security of your organization's applications, will have told you he was going to do it; that way, you don't see the post and immediately wonder what's going on, if you're under attack, if you need to check the logs for attempts on other services, and so on.

So the reason for notification is essentially a social reason; it's good manners, intended to make sure the target is at ease and does not wonder about the nature of the activity.

Thus, when Shiflett states,

While I don't agree that it's necessarily malicious to do some research without prior warning, I think that any research conducted should be benign, and I think the researcher has an obligation to disclose any discoveries.

I can only partially agree with him. While the activity is not necessarily malicious, there is no way for a responsible system administrator to view it as anything other than at least potentially malicious. Perhaps it is a "shot across the bow" indicating further, more serious attempts to compromise the system, or an artifact of other, simultaneous attacks. While the activity may in fact be completely benign, the target has no way of knowing for sure.

It is for these reasons that I believe it is absolutely necessary, from a standpoint of good manners if not strictly from ethics, to provide advance notification to targets of vulnerability assessment and penetration testing.

Approval

The stipulation on notification, of course, only applies to systems that are yours or within your organization's domain. If a system is not yours, simply sending notification that you are going to attempt a penetration test (however benign) does not clear you, ethically speaking. To remain within the bounds of ethics, you cannot test a system that is not yours without explicit approval. This applies even to the testing of public web applications.

I can hear the justifications already. "I want to help improve security." "I want to prevent future attacks." "I want to make sure that PHP is seen as a secure language." While these are all honorable goals, the intent does not matter; it's not your site, so you have to get approval if you want to remain ethical.

The justification that "it's a public site, anyone can do to it whatever they want," while true from a technological point of view, does not clear you ethically; you must have permission to attempt penetration testing, regardless of the public nature of the site. Similarly, the justification "but they're so poorly protected; if they were smarter, they'd do better, so it's OK" is not good enough either. That the target is fat, dumb, and happy does not give you ethical clearance to perform penetration testing or vulnerability assessment without the target's approval. (You could do it anyway; it's easy, but it's not ethical.)

Lucky for us, we're an open-source crowd, so the "approval" workaround is very easy. If you want to test application security, download the application, install it on your own site, and test that. Problem solved: you get to test the application on a target that has given his permission (you do give yourself permission, right?).
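
To make the workaround concrete, here is a minimal sketch of a benign XSS probe pointed at your own installation. The URL and the "comment" field name are hypothetical; substitute whatever your local copy of the application actually uses.

    <?php
    // Benign XSS probe against YOUR OWN installation: post a marker
    // payload to a comment form, then check whether it comes back
    // unescaped in the response. Hypothetical URL and field name.
    $target  = 'http://localhost/myapp/comment.php';
    $payload = '<script>alert("xss-probe")</script>';

    $ch = curl_init($target);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, array('comment' => $payload));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $body = curl_exec($ch);
    curl_close($ch);

    if (is_string($body) && strpos($body, $payload) !== false) {
        echo "VULNERABLE: the payload was reflected without escaping\n";
    } else {
        echo "OK: the payload was filtered, escaped, or not reflected\n";
    }
    ?>

Because the installation is yours, the probe needs no one else's permission, which is the whole point of the workaround.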

Now to apply this reasoning to Shiflett's comments:

Paul, I'd also be curious to know what you believe would be an ethical protocol for research in general. For example, if I visit a site and try a benign XSS attack while there, and I discover that the site has proper filtering and escaping in place, have I done something unethical?

Yes, by the standards noted above, it is unethical: it's not your site, and you do not have the owner's permission; that's it. It seems like such a simple thing; it's just a bit of text in a comment form, right? But you are performing a penetration test without permission, and given our earlier ethical framework, that is clearly outside ethical bounds. That you tried it and it failed means you won't get caught; it does not mean you have behaved ethically.
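
For reference, the "proper filtering and escaping" Shiflett mentions amounts to escaping user input before echoing it into HTML. A minimal sketch, again with a hypothetical "comment" field:

    <?php
    // Proper output escaping: convert HTML metacharacters so a probe
    // like <script>alert("xss-probe")</script> is rendered inert as
    // &lt;script&gt;alert(&quot;xss-probe&quot;)&lt;/script&gt;
    $comment = isset($_POST['comment']) ? $_POST['comment'] : '';
    echo htmlspecialchars($comment, ENT_QUOTES, 'UTF-8');
    ?>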

But we have a workaround: if he's using a piece of software that you can acquire yourself, you can install it on your own system and test that instead. Is it less convenient? Yes. Does it take more time? Yes. But if you are doing serious research, not just penetration testing, then you are ethically bound to honor his wishes and test only your own installation of that software.

Another question would be whether the maintainer of a site has the right to refuse such tests in the case that advance notice is given.

Again, given the ethical framework from above, he has every right to refuse. He also has every right to request a penetration test, and you may accept or refuse his request as you wish. It's not your site, so you must (ethically) respect his wishes. If his site is within your organization, then you have recourse to the organizational policy for such testing.

Is it just a notice that is necessary or a request for permission?

Given the ethical framework, permission is necessary; a simple notice is not sufficient if the system is not yours.

As a user of an application, I am a likely victim of XSS, so I feel that I should have a right to know whether the application is vulnerable.

Yes, you do have a right to know. Install the application on your own system, and test that. If your concern is "application testing" then this is perfectly reasonable. If your concern is "site testing" (not the application, but the site) then we're playing a different game, one that involves other people, and their permission is required if you wish to remain within ethical bounds.

Disclosure

Finally, anything you discover from your vulnerability assessments and penetration testing must be revealed to the target organization in a timely manner. Essentially, as soon as you know what the problems are, they should be told what the problems are ... and how to fix them, if you know how.

A General Approach

So the ethical framework is pretty simple, if annoying; the key question is, "Who owns the site?" If the site is not yours, you need permission to perform vulnerability assessment and penetration testing, including "benign XSS testing" and "security research" (or whatever other euphemism you may wish to coin). If the site is within your organization, then you need to follow your organization's policy for testing, which is almost certain to include a notify-then-wait clause. These points might make "research" less convenient for you, but you will know that you have behaved in a proper and ethical manner.

Of course, nothing can physically stop you from going ahead without permission; but if you do, your behavior will have passed from ethical into unethical. That's the thing about ethics: nobody can make you adhere to them; you have to make yourself adhere to them. They have to be their own reward. (And in extreme cases, not being ethical can lead to punishment; they say the last step in the security checklist is "and then you go to jail.") ;-)

Conclusion

The last thing Chris Shiflett said was:

Anyway, I'm not trying to spark a debate but am genuinely curious to see your opinion. The ethical issues surrounding security research are topics that frequent my mind.

They come to my mind more frequently of late, too. Thanks for asking; if you had not, I would not have been prompted to do this research. It's not exhaustive by any means; hell, I only spent three hours on Google, fifteen minutes at the bookstore, and a couple of hours writing up this essay. But it's a good, solid start.

And hell, let's have a debate, I seem to start one every time I open my damn mouth. ;-)

(p.s. Regarding my research -- if you want a list of sources, I can send the links on request. There's a lot of them, and I don't want to clutter the page with them here.)

UPDATE (2005-07-10): Chris Shiflett posts a limited response; there is more discussion in his comment section.

UPDATE (2005-07-11): The Web Application Security Consortium mailing list archives seem to support the "approval" framework; see this link. They all seem to be security professionals, and approach the discussion from a United States law point-of-view. Read the whole thread; very illuminating.
