Ethics and Security

Regarding an earlier post about XSS testing, Chris Shiflett made a couple of good comments asking about the nature of “an ethical protocol for research” when it comes to testing for security. I want to address them, but it will take quite a long post. So you’ve been warned. 🙂

Shiflett’s comments were these:

July 8th, 2005 at 18:05

I’ve been observing some unfriendly behavior within the PHP community lately. Here is another example:

http://benramsey.com/2005/07/06/atlanta-php-july-meeting/#comments

While I don’t agree that it’s necessarily malicious to do some research without prior warning, I think that any research conducted should be benign, and I think the researcher has an obligation to disclose any discoveries. In the above link, research was conducted that uncovered some security vulnerabilities, but the maintainer of the target site is being taunted rather than informed.

July 8th, 2005 at 18:14

Paul, I’d also be curious to know what you believe would be an ethical protocol for research in general. For example, if I visit a site and try a benign XSS attack while there, and I discover that the site has proper filtering and escaping in place, have I done something unethical?

Another question would be whether the maintainer of a site has the right to refuse such tests in the case that advance notice is given. Is it just a notice that is necessary or a request for permission? As a user of an application, I am a likely victim of XSS, so I feel that I should have a right to know whether the application is vulnerable.

Anyway, I’m not trying to spark a debate but am genuinely curious to see your opinion. The ethical issues surrounding security research are topics that frequent my mind.

Other people, including Rasmus Lerdorf himself, had plenty to say as well; I’ll try to address the full range of commentary, but if I miss someone, it’s a simple oversight, not a snub.

To answer Shiflett’s comments thoroughly, I did a little research of my own. I figured the best way to go would be to relate network testing in general to web application testing in particular; by starting with general network testing, we can see how it applies to the subset of the network concerned with web applications. With that in mind, I spent about three hours searching Google and collating the results.

The Google results were enlightening but not comprehensive. My main searches were on the following keyword phrases:

  • network security scanning practices
  • network security risk assessment
  • vulnerability scanning policy
  • penetration testing web application
  • ethical hacking web applications

Definition of Terms

The most important thing I got from it was a good definition set for the kinds of testing involved (what Shiflett loosely calls “research” above). It appears that there are three general disciplines for this kind of thing. The first is “risk assessment,” which appears to be a way of determining, for insurance and continuity purposes, where the risks are in your organization (including non-technical portions), how dangerous they are, and what their breach will cost you. It seems to be largely non-technical by itself, so I’m going to ignore it for purposes of this discussion.

The other two disciplines or categories, though, are very technical, and as such they apply to our discussion. The clearest definitions come from http://www.jjbsec.com/ethical-hacking.html and are as follows:

Vulnerability assessment involves attempting to find all vulnerabilities in a network. It is considered “wide and shallow,” in that you generally spend a great deal of time and resources checking many machines for many possible vulnerabilities.

Penetration testing more closely simulates an actual attack on your site. Considered “narrow and deep,” it may focus on one or a few vulnerable systems, but focuses much more time and resources on those systems.

Vulnerability assessment is what we traditionally think of as network scans: port scanning, checking to see if patches have been applied to operating systems, and that kind of thing. They are non-specific in nature and attempt to find a broad swath of vulnerabilities.
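To make “wide and shallow” concrete, here is a toy sketch of my own (not from any of the sources above) of the kind of single-port check that a real scanner automates across many machines and thousands of ports. The host and port list are placeholders; naturally, you would point this only at a machine you own or have approval to scan.

    <?php
    // Toy sketch of a "wide and shallow" check: probe a handful of
    // common ports on one host. Real scanners cover thousands of ports
    // and known vulnerability signatures across many machines.
    $host  = 'localhost'; // a host you own or have approval to scan
    $ports = array(21 => 'ftp', 22 => 'ssh', 25 => 'smtp', 80 => 'http');

    foreach ($ports as $port => $service) {
        $conn = @fsockopen($host, $port, $errno, $errstr, 1);
        if ($conn) {
            echo "$host:$port ($service) is open\n";
            fclose($conn);
        } else {
            echo "$host:$port ($service) is closed or filtered\n";
        }
    }
    ?>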

Penetration testing, on the other hand, is very specific. I think this more accurately represents the kind of “research” that Shiflett is talking about above; it certainly is the kind of testing that was levied against YaWiki-based sites last week, which led to my earlier posting. For these reasons, I will not use Shiflett’s term “research” for the rest of this article; I will instead use the terms “vulnerability assessment” and “penetration testing.”

Another term I came across was “ethical hacking” (as evidenced by the website URL for the above definitions). Clearly, there are other people out there attempting to define a framework of ethics for “hacking” or testing of computers, networks, and applications. It turns out that this made my further research much easier; there is even at least one “ethical hacking” certification test.

Finally, I discovered a wide range of network security policies, mostly from educational institutions. Two general themes were apparent: (1) if you are on their network, you agree to allow your system to be scanned; and (2) scans would be initiated with 24 hours notice, to come from specified servers, unless there was an immediate ongoing threat to the stability of the network itself. The reason given for advance notification was so that system administrators would know they were not under attack, but were being subject to a routine assessment.

You can see some of those policies in my list of sources, available on request (see the postscript).

Ethical Hacking

In addition to the three hours of Googling, I spent fifteen minutes at Borders, and that time was like uncovering gold. I found five obviously relevant books, and I’ll quote two. They make it very clear what “ethical” behavior is with respect to vulnerability assessment and penetration testing (their shorthand for the phrase is “hacking”).

Hacking for Dummies, page 15, states “Approval for ethical hacking is essential.” It restates and amplifies the point on page 29: “Getting approval for ethical hacking is critical.”

The official course material for the “Ethical Hacking” certification (Exam 312-50 preparation) states in module 1, page 11:

Their [ethical hackers’] job is to evaluate the security of targets of evaluation and update the organization regarding the vulnerabilities of the discovered, and appropriate recommendations to mitigate the same.

OK, so it’s poorly worded, but it’s pretty clear. For hacking to be considered as being within ethical bounds, you have to not only evaluate the target, but tell the organization (1) what vulnerabilities you discovered, and (2) how to fix them.

The “Ethical Hacking” preparation materials further state in module 1 page 31, and I am paraphrasing here, that “a formal contract to evaluate the target is required.”

Well, that pretty much sums it up. If you want to do vulnerability assessments or penetration testing, you need the approval of your target. If you don’t have approval, it’s not ethical.

Discussion of Consequences

Now that we have an accurate definition of the ethics of hacking, we can get back to the situation at hand, and talk about it in terms of penetration testing and vulnerability assessment.

Notification

If you are testing machines in your own organization, why would you need to notify the targets of your assessment? We’re all on the same team, right? Well, let’s take a look at the following example. One of these is a cross-site scripting penetration test from a friendly assessor, and the other is the first in a series of attempts to find weaknesses for exploitation by an enemy. They’re entered as values in a comment field.

Post 1: <test_xss />
Post 2: <test_xss />

Again, you have received no notification for the testing event. Which one is merely the first of a series of attempts to exploit your server, and which is “benign”? There’s simply no way to tell.

Attackers, of course, will give you no notification; that’s why they’re “bad guys.” A good guy, or someone on your team who is assessing the security of your organization’s applications, will have told you he was going to do it; that way, you don’t see the post and immediately wonder what’s going on: whether you’re under attack, whether you need to check the logs for attempts on other services, and so on.

So the reason for notification is essentially a social reason; it’s good manners, intended to make sure the target is at ease and does not wonder about the nature of the activity.
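To put a finer point on it, here is a trivial, purely illustrative sketch of my own: from the server’s side, the friendly probe and the hostile probe are byte-for-byte identical, so intent simply does not travel with the request.

    <?php
    // Purely illustrative: the comment data a handler receives from a
    // friendly assessor and from an attacker sending the same probe.
    // The bytes are identical; only the (invisible) intent differs.
    $from_assessor = array('comment' => '<test_xss />');
    $from_attacker = array('comment' => '<test_xss />');

    var_dump($from_assessor === $from_attacker); // bool(true)
    ?>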

Thus, when Shiflett states,

While I don’t agree that it’s necessarily malicious to do some research without prior warning, I think that any research conducted should be benign, and I think the researcher has an obligation to disclose any discoveries.

I can only partially agree with him. While it is not necessarily malicious, there is no way for a responsible system administrator to view the activity as anything other than at least potentially malicious. Perhaps it is a “shot across the bow” indicating possible further, and more serious, attempts to compromise the system, or an artifact of other or simultaneous attacks. While the activity may in fact be completely benign, the target has no way of knowing for sure.

It is for these reasons that I believe it is absolutely necessary, from a standpoint of good manners if not strictly from ethics, to provide advance notification to targets of vulnerability assessment and penetration testing.

Approval

The stipulation on notification, of course, applies only to systems that are yours or within your organization’s domain. If a system is not yours, simply sending notification that you are going to attempt a penetration test (however benign) does not clear you, ethically speaking. To remain within the bounds of ethics, you cannot test a system that is not yours without explicit approval. This applies even to testing of public web applications.

I can hear the justifications already. “I want to help improve security.” “I want to prevent future attacks.” “I want to make sure that PHP is seen as a secure language.” While these are all honorable goals, the intent does not matter; it’s not your site, so you have to get approval if you want to remain ethical.

The justification that “it’s a public site, anyone can do to it whatever they want”, while true from the point of view of technology, does not clear you ethically; you must have permission to attempt penetration testing, regardless of the public nature of the site. Similarly, the justification “but they’re so poorly protected, if they were smarter they’d do better, so it’s OK” is not good enough either. That the target is fat, dumb, and happy does not give you ethical clearance to perform penetration testing or vulnerability assessment without the target’s approval. (You could do it anyway; it’s easy, but it’s not ethical.)

Lucky for us, we’re an open-source crowd, so the “approval” workaround is very easy. If you want to test application security, download the application, install it on your own site, and test that. Problem solved: you get to test the application on a target that has given his permission (you do give yourself permission, right?).
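Once the application is running on your own box, a benign probe takes only a few lines. Here is a rough sketch of mine; the localhost URL and the “comment” field name are hypothetical stand-ins for whatever form the application actually presents.

    <?php
    // Rough sketch: POST a benign probe to *your own* installation and
    // see whether it comes back unescaped. The URL and field name are
    // hypothetical; substitute your application's real form handler.
    $probe   = '<test_xss />';
    $context = stream_context_create(array('http' => array(
        'method'  => 'POST',
        'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
        'content' => http_build_query(array('comment' => $probe)),
    )));

    $response = file_get_contents('http://localhost/myapp/comment.php', false, $context);
    if ($response === false) {
        die("Request failed; is the local installation running?\n");
    }

    if (strpos($response, $probe) !== false) {
        echo "Probe echoed back unescaped: likely XSS hole.\n";
    } else {
        echo "Probe not echoed verbatim: filtering or escaping may be in place.\n";
    }
    ?>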

Now to apply this reasoning to Shiflett’s comments:

Paul, I’d also be curious to know what you believe would be an ethical protocol for research in general. For example, if I visit a site and try a benign XSS attack while there, and I discover that the site has proper filtering and escaping in place, have I done something unethical?

Yes, it is unethical by the standards noted above: it’s not your site, and you do not have the owner’s permission; that’s it. It seems like such a simple thing; it’s just a bit of text in a comment form, right? But you are doing it as a penetration test without his permission, and given our earlier ethical framework, it’s clearly outside ethical bounds. That you tried it, and it failed, means you won’t get caught; it does not mean you have behaved ethically.

But we have a workaround: if he’s using a piece of software that you can acquire yourself, you can install it on your own system and test that instead. Is it less convenient? Yes. Does it take more time? Yes. But if you are doing serious research, not just penetration testing, then you are ethically bound to honor his wishes and test only your own installation of that software.

Another question would be whether the maintainer of a site has the right to refuse such tests in the case that advance notice is given.

Again, given the ethical framework from above, he has every right to refuse. He also has every right to request a penetration test, and you may accept or refuse his request as you wish. It’s not your site, so you must (ethically) respect his wishes. If his site is within your organization, then you have recourse to the organizational policy for such testing.

Is it just a notice that is necessary or a request for permission?

Given the ethical framework, permission is necessary; a simple notice is not sufficient if the system is not yours.

As a user of an application, I am a likely victim of XSS, so I feel that I should have a right to know whether the application is vulnerable.

Yes, you do have a right to know. Install the application on your own system, and test that. If your concern is “application testing” then this is perfectly reasonable. If your concern is “site testing” (not the application, but the site) then we’re playing a different game, one that involves other people, and their permission is required if you wish to remain within ethical bounds.

Disclosure

Finally, anything you discover from your vulnerability assessments and penetration testing must be revealed to the target organization in a timely manner. Essentially, as soon as you know what the problems are, they should be told what the problems are … and how to fix them, if you know how.
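In the XSS cases discussed here, the “how to fix it” portion of a disclosure is usually short. A minimal sketch of the standard advice, escaping user-supplied data on output:

    <?php
    // Minimal sketch of the usual XSS fix you might include in a
    // disclosure report: escape user-supplied data before output.
    $comment = $_POST['comment'];

    // Dangerous: echoes raw user input into the page.
    // echo $comment;

    // Safer: convert HTML metacharacters to entities on output.
    echo htmlspecialchars($comment, ENT_QUOTES);
    ?>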

A General Approach

So the ethical framework is pretty easy, if annoying; the key is, “Who owns the site?” If the site is not yours, you need permission to perform vulnerability assessments and penetration testing, including “benign XSS testing” and “security research” (or whatever other euphemism you may wish to coin). If the site is within your organization, then you need to follow your organizational policy for testing, and that is almost certain to include a notify-then-wait clause. These points might make “research” less convenient for you, but you will have the knowledge that you have behaved in a proper and ethical manner.

Of course, there is nothing that can physically stop you from going ahead without permission; but if you do, your behavior will have passed from ethical into unethical. That’s the thing about ethics: nobody can make you adhere to them; you have to make yourself adhere to them. They have to be their own reward. (And in extreme cases, not being ethical can lead to punishment; they say the last step in the security checklist is “and then you go to jail.” 😉)

Conclusion

The last thing Chris Shiflett said was:

Anyway, I’m not trying to spark a debate but am genuinely curious to see your opinion. The ethical issues surrounding security research are topics that frequent my mind.

They come to my mind more frequently of late, too. Thanks for asking; if you had not, I would not have been prompted to do this research. It’s not exhaustive by any means; hell, I only spent 3 hours on Google, 15 minutes at the bookstore, and a couple of hours writing up this essay. But it’s a good solid start.

And hell, let’s have a debate, I seem to start one every time I open my damn mouth. 😉

(p.s. Regarding my research — if you want a list of sources, I can send the links on request. There’s a lot of them, and I don’t want to clutter the page with them here.)

UPDATE (2005-07-10): Chris Shiflett posts a limited response; there is more discussion in his comment section.

UPDATE (2005-07-11): The Web Application Security Consortium mailing list archives seem to support the “approval” framework; see this link. They all seem to be security professionals, and approach the discussion from a United States law point-of-view. Read the whole thread; very illuminating.


32 thoughts on “Ethics and Security”

  1. I think the main flaw in this argument is that sticking a tag in a form isn’t penetration. I will often do a couple of extremely simple non-automated tests when I visit a site just to see if I should trust the site with my user data. For example, before entering any personal data I check to see if the site is using SSL. Most people look for the little lock symbol, but you really should also check that the form action URL you are about to submit to is also SSL, and you want to make sure you don’t get redirected, so I will often do a test with none of my personal data just to make sure the form does the right thing. Am I hacking the site? Should I notify the site owner before I do this that I would like to check to see if my personal data is being handled correctly?

    As part of this I have started typing “>foobar (quote, greater-than, foobar) and also O’Henry into forms just to see how it handles that. I can’t count the number of sites that fail that basic test, and while I do try to inform most of them, sometimes I just move on and take my business elsewhere. I do feel I have the right to do that level of sanity checking before I use the services, and I really don’t feel obligated to inform every site. Everyone who has a choice between different things uses a set of criteria to make a decision, and should be free to do so without an obligation to inform every option that didn’t meet all the criteria for whatever reason.

  2. Rasmus, I see what you mean when it comes to personal issues. I think that particular case straddles the line: you’re aiming to protect your personal data, and I understand that, and certainly you don’t want to expose yourself to someone else’s problems. There is a necessary tension between protecting yourself and not appearing to harm others.

    At the same time, if you were doing that on a regular basis even when it didn’t concern your own personal information, I would find that to fall into the “vulnerability assessment” or “penetration testing” category.

    Regardless, if you discover an issue, then yes, you should inform the site owner, even if only after-the-fact. You would expect someone to notify you of a problem discovered, however accidentally, with your own sites; as such, you should notify others of problems you discover on purpose.

  3. If the problems were the exception instead of the norm, then I probably would be more diligent about informing the site owners, or in the case of checking applications, the application authors. But when 9 out of 10 things you look at have problems, it really starts to be a time suck to get into lengthy discussions with each group or site admin, not to mention the people that react badly or threaten legal action for doing nothing more than typing the 5 characters, “>foo, in the search box on the front page of their site. I always inform people or companies I know, but I tend to be much more wary about getting into these sorts of discussions with unknown parties, preferring instead to just stay clear of their services or software.

  4. Hi, Rasmus — you said,

    If the problems were the exception instead of the norm, then I probably would be more diligent about informing the site owners, or in the case of checking applications, the application authors. But when 9 out of 10 things you look at have problems

    Are you trying to sign up for services (with personal information) at that many sites on a regular basis? If so, well, you’re more prolific than I am at creating accounts.

    But if you are doing it as a test of their security, unrelated to personal information, without their permission, and without the courtesy of after-the-fact notice, then given the (very abbreviated) research I’ve done, it’s not ethical. (Getting into a discussion with them, well, that may not be necessary … but if you can take the time to test them, you can take the time to email them.)

    What are you doing that requires you to check that many sites, which are not your own, on a regular basis? Perhaps there is some circumstance that my essay doesn’t cover.

  5. And to be very clear, I am not an ethics expert. I did a few hours of research and wrote up what I found; if there are other sources out there that do a better job of explaining the issues, and have more and better examples, I am (1) eager to hear about them, and (2) happy to be proved wrong (or wrongheaded) according to them. 🙂

  6. I poke a lot of software, so I download it and look through it. Or if it isn’t downloadable I will poke their demo site, if they have one, since as far as I am concerned a demo site is up exactly for the purpose of demoing the software so you can decide if it meets your needs both feature-wise and security-wise. I also poke sites I deal with directly, or on request from friends and family who ask me if it is safe to use a given site, I will do a couple of quick checks just to make sure they are not being conned, or that I won’t have to remotely remove a bunch of spyware from their machines over the phone. And in a few cases I poke sites of people I know who I am pretty sure will appreciate the feedback as I am sure a few people reading this can attest to. Hopefully people realize I am not one of the bad guys.

  7. Hi, Rasmus — point by point, answers given relative to the research above:

    “I poke a lot of software, so I download it and look through it.” This appears to be OK; you’re doing it on your own system.

    “Or if it isn’t downloadable I will poke their demo site, if they have one, since as far as I am concerned a demo site is up exactly for the purpose of demoing the software so you can decide if it meets your needs both feature-wise and security-wise.” I agree that if it’s a demo site specifically for the purposes of testing and evaluation, then it appears approval is given implicitly.

    “I also poke sites I deal with directly, or on request from friends and family who ask me if it is safe to use a given site, I will do a couple of quick checks just to make sure they are not being conned” … This one skirts the line a little; if it’s your personal data, or personal data under your care, I can understand it, and I think the ethical issue of protecting yourself trumps the ethical issue of approval.

    “or I will have to remotely remove a bunch of spyware from their machines over the phone.” Tell them to get Macintoshes. 😉

    “And in a few cases I poke sites of people I know who I am pretty sure will appreciate the feedback as I am sure a few people reading this can attest to.” And if you have their approval, or if you have a running agreement with them, that would also appear to be just fine.

    Now I gotta ask: do you ever test, or have you ever tested, sites just to see if they’re secure, under circumstances that you have not yet described? For example, a site that is not yours, not owned by someone you personally know, and not on behalf of protecting personal data? I take that back; it’s none of my business. If you want to bring it up, cool, but it’s not my place to ask. Sorry to overstep the bounds of good form.

  8. Personally I think ethical hacking and so on is for the most part an attempt to lump more people into the “evil hacker” category.

    When it comes to security testing I believe that there are primarily two groups of people. First we have those who want an honest answer about whether their code/site/app is vulnerable or not, and if it is, how and why, so it can be solved. The other group, which unfortunately is far more populous, simply wants a quick-fix solution and a “hacker-safe” stamp of approval that is ultimately worthless, unless you are trying to mislead your customers and lull them into a false sense of security.

    The second group, clearly aware of its half-assed approach to security, tries to come up with nifty rules and ‘ethics’ to make it more complex for researchers and journalists and even educated consumers to identify and bring light to the problem(s). By preventing problem disclosure they make it seem like their products are “secure”, while the reality of the situation is quite different. In effect it is the same as putting laws in place to discourage whistle-blowing.

    Putting limits and restrictions on scanners (people & scripts) is not only silly but stupid; a person seeking to truly do evil is sure as hell not going to care about them. And if, because of some rule, part of the code was not scanned, you won’t know about the problem until it is too late. By that time you are already doing damage control and undoubtedly cursing the day you declined a complete friendly scan, or however you wish to call it.

    When companies hire people to do REAL security audits, most people in the company are not aware of this happening. This is done specifically so that the natural environment can be truly replicated, thus simulating a real attack as closely as possible. If the admins, for example, are aware of a penetration test going on, they may pay more attention to logs than they would during normal times and may spot the attack that otherwise would slip by.

  9. Hi, Ilia — Some rebuttals.

    “The second group clearly aware of their half-assed approach to security tries to come up with nifty rules and ‘ethics’ to try to make it more complex for researchers…” I hardly think that getting approval is that difficult of a goal, especially when it comes to open-source software.

    “Putting limits and restrictions on scanners (people & scripts) is not only silly is stupid, a person seeking to truly do evil is sure as hell not going to care about them.” Ah, but a person seeking truly to do good *is* going to care about them.

    “When companies hire people to do REAL security audits, most people in the company are not aware of this happening.” When companies do this, it’s on *their own machines.* Not someone else’s.

    I understand that we all want to improve the security of other people’s sites. But they’re not ours. Why is the approval requirement such a hard pill to swallow? I get the feeling that readers are confusing the two points. Again: performing security assessments on your own machines is ethical and prudent and wise; doing so on someone else’s machines, without notification or approval, is (according to my limited research) unethical.

    And if you (or anyone else) can find a serious security professional, one who does security assessment for a living, someone like a SANS (http://sans.org) instructor, who states that these “nifty rules and ‘ethics’” are in place primarily to allow security problems to linger, then I’ll be happy to hear from him. Or a book from a serious security author who cites references. Your opinions are valuable, but having more facts on the side of “ethics are a misguided fantasy” would make them more valuable.

  10. “I understand that we all want to improve the security of other people’s sites. But they’re not ours. Why is the approval requirement such a hard pill to swallow?”

    I think it’s all about convenience.

    – It’s inconvenient to go through the trouble of requesting permission and waiting for a response. Why wait for all of that, when throwing a couple of XSS probes into a form and submitting it can be done __right now__?

    – What if a person requests permission, gets it, finds a security hole, reports it … then what? What if the site owner/app developer attempts to rope the volunteer auditor into more pro bono work? Wouldn’t that be annoying?

    It’s just easier to post a benign attack and see what happens, THEN decide what to do.

    Of course, convenience usually *is not* a good measure of appropriate behavior. It’s easier to drive through McDonald’s for dinner, but it’s *better* to cook something fresh at home. It’s easier to throw in a couple of lines of XSS tire-kicking … but that doesn’t make it the right thing to do.

    I realize that most programmers are lazy by nature, and that taking the steps to perform ethical testing (pre-approved on an external site, or download/install/test locally) is just too much of a burden. That’s why only one person in this discussion (Chris S) is an actual security auditor for a living.

    If you’re concerned enough to consider unethical testing, yet NOT concerned enough to invest the time in ethical testing, follow that lazy impulse and just do nothing. Pop off a one-liner email requesting a demo/test installation of the app, or just go find another tool/site/product to use.

    Don’t mistake my opinion for not being interested in, or grateful for, volunteer external audits — on the contrary, one of the reasons I do open source work is to GET that kind of external feedback from people who are using the tools I write.

    All I’d ask is that such helpful contributions of a fresh set of eyeballs be given on a non-production site. If I want a white hat probing my live stuff, I’ll hire one.

  11. Hi, Clay — you have captured the essence of the argument quite nicely. To quote you,

    “…convenience usually *is not* a good measure of appropriate behavior.” Exactly right; it’s convenient for bad and good alike to just go ahead and do a test. The good guy takes it on himself to adhere to a higher standard.

    “I realize that most programmers are lazy by nature … If you’re concerned enough to consider unethical testing, yet NOT concerned enough to invest the time in ethical testing, follow that lazy impulse and just do nothing.” Also exactly right.

    “…one of the reasons I do open source work is to GET that kind of external feedback from people who are using the tools I write.” Same here.

    Well said, sir. 🙂

  12. On further consideration, it appears that one of the defining questions is, “How can the target of the testing distinguish between a friendly test and an enemy test?” Any question of ethics must be able to answer this; it is a very important aspect, as the target is the one being “used” for the tester’s purposes. The framework I outline (“approval” and “notification”) has a very clear answer: if you don’t have the target’s approval, and if you have not notified the target, then the target cannot tell the difference. Bad guys don’t seek approval and don’t give notice; good guys do.

  13. I posted a very brief response here:

    http://shiflett.org/archive/127

    In a few cases, you have used the phrase “notification or approval” to refer to a single guideline. These are separate ideas – notification is less restrictive than approval.

    I agree with Rasmus and Ilia, and I think the notion that the owner of a public web application can deny the public the right to know whether the application is secure is absurd. This is about as useful for security as the DMCA is for Copyright.

    As I’ve mentioned before, XSS attacks users. Denying users the right to know whether an application has XSS vulnerabilities is very similar to denying me the right to know whether the lock on my apartment works. I don’t own my apartment (I’m a user), but if the lock doesn’t work correctly, I’m the direct victim.

  14. Hi Chris — thanks for taking the time to respond.

    Yes, I do use the phrase “notification or approval” a few times; sorry to be unclear. You are correct that notification is less restrictive than approval; however, note that they occur in two different settings. Notification-only is generally required **within organizations**, where approval is implied by attaching to the organization’s network; approval-and-notification is generally required when the system is not within your organization (i.e., almost any web site).

    You say, “I think the notion that the owner of a public web application can deny the public the right to know whether the application is secure is absurd.” My question is, do you mean a “public application” or a “public site”?

    If you mean an “application” and it’s downloadable, then you are well within ethical bounds to download and install on your own system for testing.

    If you mean a “site” then, to use your analogy, that appears to be more like testing locks in an apartment complex that you are thinking of leasing from but do not live in.

  15. For your analogy to work, my research would have to target other users. While this is possible, it is outside of the scope of what I’m talking about. I think Rasmus mentions this in an earlier post.

    Yes, other users of the application (I use public application to be more specific than public site, as the latter can technically be a collection of static pages) can benefit from my research, but it’s not because they are likely victims of it.

  16. For my analogy to work, you could be testing the lock on an empty apartment. 😉 And for yours to work, you would actually need to have leased the apartment in the first place; e.g., to have downloaded the application to your own system.

    Or perhaps it fits better with Rasmus’ point about personal data; if you’re doing minimalist testing to protect yourself, I can see that. It skirts the line a little, but protection of personal data trumps other stuff. That’s provided you’re actually protecting personal data, not simply testing for “research.”

    (It seems like all the analogies break down pretty fast. 🙂)

    Really, I fail to see how providing notification (at a minimum) is such a difficult thing to do. At the very least it’s good manners. Getting approval does seem to be the gold standard, though; you don’t **have** to do it, but if you don’t, you don’t seem to be operating within ethical bounds. Practical and convenient bounds, but not ethical ones.

    If you want “research” to be seen less as malicious and more as helpful, then continuing to deny the wishes of your research subjects might not be a good way to improve the image of “research.”

  17. Hey, here’s another idea: if you request approval, and they deny it, you can publish a notice to that effect. “We contacted Site Owner X and asked if we could do security testing of A, B, and C. He denied our request. As such, we recommend that users avoid this site until the owner submits to reasonable testing.” Perfectly within ethical bounds.

    Of course, if it’s an application that you can download, you’re scot-free; just download, install, and test on your own system. Disclose the vulns to the application developer(s) and, after a reasonable period (depending on the vulns), report to public channels.

    How is that a bad thing?

  18. Rebuttal to a rebuttal 🙂

    A good chunk of my day-to-day business, the part that pays the bills so to speak, involves doing security audits and assessments for web-based software. In my line of work I’ve dealt with both small ma & pa operations and even Fortune 100/500 companies. The claims you and Clay put forth indicating only Chris S. is a security expert are laughable and only show your lack of knowledge. While I cannot speak for Rasmus, I know that in this line of work he has done plenty of security auditing and testing and is probably as accredited as anyone to be considered a security expert, at the very least a security expert on PHP/web-based apps.

    Now as far as testing goes, consider the following situation: you visit your bank late at night and, as you pass the rear entrance, notice that the door is open. This happens more than once, so on your next trip during the day you tell the manager about the problem. Assuming extreme ineptness (which would not happen in any bank I know), the manager simply waves off the danger. Now you have 2 choices: stop dealing with the bank and move to one more attuned to security issues, or stay with the bank but phone your friend the journalist, who’ll break the story and embarrass the bank into solving the problem.

    Now let’s put this into XSS perspective: more than one company or developer seems to consider XSS a harmless bug, not worth spending the resources to solve until they see its possible effects. I am not talking about harmless test strings, but about real dangers like injection of JavaScript code sending cookies to 3rd parties, iframes loading competitor sites, etc. One such demonstration is usually enough to convince even the staunchest of non-believers, in effect the same as putting the story out to the general media.

    Another thing to consider is that most XSS tests can be done without doing any harm, where the attack only affects the viewer, but in some cases XSS is stored, as in blog messages, forum posts, and wiki notes. In many cases this can still be done “safely” thanks to the preview button, which does not actually submit the message. But even that is not always possible, as is the case with your wiki for example. This means that the only way to test for a problem is to try and see what happens.

    Now, ideally, testing would occur on your own server, or on the server of a person who has asked for a scan (assuming you are reasonably certain that it is their server). I wouldn’t test for stored XSS on a server of a 3rd party, given the potential effect of the XSS on other users of the site. However, I don’t see a problem with testing for non-stored XSS, where the potential vulnerability would only affect me, the tester. In effect it’s the same thing as testing whether a door is locked: as long as you don’t open it, even if it is unlocked, no crime has been committed. As far as telling the owners, usually I would, but to my great surprise and disappointment well over half of notified parties choose to ignore it, often replying with a “thanks for the e-mail, but we don’t consider this to be a problem”. There were a few that did fix the problem, a few times within hours, and my kudos goes to those people for taking security seriously.

  19. Hi, Ilia —

    Re-rebuttal to the rebutted rebuttal. 😉

    First, you say “The claims you and Clay put forth indicating only Chris S. is a security expert are laughable and only show your lack of knowledge.” I don’t know about “laughable,” but I have already claimed **not** to be an expert. I did a day of research and the article discusses what I found; my commentary is clearly not all-encompassing. Part of why this discussion is important is that it is a learning exercise for everyone who is not as experienced as you (and other security experts) are. I am learning as I go, same as most everyone else reading this.

    You talk about the techniques of testing, in regard to yourself and Rasmus and others, and I have no doubt of your skills. Certainly you are far better at that kind of thing than I am. But those technical skills are not what is in question.

    What is under discussion is the ethics of testing; that is, in what manner someone should behave when they want to perform a test, when it is proper to perform testing, and when it is not. For example, is it ever proper **not** to perform a test? If it’s always allowed, what separates an ethical tester from an unethical one in the eyes of the target (or even an impartial third party observer)?

    Regarding the bank analogy: what if it’s *not* your bank? What if nobody you know banks there? What if you have no intention of becoming a future customer? Is it ethical to test their doors after dark, and not tell anyone that you were going to do so, and not tell any of the managers if you found them open? I argue that this is a more apt analogy when it comes to testing sites that are not yours, simply for purposes of “research.”

    Regarding cases where the only option is to “test for a problem … to try and see what happens” — yes, sometimes that is your only option. In those cases, what prevents you from downloading the software to your own system and testing it there? Why **must** you test it on the other person’s site, other than for reasons of your own convenience?

    You follow with, “Now ideally testing would occur on your own server …” I agree. Would it not be more ethical to try for the ideal first, and then fall back to the less-ideal? It’s less convenient, perhaps, but then ethics is not primarily about convenience.

    Finally, you say, “As far as telling the owners, usually I would, but to my great surprise and disappointment well over half of notified parties choose to ignore it…” Truly that is shameful and dangerous. Perhaps you should consider publishing the names and URLs of sites that have refused to cooperate, so that the rest of us may avoid them. The “usually” bothers me; if you can take the time to probe for new information, certainly you owe disclosure of that information to the target you probed.

    To reiterate one more time: I am by no means an expert; I am just now getting into the ethics of security testing. But nobody in the past three days has given answers I find to be definitive or satisfactory. So far, all of the arguments I have heard against the “approval” framework seem poorly explained at best, and distracted or self-serving at worst; the analogies all seem improper, inaccurate, or flawed in some fashion.

    It seems like everyone against the “approval” framework is saying “I want to test other peoples’ sites whether they like it or not, and they can’t stop me!” Well, it’s true that they can’t stop you. But is it ethical to do so without their approval? If so, why? Is altruism truly your solitary motivation? (Again, note that I am talking about sites and systems belonging to others, not the applications which run those sites; if an application can be downloaded for testing, certainly it is ethical to test it on your own site or system.)

  20. Very long post, as you’ve said, Paul 😀 I dunno why XSS was not considered a serious bug for such a long time; I think we are still at the stage of fixing bugs, not yet preventing them. I know many websites that have been contacted about security issues, but they never responded, nor fixed any bug. I think they keep saying that it’s one small bug for man, one great program for mankind 🙂

    And in fact, XSS is not always exploitable. I have a small script that tests for possible XSS bugs; the only information it gives me is “the quality of the code”. And usually commercial scripts are full of XSS bugs (never found on mine, I dunno why!), but out of hundreds (sometimes thousands) of XSS bugs, trying to guess which one is exploitable is a long story. I prefer the “one great program for mankind” approach.

  21. If I am considering becoming a customer of a financial institution and I discover their security is lax (open doors late at night while no one’s in), then yes, I probably will not become their customer. Is it wrong of me to check the bank’s security practices? Hardly; this is all about being an educated consumer.

    When the software can be downloaded and tested locally, that is definitely the way to go. But that is not always possible; in fact, it is generally only applicable to OSS, and even then not always.

    As far as testing methodology & ethics, I’ll reiterate my previous position: if you do decide to do unapproved testing, follow the “do-no-harm” mantra. That means testing for “stored XSS”, where the XSS will affect other visitors of the target site, is definitely out. However, testing non-stored XSS, which would only affect me, the tester, is OK, since it does not affect the site’s operation in any way, shape, or form. I could see how someone could successfully argue that a harmless stored tag, or something like that, does no real harm, but I am not going to defend that position.

    As far as “owing” disclosure, that’s just laughable. I agree that it would be the nice thing to do, but it is hardly an obligation. I’d say your obligations, if you do find a vulnerability, would be more along the lines of:
    1) Don’t disclose it to 3rd parties for any reason, be it fun or profit.
    2) Do not use it for any nefarious purposes.

  22. Hi Ilia — I don’t mean to put words in your mouth, so I apologize in advance.

    When I read your reply, it sounds to me like you’re saying you have no responsibility to the owners of the sites you test for vulnerabilities. Essentially, I hear you saying that you get to do what you want, and as long as you know your intentions are honorable, then you’re being ethical.

    The problem I have with this is that it’s not possible, as the target, to distinguish your (self-identified) benign behavior from the behavior of someone preparing for more nefarious attacks. To go back to the bank metaphor (which sucks, frankly, but we’ll stick with it for now): if the guard watching the doors sees you come up and start trying to open them, or even trying different doors at different times of the day, how does he know you’re not a bad guy?

    I argue that for you to be seen as a good guy, you need to state your intentions to the target. “Do no harm” is a good thing, but unless you tell the target what you are doing, he has every reason to believe that your activity is likely to be nefarious.

    Not to be rude, but I think it’s “laughable” that you would go to the trouble of testing a site and then not take two minutes to tell them by email that you found a problem. Certainly an “ethical” tester holds himself to a higher standard than “oh well, it’s their problem — let’s try the next site!” If your goal is to improve security, not telling them does not help to meet that goal.

  23. I do not say that the tester has no responsibility to the owners of the sites being tested; that is simply incorrect. The tester has the responsibility (unless testing with explicit permission) to ensure that the tested site and its visitors are not harmed in any way by the test. They also have the responsibility to keep their findings secret until the problem is resolved, or until the tested party has been notified and has failed to act upon it within a reasonable time (I believe the industry standard is 1 month).

    What I am saying is that the tester has no obligation to inform the site being tested of their findings.

  24. Hi Ilia — regarding my take on your earlier statement with respect to responsibility: my mistake, and thanks for clarifying. 🙂

    However, if testers keep their findings secret from the target, how is the problem to be resolved? I say if you find it, you have an ethical obligation to tell them. Other than for reasons of personal inconvenience, why would you not?

  25. I am not saying testers should keep their findings to themselves, and not tell the site being tested about the problems found. However, it is not a MUST, so if the tester does not feel comfortable telling the site operator about a problem, they are by no means obligated to do so.

    Developers should not rely on kind hearted users to find and report bugs for them. Periodic security audits and code reviews are the way to keep security holes from appearing.

  26. Hi, Ilia —

    “Periodic security audits and code reviews are the way to keep security holes from appearing.”

    I completely agree; developers should regularly request and approve audits and reviews. However, you are not operating ethically when you take it on yourself to perform one on someone else’s site or system without notification and approval. Certainly you may suggest it to the site owner, but to follow through without approval, or at the very least notification and full disclosure, is outside the bounds of ethical conduct.

  27. And regarding your comment, “…if the tester does not feel comfortable telling the site operator about a problem, they are by no means obligated to do so.” — I’m calling bull on that one. If a tester is “comfortable” testing without approval, he should feel “comfortable” enough to report his findings to the system operator/owner.

  28. I guess this is something we don’t see eye to eye on. Some companies, developers, and admins are often more intent on sweeping security issues under the rug rather than fixing them. So their “approach” to security is to pursue the people who try to help them identify those problems rather than to fix them. Thus reporting problems becomes a rather risky proposition. A few Google searches will find you plenty of evidence to this effect…

  29. Hi Ilia — “I guess this is something we don’t see eye to eye on.” I guess so. Reasonable people can agree to disagree (and you at least are a reasonable person, there may be some debate about me ;-).

    “Some companies, developers and admins often are more intent on sweeping security issues under the rug rather then fixing them.” Which drives me nuts; ultimately, it’s self-destructive behavior, and takes more effort than just fixing the problem.

    However, that’s their call, and outside the control of testers; my point is that a tester’s behavior is his own to control, and he should behave ethically as described above. That way you avoid the entire problem of being pursued for discovering flaws.

  30. Since we’re talking about ethics, I was wondering:

    Consider that I visit a website and click somewhere on their pages, then see a security bug; it could be in the source or anything. Am I considered an innocent visitor or an evil hacker?

  31. Hi, Hatem —

    Were you attempting to discover a vulnerability? Then no. 🙂 If you stumble across it in the normal course of using the site, then you’re clear.
