Sunday, April 25, 2010

I am a narcissistic vulnerability pimp

The Verizon Business Security Blog has drawn the line in the sand of the kitty litter box they're apparently playing in, labeling those who irresponsibly disclose "information that makes things less secure" as narcissistic vulnerability pimps.
Time to pull those iPhone wannabes from betwixt the Verizon lily whites and dial 1-866-GET-CLUE.
I love it when risk "experts" start sounding off about that of which they know nothing.
As members of the Verizon Risk Intelligence group, clearly an oxymoron, Wade Baker and Dave Kennedy must be the same guys who describe risk level in the cloud as .4.
Here's a secret.
Vulnerability disclosure is, as Robert Graham says, rude at its core.
"Hey Mr. Vendor, your code sucks, fix it."
But what about when Mr. Vendor decides to blow off the security researcher who tried, on numerous occasions and via numerous channels, to disclose a vulnerability?
So when that security researcher goes public after vendor FAIL, he's now a narcissistic vulnerability pimp?
Is Charlie Miller a security researcher or a narcissistic vulnerability pimp? (d2d) recently summed up this conundrum succinctly:
"So what is actually responsible or ethical? The lines are blurred quite a bit. The "responsible" method is also the "painful", "expensive", and often "ineffective" method that gets little resolved for exponentially more work, time and money. Is all that waste not irresponsible? What about all of the other organizations unknowingly affected by things I've found, organizations who never got a heads-up, no less a patch, because my attempts at "responsible" disclosure failed?"

No matter how you define it, disclosure drives vendors to repair code they would otherwise neglect and leave vulnerable for the real criminals to exploit.
Yes, risk may be elevated in the time period between disclosure and repair, as well as repair and patch deployment. But if researchers wait on the likes of Oracle to fix their kluges, nothing would ever get fixed.

Dan Goodin says in his writeup, "as the recent Pwn2Own contest made abundantly clear, software makers can't be counted on to secure their products, at least not on their own."
Dan is clearly styling some bling of his own.

I do hereby resolve to faithfully ignore Verizon Risk Intelligence dreck forthwith.

Cheers.

Please support the Open Security Foundation (OSVDB)


Anonymous said...

Hi, Russ -

Love the passion. Make no mistake - web-based vuln disclosure is slightly different than traditional software. However, I think it might be worth considering exactly what the end game is and how well things are (or aren't) progressing. Perhaps most importantly, is your goal to reduce vulnerabilities or reduce the number of incidents associated with some software?

Pete Lindstrom

Russ McRee said...


While web app vulns are indeed considered "less than" by the industry at large (understandably so), they're also the low hanging fruit left for hacker pickings.
Thus the end game is simple: reduce both the number of vulns, and related incidents.
I'll give you an example from experience.
Where development environments have not embraced some semblance of a security development lifecycle (SDL), their product incident counts are inevitably off the charts compared to environments that do employ an SDL or something similar.
Even the most basic premise of static code review and pre-production application testing will cut incident numbers by 75% or better.
Eliminate or reduce the likes of SQLi, XSS, CSRF, etc. and consumers are thereby safer as they "consume". Simple.
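To make the SQLi case concrete, here's a minimal sketch (hypothetical table and values, using Python's built-in sqlite3) of why a parameterized query closes the hole that string-built SQL opens:

```python
import sqlite3

# Hypothetical users table to demonstrate the injection class named above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

malicious = "nobody' OR '1'='1"

# Vulnerable: attacker-controlled input is concatenated into the SQL text,
# so the injected OR clause becomes part of the query and matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'"
).fetchall()
print(unsafe)  # returns both rows

# Safer: the ? placeholder binds the input as a literal data value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(safe)  # [] -- no user is literally named "nobody' OR '1'='1"
```

The same principle (keep untrusted input out of the code/markup channel) underlies the XSS and CSRF defenses as well.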
As legacy computing practices continue to move from classic architecture to web app and cloud offerings, this simple premise will only grow in importance.
Web provider standards need to be higher, and I aim to keep expecting more and raising awareness.
If it takes disclosing a finding, after all other resources have been exhausted, in a manner that Dave Kennedy has labeled narcissistic vulnerability pimping, then so be it.
The same holds true for client based apps, operating systems, etc.
Sometimes there is no other choice when vendor neglect is the only outcome. Do real NVPs exist? Sure, but they are a minority.

Anonymous said...

@Russ -

I didn't mean web vulns were "less than," I meant the economics and decision process were different from traditional software. For example, "patching" a website provides complete protection - something that providing a patch to client-side software, for example, never can.

I am having a hard time parsing the notion of "product incidents being reduced by 75%". That sounds like vulnerabilities to me.

Just to be clear, the "incidents" I am talking about are exploits by bad guys resulting in a C-I-A violation.

Can you clarify?



Russ McRee said...

Indeed, the economics and decision processes are different from traditional software, but so too are the exposure levels.
A SQLi vulnerability in the likes of Salesforce resulting in a data breach would be of very different significance than a Windows Media Player vuln that could result in privilege escalation on victimized systems (where victimization requires stupid user tricks, i.e. URL clicks).
All the victims in the imaginary Salesforce scenario are entirely innocent. They're simply the victims of an exploited web application code flaw (no interaction required on their part).

When I refer to incidents I mean specifically one of two things.
As an example, consider a major name-branded, 3rd party developed site in an EU market. The site was developed for the major brand by the 3rd party, and was subject to less security review as a result.
The site is extremely popular and as such is always under scrutiny by security researchers or criminals.
Either way, should said researcher or criminal discover and/or exploit a vulnerability, the net result is likely to be a declared incident.
In one scenario, the researcher responsibly or irresponsibly discloses the vulnerability; there is an immediate incident response and the vulnerability is mitigated and remediated.
The criminal scenario could result in a defacement, and/or compromise of the server and db contents, depending on the vulnerability exploited.
Again, there is an immediate incident response and the vulnerability is mitigated and remediated.
Linearly, this is best represented as: vulnerability-->exploit/discovery-->incident-->remediation/mitigation.
As a result of either of these scenarios, should the 3rd party development team then resolve to, or be required to employ an SDL or its equivalent, their incident count starts an immediate downward trend, as vulns are identified and eliminated proactively.
I have seen the 75% or better reduction as part of job duties with just such scenarios.
Linearly, the SDL modified representation is: vulnerability-->proactive discovery-->remediation/mitigation/prevention.
Does that clarify?

Anonymous said...

"Does that clarify?"

Well, a bit, I think. I suppose we should focus solely on Website vulns since that is your domain and it is very different from traditional software in the disclosure world. I also believe it is harder (that is, disclosure likely has a better case with Web vulns than with software).

First, with respect to incidents, I am talking about known compromises due to vulnerabilities, not possible compromises that you consider highly likely based on your assessment of the threat and the knowledge of a vulnerability. They are different things. I would say the latter, which is what I believe you ascribe to, is simply a vulnerability, not an incident.

So it sounds to me like what you are saying is that web developers can reduce their vulnerability count by 75% if they employ an SDL. Is this correct?

If my assumption is correct, then you are focusing on the vulnerability side of the risk equation. It seems like a reasonable approach and many security researchers are advocates (witness all the attention on "more secure software").

However, I believe you can get a level deeper if you consider the reason WHY you want more secure software. Usually, it is to reduce risk of compromise - real life, C-I-A style incidents where bad guys do something nefarious.

Hopefully you'll come to the same conclusion (I can't think of a reason to have more secure/less vulnerable software if it isn't to reduce incidents). So then we get to the question of whether the total number of incidents associated with a website is likely to increase or decrease based on the disclosure of certain specific vulns (the specific case), and in the aggregate whether this holds true (the general case).

Here's my problem: I think there are too many websites and too much code with too many vulnerabilities to successfully mitigate everything to a point where you can legitimately assert that you are reducing the number of incidents. The increase in threat-risk due to disclosure far outweighs the reduction in attack surface.

This is why I believe that disclosure does more harm than good.

I have to say that I am not completely sold on my own argument on the Web-specific vulns, but it is definitely the direction I am leaning in. As I've mentioned, I believe this is much more clear cut with traditional software because the aggregate case is easier to see.

Richard Bejtlich said...

Hey Russ,

I saw that too and decided not to say anything. Dave Kennedy is too often either sarcastic or clue-impaired for me to take him seriously.

Russ McRee said...

@Anonymous (Pete)

I am indeed saying that web developers can reduce their vulnerability count by 75% or better if they employ an SDL (coupled with proper change control).
I have statistics that validate that if development teams fall in line and adopt standards, their vulns, exploits, incidents, etc. are greatly reduced.
Nothing is perfect but I have the numbers to validate my claim.
But, for the time being, I must be generic, as my employer would not be happy were I to be specific. ;-)

You said:
"I think there are too many websites and too much code with too many vulnerabilities to successfully mitigate everything to a point where you can legitimately assert that you are reducing the number of incidents. The increase in threat-risk due to disclosure far outweighs the reduction in attack surface."

I answer this one with "use global mitigations, not one-off repairs". Tokens for every unique request to the app (CSRF), encode all output, limit and sanitize input (XSS), use stored procedures or parameterized queries and again sanitize and limit input (SQLi), etc.
If development teams use available libraries (Anti-XSS, OWASP) and frameworks (ASP.NET MVC, etc) to aid in the cause, they can greatly reduce the surface.
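As a rough illustration (standard library only; helper names are hypothetical, not from any of the libraries named above), the following Python sketch shows two of those global mitigations: an unpredictable per-session CSRF token compared in constant time, and HTML-encoding of all output.

```python
import hmac
import html
import secrets

def issue_csrf_token() -> str:
    """Mint an unpredictable token; store one per session/form (CSRF)."""
    return secrets.token_urlsafe(32)

def csrf_token_valid(stored: str, submitted: str) -> bool:
    """Compare in constant time so timing doesn't leak the token."""
    return hmac.compare_digest(stored, submitted)

def render_comment(user_input: str) -> str:
    """Encode all output so markup in input is displayed, not executed (XSS)."""
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

token = issue_csrf_token()
assert csrf_token_valid(token, token)
assert not csrf_token_valid(token, issue_csrf_token())

print(render_comment("<script>alert('xss')</script>"))
# <p>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```

The point stands regardless of implementation details: these are app-wide defaults, applied everywhere, rather than one-off repairs at each reported vuln.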

Yet, the problem is most development shops aren't likely to be that diligent unless they're big, well funded, and have a lot at stake.
Smaller shops and web app project teams (CMS, shopping carts, BBs, forums, etc) will offset that diligence at scale, thus supporting your belief that there is "too much code with too many vulnerabilities to successfully mitigate everything to a point where you can legitimately assert that you are reducing the number of incidents."
I can assert my claim in individual organizations, but certainly not Internet wide.

You make strong points and I appreciate the exchange.
I still think disclosure in any form is likely to increase the odds of a fix, but as we know, there is then dependency on people to apply the fix, and we know for a fact what those numbers look like (not pretty).
One need only hunt about via SHODAN to catch my drift there.

Oh, the dilemma. ;-)

