Date: Wed Aug 10 01:20:34 2005
From: dsi at iss.net (Ingevaldson, Dan (ISS Atlanta))
Subject: "responsible disclosure" explanation (an
	exampleof the fallacy of idealistic thought)

Just in case anyone is interested, the ISS Vulnerability Disclosure
Guidelines were made public a couple years ago, and last revised on July
15, 2004.  The document is available here:

http://documents.iss.net/literature/vulnerability_guidelines.pdf

Regards,

------------------
Daniel Ingevaldson
Director, X-Force PSS
dsi@....net 
404-236-3160
 
Internet Security Systems, Inc.
Ahead of the Threat
http://www.iss.net
 

-----Original Message-----
From: full-disclosure-bounces@...ts.grok.org.uk
[mailto:full-disclosure-bounces@...ts.grok.org.uk] On Behalf Of Matthew
Murphy
Sent: Tuesday, August 09, 2005 2:43 AM
To: full-disclosure@...ts.grok.org.uk
Subject: Re: [Full-disclosure] "responsible disclosure" explanation (an example of the fallacy of idealistic thought)

Let me just define "responsible disclosure" first of all, so as to
dissociate myself from the lunatic lawyers of certain corporations
(Cisco, HP, ISS, et al) who define "responsible disclosure" as
"non-disclosure".  The generally accepted definition of responsible
disclosure is simply allowing vendors advance notification to fix
vulnerabilities in their products before information describing such
vulnerabilities is released.  The overwhelming majority of researchers
put a ceiling on what they consider "responsible" timelines on a
vendor's part, but these vary widely.

Jason Coombs wrote:

> "responsible disclosure" causes serious harm to people. It is no 
> different than being an accessory to the intentional destruction of 
> innocent lives.

You seriously overstate the facts here, as only a minute number of
software vulnerabilities pose any threat to human life.  Even in the
cases where a software flaw could potentially be responsible for the
loss of an innocent life, the greatest error is still one of human
judgment.

> Anyone who believes that "responsible disclosure" is a good thing 
> needs to volunteer their time to teach law enforcement, judges, 
> prosecutors, and attorneys that the consequence of everyone 
> communicating with everyone else online is that some people use secret
> knowledge of security vulnerabilities to ruin other people's lives or 
> commit crimes by hijacking innocent persons' vulnerable computers.

You manage to draw absolutely no parallel between these two, so I'll try
and draw one for you.  Limiting knowledge of vulnerabilities to any
select group (no matter who they are) is a bad idea, because it
necessarily renders the uninformed incapable of self-protection.

In reality, this theory is contradicted by historical evidence, and
stands directly opposed to virtually all actions of modern law
enforcement.  I'll even use the analogy of a person moving illegal
material (we can even say child porn, for simplicity's sake) to show you
why your theory of disclosure is irreparably flawed.  Say I discover a
weakness in the security measures of an airline that allows me access to
passenger luggage after it has been screened.  Clearly, the implications
include a direct threat to human life: the scenario of explosives hidden
in checked baggage is a very real threat.

Do I announce over the public address system that the airline's
screening procedures are weak and easily defeated, and reveal the exact
steps necessary to defeat them?  Of course not!  That would be an
engraved invitation to every terrorist on the planet to exploit the
weakness.  You inform the people responsible for the fault in question
(an airline supervisor, for instance) and they fix it.  Should they
fail, you inform the public, and counter-measures are taken based on
that airline's delinquency.  This may include flight prohibitions on the
airline, for example.

In this scenario, much as with a software vulnerability, two factors are
constant.  The threat (the malicious individual seeking to move things
illegally or harm life or property) is fixed, as is the vulnerability
(the weakness that allows that individual access).  The only component
of the puzzle that is not static is the actual risk of the threat
becoming reality (exploitation of the vulnerability).
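
To make that model concrete, here is a minimal sketch of the trade-off
as I see it.  The model and every probability in it are illustrative
assumptions of my own, not measurements of any real vulnerability.
Holding the threat and the vulnerability fixed, the only variable is how
likely an attacker is to know about the flaw:

    # Illustrative sketch of the fixed-threat, fixed-vulnerability risk
    # model above.  All numbers are assumptions chosen only to show the
    # shape of the trade-off.

    def exploitation_risk(p_attacker_aware, p_exploit_if_aware=0.5):
        """Risk of the threat becoming reality: the chance an attacker
        knows about the flaw times the chance they act on it."""
        return p_attacker_aware * p_exploit_if_aware

    # Threat and vulnerability are constant; disclosure changes only
    # attacker awareness.
    risk_private = exploitation_risk(p_attacker_aware=0.05)  # vendor-only notice
    risk_public  = exploitation_risk(p_attacker_aware=0.95)  # full public detail

    print(risk_private, risk_public)  # 0.025 vs. 0.475

On these (assumed) numbers, publishing full detail raises the risk
nearly twentyfold unless users can act on the information faster than
attackers can, which is exactly the balance at issue.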

In the researcher's case, it is necessary to balance the potential
increase in threat posed by the possibility that malicious individuals
may be aware of the vulnerability already and planning to exploit it,
with the actual increase in threat posed by informing said malicious
individuals intentionally in the guise of "public safety".  Fact is, in
the current environment, the risk of exploitation is significantly
increased by general knowledge of a flaw.  Before your ideal can become
a reality, we need several improvements.

1. Reach: Security information must reach an overwhelming majority of
the product's user base, ideally all of the users affected by a given
product flaw.

2. Timeliness: The speed at which protection must reach the user base
needs to be improved.

3. Effectiveness: Software, systems, and processes must be designed in
such a way as to make immediate delivery of vulnerability information
100% actionable.  Side effects must be significantly reduced (and
eventually eliminated) to ensure that deploying interim protection is
practical, as well as possible.

Process and standardization are instruments that benefit consistency,
clarity, and quality of information, sometimes at the expense of speed.

Add to that the fact that public safety initiatives in any industry are
almost always handicapped (or even crippled) by the neo-socialist
regulatory frameworks of most Western nations, which attempt to strip
almost all flexibility away from the handling of security issues.
Avoiding this problem would mean annihilating most of the theory behind
Western law, which tears a page out of the socialist playbook with its
theory that the government has an obligation to protect the individual,
and that (in most cases) the individual has no obligation (or even
right, in some cases) to protect himself or herself from harm.

> So you tell me, those of you who believe that "responsible disclosure"
> is a good thing, how can you justify holding back any detail of the 
> security vulnerabilities that are being used against innocent victims,
> when the court system that you refuse to learn anything about is 
> systematically chewing up and spitting out innocent people who are 
> accused of crimes solely because the prosecution, the judge, the 
> forensic examiners, investigators, and countless "computer people"
> think it is unrealistic for a third-party to have been responsible for
> the actions that a defendant's computer hard drive clearly convicts 
> them of?

Given your particular example, I feel not a bit of guilt.  It's obvious
that the legal system isn't an effective instrument in dealing with
high-tech crimes, as it stands.  But the solution to incompetent experts
is to hire competent experts, not offer "experts" more information when
they can't grasp what's in front of them today.

> You cannot withhold the details of security vulnerabilities or you 
> guarantee that victims of those vulnerabilities will suffer far worse 
> than the minor inconvenience that a few companies encounter when they 
> have no choice but to pull the plug on their computer network for the 
> day in order to patch vulnerabilities that they could otherwise ignore
> for a while longer.

The point you miss is that by withholding vulnerability details, I
guarantee nothing, other than that those details are less widely known.

I agree that patch processes should be more expeditious, but the
solution to that dilemma is not to force companies to sacrifice quality
by creating an imminent risk that did not otherwise exist.

> "Responsible disclosure" is malicious. Plain and simple, it is wrong.
> "Responsible disclosure" ensures that ignorance persists, and there is

> no doubt whatsoever that ignorance is the enemy.

Given that full disclosure really *does* make it more likely that
exploit details will be acted upon by malicious parties, I'll run the
risk of ignorance persisting for the matter of weeks that I allow most
vendors to produce a patch.  We're not talking about waiting for hell to
freeze over or for the vendor to patch, whichever comes first.  We're
talking about giving the vendor the opportunity to offer users more
options to protect themselves... typically in the form of a software
update or something similar.

> Therefore, supporters of "responsible disclosure" are the source of 
> the enemy and you must be destroyed. Hopefully some patriotic hacker 
> will break into your computers and plant evidence that proves you are 
> guilty of some horrific crime against children. Then you will see how 
> nice it is that all those "responsible" people kept hidden the details
> that you needed to prevent your own conviction on the charges brought 
> against you by the prosecution.
>
> How can "responsible" people be so maliciously stupid and ignorant?

So far, I see nothing, other than your radicalism in attempting to link
them, that ties vulnerability disclosure to the example you provide.  Do
I need to know the exact technical details of a vulnerability in order
to know that my system has been compromised?  Of course not, or you'd
never *hear* reports of "in-the-wild" exploits being caught and
analyzed.

> Please, somebody tell me that I'm not the only one inviting judges to 
> phone me at 2am so that I can teach them a little about why a Windows 
> 2000 computer connected to broadband Internet and powered-on 24/7 
> while a member of the armed forces is at work defending the nation 
> could in fact have easily been compromised by an intruder and used to 
> swap warez, pirated films and music, and kiddie porn without the 
> service member's knowledge.
>
> How can trained "computer forensics" professionals from the DCFL and 
> private industry author reports that fail to explain information 
> security? The answer is that the people who teach computer forensics 
> don't understand information security. It is not "responsible" to 
> suppress knowledge of security vulnerabilities that impact ordinary 
> people. Suppress security vulnerability knowledge that impacts only 
> military computer systems, but don't suppress security vulnerability 
> knowledge that impacts computer systems owned and operated by ordinary
> people; for doing so ruins lives and you, the suppressing agent, are 
> to blame for it moreso than anyone else.

These last two points tie perfectly into my previous statement on the
subject.  It's obvious that most legal experts don't have a clue, and
aren't learning from the information that is already available.  So why
would I have any obligation to give them still more information to
ignore?  Of course, I don't.

And last, one point that everyone misses in this battle of full vs. 
responsible disclosure.  Most people define full disclosure as (or at
least include in its definition) the revelation of all available
technical detail on a vulnerability, or at least the level of detail
required to reliably reproduce the issue.  By excluding responsible
disclosure practices and making broad claims to the "right to know", you
also rule out any limited disclosure.  And by way of those right-to-know
claims, you require that the researcher make an effort to inform all
affected users.

So, by your standard of full disclosure, we have a policy of revealing
vulnerabilities that is:

a) immediate
b) provides complete detail
c) all-encompassing

Now... provide me a forum that *actually* meets this standard.

Look very carefully at that last requirement.  There is no forum, in
today's world, that can effectively reach all users who may be affected
by a vulnerability.  Even if you believe it to be an obligation of the
public to seek out information regarding vulnerabilities, you must
acknowledge that the hodgepodge of sources for security information
(many of which have conflicting mandates and objectives) is today one
of the largest roadblocks to effective vulnerability management.

In acknowledging this, you realize that today, "full disclosure" creates
an absence of conscience, furthered by the ignorant belief that users
will, magically, all be aware of the vulnerability.  Essentially, that
concept of disclosure shifts the informed community from the community
of individuals most suited to remedy the vulnerability in question to
the community of individuals most devoted to scouring the numerous
poorly-documented, badly-marketed and incomplete sources of
vulnerability information that are "publicly available".

Perhaps the subject tags on this mailing list would be more accurate if,
instead of:

    [Full-Disclosure]

they read:

    [The-Best-Attempt-At-Full-Disclosure-That-We-Could-Conceive]

I know I'll probably take more than a few flames for this, but idealism
in solving problems is never effective.  For the very root of idealism
is a world free of challenge... an environment that will never be a
reality in my lifetime.

Regards,
Matthew Murphy
