From: cstefani at tideworks.com (Colin Stefani)
Subject: Re: it's all about timing

Further, I don't trust that any part of the disclosure process will be safe in
the future until more specific and well-defined legal definitions are in place,
perhaps even a law. The IETF, CERT, a community standard, or any RFC isn't going
to protect you against some company that feels it was made to look bad and
decides to sue you or come after you with the DMCA.

I heard an interview with the White House "Cyber" Security head on NPR this
morning. He encouraged "hackers" to keep hacking, but not to disclose until
it's "appropriate", with no mention of what that meant. He also said that it's
OK only for "security professionals" to hunt for and report security problems
in software, but that it's not OK for the common guy to do it, because then the
law sees him as malicious. What are the criteria for a "security professional"?
If the government/White House is going to push policy this ambiguous in nature,
then I need a clearer definition from our government as to what is OK and what
isn't, because it's certainly not worth risking your ass so some company can
feel better about protecting their shitty code.

The scope of this disclosure problem goes beyond "when it's appropriate" to
disclose information. In my eyes it is becoming, and will increasingly become,
a question of what protections you get if you find a security flaw and report
it to the company, and of what happens if and when you disclose it. Currently
there are reports of threatened legal action against people who simply went to
a company and said, "Here, I found this problem, you should fix it." You can
follow all the process and commonly agreed-upon "standards" for disclosure you
want, but until there is a clear legal definition and protection, there will
continue to be scare tactics and threats from companies that feel you were
being malicious instead of trying to help.

If there are going to be legal ramifications for finding a security flaw in
company X's software (or, for example, HP's :-) then I say screw them. If the
cost of my *free* analysis and my time is that I might get fined and thrown in
jail, or at least have to hire a lawyer to protect myself, then I'll stop
helping them. I'm willing to guess this might be others' reaction as well if
things progress to the point where some companies are going after people who
find flaws in their software. I don't owe any company anything. I like doing
security analysis and finding these problems, and I want to help, but not at
the cost of losing my freedom or my money.

I know HP was trying to protect the security of their systems by threatening
legal action should the problem be disclosed before they could react to it.
But really, those tactics serve no one, and in the end they alienate an entire
*free* security analysis community that was doing *free* work for them.

I know there are many other aspects to this issue, but that's my $0.02,

-cs

-----Original Message-----
From: Georgi Guninski [mailto:guninski@...inski.com] 
Sent: Thursday, August 01, 2002 6:04 AM
To: full-disclosure@...ts.netsys.com; Bugtraq
Subject: Re: [Full-Disclosure] Re: it's all about timing


IMHO the threats against Snosoft are FUD, even more FUD than the Sklyarov FUD.
I personally don't expect any court case.

What scares me is that the "Responsible Disclosure" FUD continues. On Bugtraq,
people write that CERT and SecurityFocus are "established parties" and that
everyone who does not give them their 0days is irresponsible (at least CERT is
known to sell 0days). I personally won't give them my 0days early.

The "Responsible Disclosure" draft continues to get advertised, though it
was 
not approved by IETF.

Why do people think about giving away the right of free speech just because of
some FUD?

Even in the unlikely case that this bad RFC passes, does it mean that people
are safer when they disclose problems? I definitely don't think so.

So the facts are that some companies can't write secure code, and that writing
secure code is more expensive.

Just check "Help -> About" on Windows before using the word
"responsibility".

The easiest solution is to shoot the messenger and to outlaw saying the emperor
has no clothes. But this won't fix the problem in the real world. IMHO such
regulations will only alienate a lot of people and will make things worse.

----
When I answered where I wanted to go today, they just hung up. (Unknown Author)


Steven M. Christey wrote:
> The Responsible Disclosure Process draft specifically allows for 
> researchers to release vulnerability information if the vendor is not 
> sufficiently responsive.  Some people may disagree with the delay of 
> 30 days between initial notification and release, but I don't think 
> there are good stats on how long it really takes vendors to fully 
> address vulnerability reports - open or closed source, freeware or 
> commercial.  Let's take a recent example - how much coordination had 
> to happen for the zlib vulnerability?  It seems reasonable to assume 
> that it took more than a day.  And the controversial "grace period" 
> has the interesting distinction of being used by both Microsoft and 
> Theo de Raadt.
> 
> Researchers can help to shed light in this area by publishing 
> disclosure histories along with their advisories.  (By the way, vendor 
> advisories rarely include such information.)
> 
> While the response to the proposal focused almost exclusively on how 
> it impacts researchers, it lays out a number of requirements for 
> vendors, primarily that they (a) make it easy for people to file 
> vulnerability reports, (b) be responsive to incoming vulnerability 
> reports, and (c) address the issues within a reasonable amount of 
> time.
> 
> IMHO, it makes a stronger impression when someone releases a security 
> advisory with an extensive disclosure history that says how much they 
> tried to resolve the issue with the vendor, before they released.
> 
> Those who are interested in the legal aspects of "responsible 
> disclosure" are encouraged to read the article by Mark Rasch at 
> http://online.securityfocus.com/columnists/66.  The article basically 
> says that the adoption of community standards could protect 
> researchers who disclose issues responsibly, while it could also help 
> vendors who seek legal recourse against researchers who are not 
> responsible (for some definition of "responsible").  The former could 
> happen with a community standard.  The latter may already be happening 
> without one.
> 
> This email is my personal opinion, not my employer's.
> 
> - Steve
> (co-author of the aforementioned Responsible Disclosure proposal, 
> which is presently quiet but not dead, and will always be subject to 
> public feedback)


