From: annfer at duck.wafel.com (Ferguson, Ann)
Subject: Do not adopt OIS standards (Was: Public Review of OIS Security Vulnerability Reporting and Response Guidelines)

Here is my plea: do not adopt the OIS standards, and do not advance OIS
legitimacy by submitting official feedback. This is not the beginning of an
angry rant - please allow me to explain.

I think the OIS guidelines are quite good at suggesting what the
disclosure process should look like. I also think there are numerous cases
where it is smart to follow them. And yet, these very guidelines rest on a
very dangerous assumption - that all vendors and researchers share a
common goal: customer security.

In a free market economy, a typical vendor's goals have very little to do
with the desire to offer a true blend of security and functionality;
vendors focus on looking good and staying competitive. This alone does not
mean they have no interest in improving the security of their products
should the public become concerned, but that's not my point.

The problem is: responsible disclosure policies, once popular enough to
become a de facto standard that research groups wanting to establish
public credibility are expected to follow, give (some) willing vendors a
powerful tool to marginalize or discredit folks who follow different
disclosure practices - particularly ones that are known to cause worse PR
fallout than others.

And whereas most responsible disclosure policies claim to propose a model
that offers substantially better protection for end users against newly
disclosed threats (which then serves as a convenient foundation for
vendors to rationalize why they should always be notified well in
advance), these claims are not universally true. Various disclosure models
have been debated for many years, and none has been conclusively shown to
be superior to the others; there are good arguments to back any of them,
and in many situations the specifics of an individual case (level of
exposure, anecdotal or first-hand experience with the vendor, etc.) should
be weighed by the researcher.

One thing almost all of us agree on is that without full disclosure and
all the associated PR activity that forces vendors to react, the cyber
world would, after all these years, be a far more dangerous place.

Now, if we give vendors a tool to effectively (albeit passively) defend
themselves against folks who do not play nice with them, we make it easier
for them to stick with the old, tried reactive security model ("fix it
where it breaks") that costs them nothing. In reality, it is the
researcher who should enjoy protection; vendors are not entitled to any.
The researcher should be protected against frivolous lawsuits, threats,
groundless detention, and the other wonders that only hurt disclosure. We
need to ENCOURAGE FULL DISCLOSURE, no matter when and how, no matter what
procedures are followed or violated. Disclosure is not perfect, but as far
as I can tell it is, in the long run, far more beneficial to the Internet
as a whole: it keeps vendors accountable, forces them to invest in
security, and makes their progress verifiable.

And yet, disclosure has suffered greatly in recent years, with the advent
of informal policies and ridiculous laws that discourage providing
detailed information about flaws - succeeding only in giving commercial
IDSes, IPSes, security scanners, and assessment software a competitive
advantage over community-based or homebrew products (by virtue of the
former group having more money, more manpower, and access to "trusted"
channels), while leaving perhaps only the black hats unaffected, since
they usually have enough time on their hands to spend several days digging
through a vague report and analyzing code or reverse-engineering
applications.

There is no need to advance this any further - we are not getting any more
secure, and there are not fewer attacks. If we disclose, let's disclose in
a non-discriminatory manner. Even if we agree with the basic OIS policy
premises and see it as a sanely constructed policy, the effects of its
widespread adoption may be quite far-reaching.

Another problem...

OIS is heavily vendor-controlled. This is not a conspiracy theory, just a
matter of fact. Microsoft and "unbreakable" Oracle are two giant market
forces with vital interests in how security disclosure is handled, whereas
the other companies - Foundstone, @Stake, Bindview, Guardent - once
reputable names in security research, are nowadays struggling in the
current economy to maintain their market niches. These niches could easily
be "embraced" by Microsoft, so they are largely at its mercy, as far as I
can tell. Who else? Companies such as ISS or SCO are also likely to be
prone to manipulation, and their ethics are not particularly well regarded
in the community, rightly or not.

Having a vendor-backed and vendor-controlled policy on how researchers
should "responsibly" report security flaws is a very dangerous game: as I
said, we give vendors tools to get rid of the kind of disclosure that is
most embarrassing and most difficult to handle on the PR level, and we get
NOTHING in return. Reactive security - fixing overflows one at a time,
even if each takes a month or two to resolve - is dirt cheap. Although the
OIS policies might now be considered a set of informational suggestions,
the organization is working hard to establish them more firmly; the
language used in the documents (with all the "requirements" and such)
leaves little doubt that this is meant to be a policy that is expected to
be enforced, even if only by community pressure.

In this particular case, since OIS is not representative of any major,
independent security research forces, and has close "evil vendor" ties
instead, it seems risky to lend legitimacy to procedures and policies that
may, in turn, be used to lend legitimacy to the organization itself.

Keep in mind that, even if you disagree with my objections to the policies
themselves, as soon as OIS becomes a widely accepted and recognized
institution, these documents may gradually evolve in a manner even less
beneficial to the general public. Since the process of "public review and
discussion" is nowhere near transparent, and the policy-making body is
closed, the situation is simply unhealthy.

Again, I am *NOT* advocating any particular disclosure scheme or timeline;
I simply oppose further advancing this imbalance of power.

I ask you not to support OIS, even if you believe the policy is sane and
you hold no grudge against any of the members. If you feel like making the
cyber-world a better place, donate to EFF instead.

Ann


