lists.openwall.net
Open Source and information security mailing list archives
From: rms at telekom.yu (Radule Soskic)
Subject: Why release patches at all?

Hi, list members,

Following the recent threads dealing with pending vulnerabilities and
patches that are delayed 100+ days after the date of bug discovery, some
thoughts came to my mind. Let me disclose them.

Some observations first:

1. Looking at the list on eEye's page:
http://www.eeye.com/html/Research/Upcoming/index.html and keeping in mind
that there are many others like it (remember the "unpatched" list, the
PivX one that died recently by suicide), one can wonder why it takes so
long to implement a correction in one's own code and publish a
patch. I wouldn't think of this as pure unresponsiveness or vendor
laziness. It might be more than that. I would ask what (and _how big_)
are the troubles behind the scenes that cause such delays?
2. Regardless of #1 above, looking at competing concepts like full
disclosure vs. non-disclosure and similar, one might notice a certain
shift in favour of non-disclosure: more and more people agree on
so-called "responsible disclosure", which means several things, e.g.
work closely with the vendor, report your discovery and wait for the
patch, postpone the PoC release to let the vendor release the patch
first, etc. Security researchers, the security community, even some of
the hackers themselves, are all tending to shift in the same direction.
(Should I say bending instead of tending? My English... ouch - bending
is something caused by an external force, as I understand it. Ehmmm,
force - which force?)
3. Regardless of #1 and #2, looking at how PoCs (should I say
exploits) are generated, one can notice a certain increase in popularity
of "post-patch" techniques like reverse engineering of the patch code or
of the vulnerability scanner's actions. There are more and more
post-patch exploits compared to genuine pre-patch 0day in the real
world. It is simply easier to make an exploit by reverse engineering if
you already know about the vulnerability itself and have the working
patch in hand, than to discover the bug and produce the 0day yourself
from scratch. We all know that fact; it has been one of the pivotal
arguments of the non-disclosure concept for a long time.

Now, let me pick up all the pieces and join them together, like in an
old joke about Hans and Fritz working in some toy factory, back in the
old days of East Germany. (Remember, they tried to assemble the parts of
the toy, and whatever the way they did it, it always ended up as a
Russian machine gun)...

So, what came up to my mind was this:

May we expect, in the near future, the next step in the evolution of
the non-disclosure concept: one that could be named
*do-not-release-the-patch-at-all*? I mean, may it happen that a certain
vulnerability (e.g. any of those on eEye's list) will never get
patched, by deliberate decision of the vendor? The vendor simply never
releases the patch, period.

Why should it happen, you ask?

Here is the calculation: 
- Let's assume that the bug is so bad that the impact of eventual
exploit code coming alive is unacceptably horrifying.
- Next, we all know that even after a patch is released, a rather large
number of machines still stay unpatched and vulnerable (partly caused by
lazy and ignorant admins, partly caused by the inherent inefficiency of
the available patch deployment facilities). It just happens - even if
you substitute an anti-virus campaign for bug fixing, as happened in the
MSBlast case (paradoxically, this worm saved many asses, really,
considering how efficient some pre-worm exploits were). People not only
don't patch, they do not protect themselves against worms and viruses.
Let us assume that the number of such unpatched machines is constant,
invariant to the criticality level of the patch, and equal to N.
Whatever we do, this number N never gets smaller.
- Now, if we release the patch, the risk of exploit code being created
and distributed around the Net might be R1. This is a small but likely
increasing number, as we see from #3 above.
- Alternatively, if we don't release the patch, then we have the risk of
someone discovering and/or disclosing the details of the bug, plus the
risk of someone creating and/or disclosing the exploit code, plus the
risk of reputation erosion caused by our slow response in the patch
release process, etc. This is a sum of several rather small numbers,
totalling R2.

Now, is it possible that R1 is larger than R2? Yes, of course, due
to the reasons explained in #3 above. But the difference (R1-R2) might
be too small or insignificant to justify introducing a whole new
practice. You know, if we suddenly decide not to publish patches it
might cause uncertain and unforeseeable troubles, so better to stick to
the old way of doing things.

Then again, what if the bug is _so very bad_ (a hypothetical compromise
having a total impact of F, where F is a _very huge_ number), while
R1 > R2 still holds? Impact counts, of course: the difference is now
F times bigger [it's now F*(R1 - R2)], which can be significant enough
to justify the critical move - toward not releasing any patches. Just
imagine the N vulnerable machines mentioned above being owned at 0+
time, either by "manual work" or by worm automation. Is it really
acceptable - can anybody take and manage such a risk?
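The back-of-the-envelope comparison above can be sketched in a few lines of Python. All the numbers below (R1, R2, F) are invented placeholders to make the arithmetic concrete; nothing here comes from any real vendor data:

```python
# Toy sketch of the expected-impact comparison from the text.
#   R1 - risk of working exploit code appearing if the patch IS released
#        (post-patch reverse engineering, per observation #3)
#   R2 - combined risk (rediscovery + disclosure + reputation erosion)
#        if the patch is NOT released
#   F  - total impact of the hypothetical compromise of the N machines
#        that will stay unpatched no matter what

def expected_impact(risk: float, total_impact: float) -> float:
    """Expected damage = probability the exploit materialises * total damage."""
    return risk * total_impact

R1 = 0.30        # small but growing, as argued in #3
R2 = 0.05        # "sum of several rather small numbers"
F = 10**9        # a _very huge_ number (hypothetical)

diff = expected_impact(R1, F) - expected_impact(R2, F)   # equals F * (R1 - R2)

# The (cynical) decision rule the post describes: when the extra expected
# damage of releasing the patch is large enough, never release it.
decision = "do not release the patch" if diff > 0 else "release the patch"
print(f"F*(R1-R2) = {diff:.0f} -> {decision}")
```

The point of the sketch is only that even a tiny gap (R1-R2) becomes decision-relevant once multiplied by a huge F.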

Simply speaking, the vendor might come to this: better to shut up and
do nothing than to give people the patch and then suffer a
mega-compromise of the machines that will certainly remain unpatched
anyway. If, of course, the projection of this mega-compromise looks
catastrophic enough to them (which is something we don't know).


Now, let me offer a paranoid vision:

Due to the inevitable failure of "patch, patch, patch" security concept,
the philosophical question "to disclose or not to disclose" might well
evolve to "to release the patch or not to release the patch at all". 

Look - it's not that impossible: just kill the bloody lists of pending
patches that hang around the Net, stop doing anything, and wait - people
will forget eventually, sooner or later. Welcome to the bliss of
ignorance and unconsciousness... Don't know about the bugs, be happy.

End of the story. Let's hope it never will (and never did) happen.


Sorry for the bandwidth waste.


cikasole



