Date: Sun, 28 Feb 2010 21:51:18 +0100
From: Christian Sciberras <uuf6429@...il.com>
To: Pavel Kankovsky <peak@...o.troja.mff.cuni.cz>
Cc: full-disclosure@...ts.grok.org.uk
Subject: Re: Two MSIE 6.0/7.0 NULL pointer crashes

"Sometimes the vulnerability itself is a functional requirement (or
considered to be one of them). Has anyone mentioned ActiveX?"
Or NPAPI, for that matter. Really, other than the automated
install-after-user-accepts step, they're both the same.

On Sun, Feb 28, 2010 at 9:22 PM, Pavel Kankovsky <
peak@...o.troja.mff.cuni.cz> wrote:

> On Sun, 24 Jan 2010, Dan Kaminsky wrote:
>
> It took me more than one month to write this response? Ouch!
>
> > >  When you discover the program is designed too badly to be
> > > maintained, the best strategy is to rewrite it.
> > No question.  And how long do you think that takes?
>
> It depends. Probably in the order of several years for a big application.
>
> On the other hand, existing code is not always so bad that one has to throw
> it all out and rewrite everything from scratch in one giant step.
>
> > Remember when Netscape decided to throw away the Navigator 4.5
> > codebase, in favor of Mozilla/Seamonkey?  Remember how they had to do
> > that *again* with Mozilla/Gecko?
>
> Mozilla (even the old Mozilla Application Suite known as Seamonkey today)
> has always been based on Gecko (aka "new layout", "NGLayout").
>
> The development of Gecko started in 1997 as an internal Netscape project.
> Old Netscape Communicator source (most of it) was released in March 1998.
> The decision not to use it was made in October 1998. Gecko source was
> released in December 1998. Mozilla 0.6 was released in December 2000,
> 0.9 in May 2001 and 1.0 in June 2002. This makes approximately 5 years.
>
> Firefox started as a "mozilla/browser" branch approximately in April 2002
> (the idea is probably dating back to mid 2001). The first public version
> known as Phoenix 0.1 was released in September 2002, 0.9 was released in
> June 2004, 1.0 in November 2004. 2.5 years.
>
> To put things into a broader perspective: MSIE 5.0 was released in March
> 1999, 6.0 in August 2001, 7.0 in October 2006, and 8.0 in March 2009.
> This makes 2.5 years from 5.0 to 6.0, 5 years to 7.0 and 2.5 years to 8.0.
> The development of Google Chrome is reported to have started in spring
> 2006 and 1.0 was released in December 2008. 2.5 years again (but they
> reused WebKit and other 3rd party components).
>
> > "Hyperturing computing power" Not really sure what that means,
>
> The ability to solve problems of Turing degree [1] greater than zero.
> "Superturing" is probably a more common term although various terms
> starting with "hyper-"  are used as well [2].
>
> (Alternatively, it can refer to a certain kind of AI in the Orion's Arm
> universe [3] but that meaning is not relevant here. <g>)
>
> For the most part it is a purely theoretical notion but there is at least
> one kind of oracle that is more or less physically feasible: a hardware
> random number generator--such an oracle might look pointless but quite a
> lot of cryptography relies on the ability to generate numbers that
> cannot be guessed by an adversary.
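> 
> As a minimal sketch of tapping such an oracle (my own illustration, assuming
> a Linux system with getrandom(2) from glibc 2.25 or later, not tied to any
> particular crypto library):
> 
>   /* Ask the kernel's entropy pool (which may be seeded by a hardware RNG)
>    * for bytes an adversary cannot guess. */
>   #include <stdio.h>
>   #include <sys/types.h>
>   #include <sys/random.h>
> 
>   int main(void)
>   {
>       unsigned char key[16];                      /* e.g. a 128-bit session key */
>       if (getrandom(key, sizeof key, 0) != (ssize_t)sizeof key) {
>           perror("getrandom");                    /* no unguessable bytes available */
>           return 1;
>       }
>       for (size_t i = 0; i < sizeof key; i++)
>           printf("%02x", key[i]);                 /* print the key as hex */
>       putchar('\n');
>       return 0;
>   }
> 
> The point being: whatever security the derived key has rests entirely on the
> unpredictability of the oracle, not on any computation the defender performs.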
>
> Anyway, real computers are not true Turing machines and they are not Turing
> complete. The point of my comment, translated into a more realistic
> setting, is as follows: one must assume the attacker can wield much more
> computing power than the defender.
>
> [1] <http://en.wikipedia.org/wiki/Turing_degree>
> [2] <http://en.wikipedia.org/wiki/Hypercomputation>
> [3] <http://www.orionsarm.com/eg-topic/45c54923c3496>
>
> > > But I do not think this case is much different from the previous one:
> > > most, if not all, of those bugs are elementary integrity violations
> > > (not prevented because the boundary between trusted and untrusted data
> > > is not clear enough) and race conditions (multithreading with locks is
> > > an idea on the same level as strcpy).
> > Nah, it's actually a lot worse. You have to start thinking in terms of
> > state explosion -- having turing complete access to even some of the
> > state of a remote system creates all sorts of new states that, even if
> > *reachable* otherwise, would never be *predictably reachable*.
>
> I dare say it can make the analysis more complicated if the
> ill-defined difficulty of exploitation is taken into consideration.
>
> In many cases the ability to execute a predefined sequence of operations
> is everything you need to reach an arbitrary state of the system (from a
> known initial state). You do not need anything as strong as a Turing
> machine, even a finite state machine is too powerful, a single finite
> sequence of operations (or perhaps a finite set of them) is sufficient.
>
> > I mean, use-after-free becomes ludicrously easier when you can grab a
> > handle and cause a free.
>
> I admit use-after-free does not fit well into the two categories I
> mentioned. But it is still a straightforward violation of a simple
> property (do not deallocate memory as long as any references to it exist)
> and it is quite easy to avoid it (e.g. use a garbage collector).
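> 
> To make the "grab a handle and cause a free" point concrete, here is a
> deliberately broken sketch (my own illustration, not code from any of the
> products discussed); note that it is exactly the kind of short, fixed
> sequence of operations I mentioned above:
> 
>   /* Deliberately broken: a fixed three-step sequence -- obtain a handle,
>    * trigger the free, then use the handle -- reaches the bad state. */
>   #include <stdio.h>
>   #include <stdlib.h>
>   #include <string.h>
> 
>   struct session { char name[32]; };
> 
>   int main(void)
>   {
>       struct session *handle = malloc(sizeof *handle); /* step 1: grab a handle */
>       if (!handle)
>           return 1;
>       strcpy(handle->name, "alice");
> 
>       free(handle);                                    /* step 2: cause the free */
> 
>       printf("%s\n", handle->name);                    /* step 3: use after free
>                                                           (undefined behaviour) */
>       return 0;
>   }
> 
> With a garbage collector the explicit free in step 2 disappears, so the
> handle can never dangle -- which is the sense in which the property is
> simple even if particular exploits are not.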
>
> > Sure.  But we're not talking about what should be done before you
> > write.  We're talking about what happens when you screw up.
>
> I do not think it is reasonable to separate these two questions.
> After all people are supposed to learn from their mistakes and avoid them
> in the future.
>
> > > (An interesting finding regarding the renegotiation issue: [...]
> > Eh.  This was a subtle one, [...]
>
> I do not want to downplay the ingenuity of Marsh Ray and Steve Dispensa
> (and Martin Rex) but...
>
> Any attempt to formalize the integrity properties SSL/TLS is supposed to
> guarantee would inevitably lead to something along the lines of "all data
> sent/received by a server within the context of a certain session must
> have been received/sent by the same client". And I find it rather
> implausible that the problem with renegotiation would have avoided
> detection if those properties had been checked thoroughly.
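> 
> Spelled out a bit more formally (my own informal sketch, not anything taken
> from the TLS specification), the property would read something like:
> 
>   \forall s \; \forall m :\;
>     \mathrm{recv}_{\mathrm{srv}}(s,m) \Rightarrow \mathrm{sent}_{\mathrm{cli}(s)}(s,m)
>     \;\wedge\;
>     \mathrm{sent}_{\mathrm{srv}}(s,m) \Rightarrow \mathrm{recv}_{\mathrm{cli}(s)}(s,m)
> 
> The renegotiation attack violates the first conjunct: the prefix the server
> accepts before renegotiation was in fact sent by the attacker rather than by
> cli(s), which is why I would expect a thorough check of such a property to
> flag it.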
>
> > >> c) The system needs to work entirely the same after.
> > > Not entirely. You want to get rid of the vulnerability.
> > I wouldn't consider being vulnerable "working" :)  But point taken.
> > The system needs to meet its functional requirements entirely the same
> > after.
>
> Sometimes the vulnerability itself is a functional requirement (or
> considered to be one of them). Has anyone mentioned ActiveX?
>
> --
> Pavel Kankovsky aka Peak                          / Jeremiah 9:21        \
> "For death is come up into our MS Windows(tm)..." \ 21st century edition /
>
>


_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/
