Message-Id: <20170721142255.586224f0db9cf0714e654859@linux-foundation.org>
Date:   Fri, 21 Jul 2017 14:22:55 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     Kees Cook <keescook@...omium.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Christoph Hellwig <hch@...radead.org>,
        "Eric W. Biederman" <ebiederm@...ssion.com>,
        Jann Horn <jannh@...gle.com>,
        Eric Biggers <ebiggers3@...il.com>,
        Elena Reshetova <elena.reshetova@...el.com>,
        Hans Liljestrand <ishkamiel@...il.com>,
        Greg KH <gregkh@...uxfoundation.org>,
        Alexey Dobriyan <adobriyan@...il.com>,
        "Serge E. Hallyn" <serge@...lyn.com>, arozansk@...hat.com,
        Davidlohr Bueso <dave@...olabs.net>,
        Manfred Spraul <manfred@...orfullife.com>,
        "axboe@...nel.dk" <axboe@...nel.dk>,
        James Bottomley <James.Bottomley@...senpartnership.com>,
        "x86@...nel.org" <x86@...nel.org>, Arnd Bergmann <arnd@...db.de>,
        "David S. Miller" <davem@...emloft.net>,
        Rik van Riel <riel@...hat.com>, linux-kernel@...r.kernel.org,
        linux-arch <linux-arch@...r.kernel.org>,
        "kernel-hardening@...ts.openwall.com" 
        <kernel-hardening@...ts.openwall.com>
Subject: Re: [PATCH v6 0/2] x86: Implement fast refcount overflow protection

On Thu, 20 Jul 2017 11:11:06 +0200 Ingo Molnar <mingo@...nel.org> wrote:

> 
> * Kees Cook <keescook@...omium.org> wrote:
> 
> > This implements refcount_t overflow protection on x86 without a noticeable
> > performance impact, though without the fuller checking of REFCOUNT_FULL.
> > This is done by duplicating the existing atomic_t refcount implementation
> > but normally with only a single instruction added to detect if the refcount
> > has gone negative (i.e. wrapped past INT_MAX or below zero). When
> > detected, the handler saturates the refcount_t to INT_MIN / 2. With this
> > overflow protection, the erroneous reference release that would follow
> > a wrap back to zero is blocked from happening, avoiding the class of
> > refcount-over-increment use-after-free vulnerabilities entirely.
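
A rough plain-C sketch of what the saturation semantics amount to
(illustration only, not the actual x86 fast path; the function name
refcount_inc_sketch and the REFCOUNT_SATURATED macro are stand-ins,
the latter simply naming INT_MIN / 2):

#include <limits.h>
#include <stdatomic.h>

#define REFCOUNT_SATURATED	(INT_MIN / 2)	/* stand-in name */

/* Increment, then pin the counter at the saturation value if it has
 * gone negative (i.e. wrapped past INT_MAX or was already negative). */
static void refcount_inc_sketch(atomic_int *refs)
{
	int old = atomic_fetch_add_explicit(refs, 1, memory_order_relaxed);

	if (old == INT_MAX || old < 0)
		atomic_store_explicit(refs, REFCOUNT_SATURATED,
				      memory_order_relaxed);
}
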
> > 
> > Only the overflow case of refcounting can be perfectly protected, since it
> > can be detected and stopped before the reference is freed and left to be
> > abused by an attacker. This implementation also notices some of the "dec
> > to 0 without test" and "below 0" cases. However, these only indicate that
> > a use-after-free may have already happened. Such notifications are likely
> > avoidable by an attacker who has already exploited a use-after-free
> > vulnerability, but it's better to have them than allow such conditions to
> > remain universally silent.
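
The "dec to 0 without test" and "below 0" notifications amount to
something like the following (continuing the illustrative sketch above
and reusing REFCOUNT_SATURATED from it; the real code reports via the
exception handler rather than fprintf):

#include <stdio.h>

static void refcount_dec_sketch(atomic_int *refs)
{
	int old = atomic_fetch_sub_explicit(refs, 1, memory_order_relaxed);

	if (old == 1)
		fprintf(stderr, "refcount hit 0 without an explicit test\n");
	else if (old <= 0)		/* result is negative: saturate */
		atomic_store_explicit(refs, REFCOUNT_SATURATED,
				      memory_order_relaxed);
}
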
> > 
> > On first overflow detection, the refcount value is reset to INT_MIN / 2
> > (which serves as a saturation value), the offending process is killed,
> > and a report and stack trace are produced. When operations detect only
> > negative value results (such as changing an already saturated value),
> > saturation still happens but no notification is performed (since the
> > value was already saturated).
> > 
> > On the matter of races, since the entire range beyond INT_MAX but before
> > 0 is negative, every operation at INT_MIN / 2 will trap, leaving no
> > overflow-only race condition.
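
To put numbers on that: with 32-bit counters, INT_MIN / 2 is -2^30 =
-1,073,741,824, and the whole wrapped range above INT_MAX reads as
negative, so a saturated counter would need on the order of 2^30
further increments before it could look non-negative again; every one
of those operations still sees a negative value, traps, and
re-saturates, which is why no overflow-only race window remains.
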
> > 
> > As for performance, this implementation adds a single "js" instruction
> > to the regular execution flow of a copy of the standard atomic_t refcount
> > operations. (The non-"and_test" refcount_dec() function, which is uncommon
> > in regular refcount design patterns, has an additional "jz" instruction
> > to detect reaching exactly zero.) Since this is a forward jump, it is by
> > default the non-predicted path, which will be reinforced by dynamic branch
> > prediction. The result is this protection having virtually no measurable
> > change in performance over standard atomic_t operations. The error path,
> > located in .text.unlikely, saves the refcount location and then uses UD0
> > to fire a refcount exception handler, which resets the refcount, handles
> > reporting, and returns to regular execution. This keeps the changes to
> > .text size minimal, avoiding return jumps and open-coded calls to the
> > error reporting routine.
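
For anyone wanting to poke at the shape of this outside the kernel,
here is a rough user-space approximation (x86-64, GCC/Clang asm goto;
the out-of-line label stands in for the real .text.unlikely + UD0 +
exception-table machinery, and saturation/reporting is done inline
rather than in a trap handler, so this is a sketch, not the patch):

#include <limits.h>
#include <stdio.h>

/* A locked increment followed by a single forward "js" to an
 * out-of-line error path; the jump is normally not taken. */
static void refcount_inc_approx(int *refs)
{
	asm goto("lock incl %0\n\t"
		 "js %l[overflow]"
		 : /* no outputs */
		 : "m" (*refs)
		 : "cc", "memory"
		 : overflow);
	return;

overflow:
	/* Stand-in for the exception handler: saturate and report. */
	__atomic_store_n(refs, INT_MIN / 2, __ATOMIC_RELAXED);
	fprintf(stderr, "refcount overflow detected; saturated\n");
}

int main(void)
{
	int counter = INT_MAX;	/* the next increment wraps negative */

	refcount_inc_approx(&counter);
	printf("counter is now %d\n", counter);
	return 0;
}
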
> 
> Pretty nice!
> 

Yes, this is a relief.

Do we have a feeling for how feasible/difficult it will be for other
architectures to implement such a thing?
