Message-ID: <58FDDAC2.11341.175B5A99@pageexec.freemail.hu>
Date: Mon, 24 Apr 2017 13:00:18 +0200
From: "PaX Team" <pageexec@...email.hu>
To: Kees Cook <keescook@...omium.org>,
Peter Zijlstra <peterz@...radead.org>
CC: linux-kernel@...r.kernel.org, Eric Biggers <ebiggers3@...il.com>,
Christoph Hellwig <hch@...radead.org>,
"axboe@...nel.dk" <axboe@...nel.dk>,
James Bottomley <James.Bottomley@...senpartnership.com>,
Elena Reshetova <elena.reshetova@...el.com>,
Hans Liljestrand <ishkamiel@...il.com>,
David Windsor <dwindsor@...il.com>, x86@...nel.org,
Ingo Molnar <mingo@...nel.org>, Arnd Bergmann <arnd@...db.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Jann Horn <jann@...jh.net>, davem@...emloft.net,
linux-arch@...r.kernel.org, kernel-hardening@...ts.openwall.com
Subject: Re: [PATCH] x86/refcount: Implement fast refcount_t handling

On 24 Apr 2017 at 10:32, Peter Zijlstra wrote:
> On Fri, Apr 21, 2017 at 03:09:39PM -0700, Kees Cook wrote:
> > This patch ports the x86-specific atomic overflow handling from PaX's
> > PAX_REFCOUNT to the upstream refcount_t API. This is an updated version
> > from PaX that eliminates the saturation race condition by resetting the
> > atomic counter back to the INT_MAX saturation value on both overflow and
> > underflow. To win a race, a system would have to have INT_MAX threads
> > simultaneously overflow before the saturation handler runs.
note that the above is wrong (it even contradicts itself and the code).
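
(for context, here's a minimal plain-C sketch of the saturation scheme the
quoted text describes; the function name is made up and the compare-and-reset
stands in for the actual patch, which does the increment in x86 inline asm
and branches to an exception handler on signed overflow:)

  #include <limits.h>

  /* toy model, not the actual patch: the real code does the increment
   * with "lock incl" and a "js" branch to a fixup handler; the
   * compare-and-reset below models the same overflow->saturate logic
   * in portable C with gcc builtins. */
  typedef struct { int counter; } refcount_t;

  static void refcount_inc_sketch(refcount_t *r)
  {
          int new = __atomic_add_fetch(&r->counter, 1, __ATOMIC_RELAXED);

          /* INT_MAX + 1 wrapped to INT_MIN, i.e. the sign flag was set:
           * reset ("saturate") the counter back to INT_MAX.  the race
           * under discussion is the window between the wrapping add
           * above and this store, during which the counter is
           * transiently negative and other CPUs can keep operating on
           * it before the reset lands. */
          if (new < 0)
                  __atomic_store_n(&r->counter, INT_MAX, __ATOMIC_RELAXED);
  }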
> And is this impossible? Highly unlikely I'll grant you, but absolutely
> impossible?
here's my analysis from a while ago:
http://www.openwall.com/lists/kernel-hardening/2017/01/05/19
> Also, you forgot nr_cpus in your bound. Afaict the worst case here is
> O(nr_tasks + 3*nr_cpus).
what does nr_cpus have to do with winning the race?
> Because PaX does it, is not a correctness argument. And this really
> wants one.
heh, do you want to tell me how checking for a 0 refcount prevents
exploiting a bug?
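
(for readers following along, here's a toy userspace model, nothing like
real kernel code, of the pattern that question alludes to: with a counter
that can wrap, a repeatable reference leak turns into a use-after-free,
and a check for an already-zero count only fires after the wrap has
happened:)

  #include <stdio.h>

  int main(void)
  {
          /* one legitimate reference held by the owner */
          unsigned int refcount = 1;

          /* an attacker who can trigger a get() without a matching
           * put() 2^32 - 1 times wraps the counter back around to 0 */
          for (unsigned long i = 0; i < 0xffffffffUL; i++)
                  refcount++;

          printf("refcount = %u\n", refcount);    /* prints 0 */

          /* the object now looks dead while live pointers to it still
           * exist; the next put() frees it and yields a use-after-free.
           * checking "is the count already 0?" at this point is too
           * late - saturation is about keeping the counter from ever
           * getting here. */
          return 0;
  }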