Message-ID: <20190809153427.7dsds3u5j2gegp7z@willie-the-truck>
Date: Fri, 9 Aug 2019 16:34:28 +0100
From: Will Deacon <will@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, Kees Cook <keescook@...omium.org>,
Ingo Molnar <mingo@...nel.org>,
Elena Reshetova <elena.reshetova@...el.com>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Hanjun Guo <guohanjun@...wei.com>,
Jan Glauber <jglauber@...vell.com>
Subject: Re: [PATCH 0/6] Rework REFCOUNT_FULL using atomic_fetch_* operations
On Fri, Aug 02, 2019 at 08:49:47PM +0200, Peter Zijlstra wrote:
> On Fri, Aug 02, 2019 at 11:09:54AM +0100, Will Deacon wrote:
>
> > Although the revised implementation passes all of the lkdtm REFCOUNT
> > tests, there is a race condition introduced by the deferred saturation
> > whereby if INT_MIN + 2 tasks take a reference on a refcount at
> > REFCOUNT_MAX and are each preempted between detecting overflow and
> > writing the saturated value without being rescheduled, then another task
> > may end up erroneously freeing the object when it drops the refcount and
> > sees zero. It doesn't feel like a particularly realistic case to me, but
> > I thought I should mention it in case somebody else knows better.
>
> So my OCD has always found that hole objectionable. Also I suppose the
> cmpxchg ones are simpler to understand.
>
> Maybe make this fancy stuff depend on !FULL?
Hmm.
Right now, arm64 selects REFCOUNT_FULL, since I think it's important for
us to have this hardening enabled given the sorts of places we find
ourselves deployed. If the race above is a viable attack vector, then I'd
stick with the status quo, however Kees previously wrote this off as
"unrealistic":
https://lkml.kernel.org/r/CAGXu5jLXFA4=X5mC9ph9dZ0ZJaVkGXd2p1Vh8jH_EE15kVL6Hw@mail.gmail.com
and I'm inclined to agree with him given the conditions involved.
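
To spell out the window for anyone following along, the scheme in question
looks roughly like this. It's a minimal sketch of the fetch-add-then-saturate
idea rather than the exact code from the series, and refcount_inc_sketch()
is a name I'm using purely for illustration:

#include <linux/atomic.h>
#include <linux/bug.h>
#include <linux/refcount.h>

static inline void refcount_inc_sketch(refcount_t *r)
{
	int old = atomic_fetch_add_relaxed(1, &r->refs);

	if (unlikely(old < 0 || old + 1 < 0)) {
		/*
		 * RACE WINDOW: the counter has already wrapped past
		 * INT_MAX, but the saturating store hasn't landed yet.
		 * If enough tasks are preempted right here, one after
		 * another, the counter can pass back through zero and
		 * a refcount_dec_and_test() elsewhere will see zero
		 * and free the object underneath us.
		 */
		atomic_set(&r->refs, INT_MIN / 2);	/* saturate */
		WARN_ONCE(1, "refcount: saturated due to overflow\n");
	}
}

Closing that window completely would mean making the detect-and-saturate
step atomic, which is effectively what the cmpxchg-based FULL code buys you
at the cost of a retry loop.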
My understanding is that the current !FULL implementation (x86 only) simply
doesn't detect certain cases at all, such as increment-from-zero. That
strikes me as categorically different from being exposed to a theoretical
race condition that requires billions of tasks, each preempted one-by-one
within a window of a handful of instructions. Even then, we'll still WARN!
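
For contrast, a FULL-style checked increment looks more like the following
cmpxchg loop; again, just an illustrative sketch with a made-up name, not
the actual implementation:

static inline void refcount_inc_full_sketch(refcount_t *r)
{
	int old = atomic_read(&r->refs);

	do {
		if (unlikely(!old)) {
			/* increment-from-zero: likely use-after-free */
			WARN_ONCE(1, "refcount: increment on zero\n");
			return;
		}
		if (unlikely(old == INT_MAX))
			return;	/* already saturated; stay put */
	} while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + 1));
}

Because the checks happen before the new value is ever published, there's
no window in which a bad value is visible to anybody else; the trade-off is
the retry loop on the fast path, which is what this series is trying to
avoid.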
Will