Message-ID: <20190617200750.GB5565@dc5-eodlnx05.marvell.com>
Date:   Mon, 17 Jun 2019 20:07:54 +0000
From:   Jayachandran Chandrasekharan Nair <jnair@...vell.com>
To:     Will Deacon <will.deacon@....com>
CC:     Ard Biesheuvel <ard.biesheuvel@...aro.org>,
        Kees Cook <keescook@...omium.org>,
        "catalin.marinas@....com" <catalin.marinas@....com>,
        Jan Glauber <jglauber@...vell.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [RFC] Disable lockref on arm64

On Mon, Jun 17, 2019 at 06:26:20PM +0100, Will Deacon wrote:
> On Mon, Jun 17, 2019 at 01:33:19PM +0200, Ard Biesheuvel wrote:
> > On Sun, 16 Jun 2019 at 23:31, Kees Cook <keescook@...omium.org> wrote:
> > > On Sat, Jun 15, 2019 at 04:18:21PM +0200, Ard Biesheuvel wrote:
> > > > Yes, I am using the same saturation point as x86. In this example, I
> > > > am not entirely sure I understand why it matters, though: the atomics
> > > > guarantee that the write by CPU2 fails if CPU1 changed the value in
> > > > the meantime, regardless of which value it wrote.
> > > >
> > > > I think the concern is more related to the likelihood of another CPU
> > > > doing something nasty between the moment that the refcount overflows
> > > > and the moment that the handler pins it at INT_MIN/2, e.g.,
> > > >
> > > > > CPU 1                   CPU 2
> > > > > inc()
> > > > >   load INT_MAX
> > > > >   about to overflow?
> > > > >   yes
> > > > >
> > > > >   set to 0
> > > > >                          <insert exploit here>
> > > > >   set to INT_MIN/2
> > >
> > > Ah, gotcha, but the "set to 0" is really "set to INT_MAX+1" (not zero)
> > > if you're using the same saturation.
> > >
> > 
> > Of course. So there is no issue here: whatever manipulations are
> > racing with the overflow handler can never cause the counter to
> > unsaturate.
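
To spell out why, here is a minimal userspace sketch of a post-checked
increment in the style being discussed, using C11 atomics instead of the
kernel primitives (the names are illustrative, not the actual code):

        #include <limits.h>
        #include <stdatomic.h>

        static void overflow_handler(atomic_int *r)
        {
                atomic_store(r, INT_MIN / 2);   /* pin deep in negative space */
        }

        static void inc_post_checked(atomic_int *r)
        {
                int old = atomic_fetch_add(r, 1);        /* atomic wrap-around is defined */
                int new = (int)((unsigned int)old + 1u); /* avoid signed-overflow UB */

                /*
                 * A wrap past INT_MAX lands at INT_MIN, i.e. the "INT_MAX+1"
                 * above, which is negative. Increments racing in the window
                 * before the handler runs only move the value around inside
                 * the negative half, so the counter never comes back to a
                 * small positive value.
                 */
                if (new < 0)
                        overflow_handler(r);
        }
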
> > 
> > And actually, moving the checks before the stores is not as trivial as
> > I thought. E.g., for the LSE refcount_add case, we have
> > 
> >         "       ldadd           %w[i], w30, %[cval]\n"                  \
> >         "       adds            %w[i], %w[i], w30\n"                    \
> >         REFCOUNT_PRE_CHECK_ ## pre (w30))                               \
> >         REFCOUNT_POST_CHECK_ ## post                                    \
> > 
> > and changing this into load/test/store defeats the purpose of using
> > the LSE atomics in the first place.
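
To make the shape difference concrete, here is a userspace sketch of the
two variants (C11 atomics, illustrative names, not the kernel code): the
post-checked add stays a single unconditional atomic, which an LSE-capable
compiler can turn into one LDADD, while a pre-checked version has to
inspect the value before the store and so falls back to a
load/test/compare-exchange loop:

        #include <stdatomic.h>

        /* Post-check style: the add is unconditional, so with
         * -march=armv8.1-a the compiler can emit a single LDADD for it;
         * the checks run on the returned value afterwards. */
        int add_post_checked(atomic_int *v, int i)
        {
                int new = atomic_fetch_add(v, i) + i;
                /* ... overflow/saturation checks on 'new' would go here ... */
                return new;
        }

        /* Pre-check style: the old value must be inspected before the
         * store, which forces a load/test/compare-exchange loop, i.e.
         * exactly the shape that gives up the benefit of the LSE atomics. */
        int add_pre_checked(atomic_int *v, int i)
        {
                int old = atomic_load(v);
                int new;

                do {
                        /* check 'old' (and the would-be result) before committing */
                        new = old + i;
                } while (!atomic_compare_exchange_weak(v, &old, new));

                return new;
        }
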
> > 
> > On my single core TX2, the comparative performance is as follows
> > 
> > Baseline: REFCOUNT_TIMING test using REFCOUNT_FULL (LSE cmpxchg)
> >       191057942484      cycles                    #    2.207 GHz
> >       148447589402      instructions              #    0.78  insn per cycle
> > 
> >       86.568269904 seconds time elapsed
> > 
> > Upper bound: ATOMIC_TIMING
> >       116252672661      cycles                    #    2.207 GHz
> >        28089216452      instructions              #    0.24  insn per cycle
> > 
> >       52.689793525 seconds time elapsed
> > 
> > REFCOUNT_TIMING test using LSE atomics
> >       127060259162      cycles                    #    2.207 GHz
> 
> Ok, so assuming JC's complaint is valid, these numbers are compelling.

Let me try to point out the issues I see again, to make sure that we are
not talking just about micro-benchmarks.

The first issue: on arm64, REFCOUNT_FULL is 'select'-ed. There is
no option to switch to the plain atomics implementation or to an x86-like
compromise implementation without patching the kernel. Currently we are
stuck with a function call for what should be a single atomic instruction.

The second issue is that REFCOUNT_FULL uses an unbounded CAS loop, which
is problematic as the core count increases and contention goes up. Up to
some level of contention the CAS loop works fine, but when we get to the
order of a hundred CPUs it becomes an issue. The LDADD family of atomics
can be handled fairly well by hardware even under heavy contention, but
in the case of CAS (or LDXR/STXR) loops, getting this right in hardware
is much more difficult, and there is nothing in the arm64 ISA to manage
it. As discussed earlier in the thread, WFE does not work, YIELD is
troublesome, and there is no 'wait in low power for exclusive access'
hint instruction. This is not a TX2-specific issue.
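
Purely to illustrate this general point (this is not the kernel timing
test quoted above), something like the following self-contained toy can
be used to compare the two shapes under contention; the thread and
iteration counts are arbitrary:

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Illustrative contention test: NTHREADS threads hammer one
         * counter, either with a single atomic add or with a CAS loop.
         * On an LSE-capable arm64 machine, build with e.g.
         * -march=armv8.1-a -O2 -pthread and pass 1 to select the CAS path. */
        #define NTHREADS 64
        #define ITERS    (1 << 20)

        static atomic_int counter;
        static int use_cas;

        static void *worker(void *arg)
        {
                (void)arg;
                for (int i = 0; i < ITERS; i++) {
                        if (use_cas) {
                                int old = atomic_load(&counter);
                                while (!atomic_compare_exchange_weak(&counter,
                                                                     &old, old + 1))
                                        ;       /* retry until the CAS succeeds */
                        } else {
                                atomic_fetch_add(&counter, 1);  /* LDADD with LSE */
                        }
                }
                return NULL;
        }

        int main(int argc, char **argv)
        {
                pthread_t tid[NTHREADS];

                use_cas = (argc > 1 && atoi(argv[1]) != 0);

                for (int i = 0; i < NTHREADS; i++)
                        pthread_create(&tid[i], NULL, worker, NULL);
                for (int i = 0; i < NTHREADS; i++)
                        pthread_join(tid[i], NULL);

                printf("%s: counter = %d\n", use_cas ? "cas" : "ldadd",
                       atomic_load(&counter));
                return 0;
        }
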

The testcase I provided was not really a microbenchmark. It was a
simplified webserver testcase where multiple threads read a small file
in parallel. With the Ubuntu configuration (AppArmor enabled), and when
other things line up (I had made the file & directory non-writable), you
can see that refcount is the top function in the profile. I expect this
kind of situation to become more frequent as more subsystems move to
refcount_t.

> In particular, my understanding of this thread is that your optimised
> implementation doesn't actually sacrifice any precision; it just changes
> the saturation behaviour in a way that has no material impact. Kees, is that
> right?
> 
> If so, I'm not against having this for arm64, with the premise that we can
> hide the REFCOUNT_FULL option entirely given that it would only serve to
> confuse if exposed.

Thanks for looking into this! From the discussion it seems likely
that we can get a version of Ard's patch in, one that avoids the CAS
loop in most cases.

JC
