Message-ID: <20180215114927.GV25201@hirez.programming.kicks-ass.net>
Date:   Thu, 15 Feb 2018 12:49:27 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Cc:     Mark Rutland <mark.rutland@....com>,
        Will Deacon <will.deacon@....com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
        Ingo Molnar <mingo@...nel.org>
Subject: Re: arm64/v4.16-rc1: KASAN: use-after-free Read in finish_task_switch

On Wed, Feb 14, 2018 at 06:53:44PM +0000, Mathieu Desnoyers wrote:
> However, given the scenario involves multiple CPUs (one doing exit_mm(),
> the other doing context switch), the actual order of perceived load/store
> can be shuffled. And AFAIU nothing prevents the CPU from ordering the
> atomic_inc() done by mmgrab(mm) _after_ the store to current->mm.
> 
> I wonder if we should not simply add a smp_mb__after_atomic() into
> mmgrab() instead ? I see that e.g. futex.c does:

Don't think so; the futex case is really rather special and I suspect
this one is too. I would much rather have explicit barriers with
comments than implicit works-by-magic.

As per the rationale used for refcount_t, increments should be
unordered, because you ACQUIRE your object _before_ you can do the
increment.

The futex thing is simply abusing a bunch of implied barriers and
patching up the holes in paths that didn't already imply a barrier in
order to avoid having to add explicit barriers (which had measurable
performance issues).

And here we have explicit ordering outside of the reference counting
too: we want to ensure the reference is incremented before we modify
a second object.

This ordering is not at all related to acquiring the reference, so
bundling it seems odd.
