Message-Id: <1623911501.q97zemobmw.astroid@bobo.none>
Date:   Thu, 17 Jun 2021 16:51:49 +1000
From:   Nicholas Piggin <npiggin@...il.com>
To:     Andy Lutomirski <luto@...nel.org>,
        "Peter Zijlstra (Intel)" <peterz@...radead.org>,
        Rik van Riel <riel@...riel.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Dave Hansen <dave.hansen@...el.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-mm@...ck.org,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        the arch/x86 maintainers <x86@...nel.org>
Subject: Re: [PATCH 4/8] membarrier: Make the post-switch-mm barrier explicit

Excerpts from Andy Lutomirski's message of June 17, 2021 3:32 pm:
> On Wed, Jun 16, 2021, at 7:57 PM, Andy Lutomirski wrote:
>> 
>> 
>> On Wed, Jun 16, 2021, at 6:37 PM, Nicholas Piggin wrote:
>> > Excerpts from Andy Lutomirski's message of June 17, 2021 4:41 am:
>> > > On 6/16/21 12:35 AM, Peter Zijlstra wrote:
>> > >> On Wed, Jun 16, 2021 at 02:19:49PM +1000, Nicholas Piggin wrote:
>> > >>> Excerpts from Andy Lutomirski's message of June 16, 2021 1:21 pm:
>> > >>>> membarrier() needs a barrier after any CPU changes mm.  There is currently
>> > >>>> a comment explaining why this barrier probably exists in all cases.  This
>> > >>>> is very fragile -- any change to the relevant parts of the scheduler
>> > >>>> might get rid of these barriers, and it's not really clear to me that
>> > >>>> the barrier actually exists in all necessary cases.
>> > >>>
>> > >>> The comments and barriers in the mmdrop() hunks? I don't see what is 
>> > >>> fragile or maybe-buggy about this. The barrier definitely exists.
>> > >>>
>> > >>> And any change can change anything, that doesn't make it fragile. My
>> > >>> lazy tlb refcounting change avoids the mmdrop in some cases, but it
>> > >>> replaces it with smp_mb for example.
>> > >> 
>> > >> I'm with Nick again on this. You're adding extra barriers for no
>> > >> discernible reason; that's not generally encouraged, seeing how extra
>> > >> barriers are extra slow.
>> > >> 
>> > >> Both mmdrop() itself, as well as the callsite have comments saying how
>> > >> membarrier relies on the implied barrier, what's fragile about that?
>> > >> 
>> > > 
>> > > My real motivation is that mmgrab() and mmdrop() don't actually need to
>> > > be full barriers.  The current implementation has them being full
>> > > barriers, and the current implementation is quite slow.  So let's try
>> > > that commit message again:
>> > > 
>> > > membarrier() needs a barrier after any CPU changes mm.  There is currently
>> > > a comment explaining why this barrier probably exists in all cases. The
>> > > logic is based on ensuring that the barrier exists on every control flow
>> > > path through the scheduler.  It also relies on mmgrab() and mmdrop() being
>> > > full barriers.
>> > > 
>> > > mmgrab() and mmdrop() would be better if they were not full barriers.  As a
>> > > trivial optimization, mmgrab() could use a relaxed atomic and mmdrop()
>> > > could use a release on architectures that have these operations.
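
For illustration, a minimal sketch of what a release-ordered mmdrop()
might look like, modelled on refcount_dec_and_test(); this is a
hypothetical sketch, not the actual patch (mmgrab() is just an
atomic_inc() of mm->mm_count, so the decrement is the interesting side):

static inline void mmdrop(struct mm_struct *mm)
{
	/*
	 * Release: this CPU's prior accesses to the mm must
	 * happen-before the eventual free...
	 */
	if (unlikely(atomic_fetch_sub_release(1, &mm->mm_count) == 1)) {
		/*
		 * ...and acquire on the zero case, so the free
		 * happens-after every other CPU's final access.
		 */
		smp_acquire__after_ctrl_dep();
		__mmdrop(mm);
	}
}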
>> > 
>> > I'm not against the idea, I've looked at something similar before (not
>> > for mmdrop but a different primitive). Also my lazy tlb shootdown series 
>> > could possibly take advantage of this, I might cherry pick it and test 
>> > performance :)
>> > 
>> > I don't think it belongs in this series though. Should go together with
>> > something that takes advantage of it.
>> 
>> I’m going to see if I can get hazard pointers into shape quickly.
> 
> Here it is.  Not even boot tested!
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=sched/lazymm&id=ecc3992c36cb88087df9c537e2326efb51c95e31
> 
> Nick, I think you can accomplish much the same thing as your patch by:
> 
> #define for_each_possible_lazymm_cpu while (false)

I'm not sure what you mean? For powerpc, other CPUs can be using the mm 
as lazy at this point. I must be missing something.
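
For illustration only, the two ends of the spectrum might look like the
following (hypothetical, mutually exclusive definitions with made-up
parameters; the quoted one-liner above takes none, and whether
mm_cpumask() actually tracks lazy users is arch-specific):

/* An arch where no CPU can hold the mm lazily by this point:
 * the walk compiles away entirely. */
#define for_each_possible_lazymm_cpu(cpu, mm)	while (false)

/* An arch like powerpc, where remote CPUs may still be running the
 * mm as their lazy active_mm, would have to visit every CPU that
 * might have it cached: */
#define for_each_possible_lazymm_cpu(cpu, mm)	\
	for_each_cpu((cpu), mm_cpumask(mm))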

> 
> although a more clever definition might be even more performant.
> 
> I would appreciate everyone's thoughts as to whether this scheme is sane.

powerpc has no use for it: after the series in akpm's tree, there's
just a small change required to radix TLB flushing to make the final
flush IPI also purge lazies, and then the shootdown scheme runs with
zero additional IPIs, so there's essentially no benefit to the hazard
pointer games. I have found the additional IPIs aren't bad anyway, so
they're not something we'd bother trying to optimise away on hash,
which is slowly being de-prioritized.
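
Roughly, the shootdown idea (a hypothetical sketch, not the actual
powerpc code): the final flush IPI doubles as an eviction of any lazy
users, so no reference counting of lazy mms is needed at all.

static void do_shoot_lazy_tlb(void *arg)
{
	struct mm_struct *mm = arg;

	/* If this CPU holds the dying mm only as a lazy active_mm,
	 * kick it over to init_mm so the mm can be freed. */
	if (current->active_mm == mm && !current->mm) {
		current->active_mm = &init_mm;
		switch_mm(mm, &init_mm, current);
	}
}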

I must say, I still see active_mm featuring prominently in your patch,
which comes as a surprise. I would have thought some preparation and
cleanup work to fix the x86 deficiencies you were talking about should
go in first; I'm eager to see those. But either way, I don't see a
fundamental reason this couldn't be done to support archs for which
the standard or shootdown refcounting options aren't sufficient.

IIRC I didn't see a fundamental hole in it last time you posted the
idea but I admittedly didn't go through it super carefully.

Thanks,
Nick

> 
> Paul, I'm adding you for two reasons.  First, you seem to enjoy
> bizarre locking schemes.  Second, maybe RCU could actually work here.
> The basic idea is that we want to keep an mm_struct from being freed
> at an inopportune time.  The problem with naively using RCU is that
> each CPU can use one single mm_struct while in an idle extended
> quiescent state (but not a user extended quiescent state).  So
> rcu_read_lock() is right out.  If RCU could understand this concept,
> then maybe it could help us, but this seems a bit out of scope for
> RCU.
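
To make the idea concrete, a minimal hazard-pointer sketch
(hypothetical, not the actual commit linked above): each CPU publishes
the mm it is about to use lazily, and the freeing side scans every
CPU's slot before freeing.

static DEFINE_PER_CPU(struct mm_struct *, lazymm_hazard);

static void lazymm_hazard_set(struct mm_struct *mm)
{
	WRITE_ONCE(*this_cpu_ptr(&lazymm_hazard), mm);
	smp_mb();	/* publish the hazard before using the mm */
}

static bool mm_has_lazy_users(struct mm_struct *mm)
{
	int cpu;

	/* The mm may be freed only if no CPU has it published. */
	for_each_possible_cpu(cpu) {
		if (READ_ONCE(per_cpu(lazymm_hazard, cpu)) == mm)
			return true;
	}
	return false;
}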
> 
> --Andy
> 
