Message-ID: <20210618033521.GE4397@paulmck-ThinkPad-P17-Gen-1>
Date:   Thu, 17 Jun 2021 20:35:21 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Andy Lutomirski <luto@...nel.org>
Cc:     Nicholas Piggin <npiggin@...il.com>,
        "Peter Zijlstra (Intel)" <peterz@...radead.org>,
        Rik van Riel <riel@...riel.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Dave Hansen <dave.hansen@...el.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-mm@...ck.org,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        the arch/x86 maintainers <x86@...nel.org>
Subject: Re: [PATCH 4/8] membarrier: Make the post-switch-mm barrier explicit

On Thu, Jun 17, 2021 at 05:06:02PM -0700, Andy Lutomirski wrote:
> On 6/17/21 8:02 AM, Paul E. McKenney wrote:
> > On Wed, Jun 16, 2021 at 10:32:15PM -0700, Andy Lutomirski wrote:
> >> I would appreciate everyone's thoughts as to whether this scheme is sane.
> >>
> >> Paul, I'm adding you for two reasons.  First, you seem to enjoy bizarre locking schemes.  Secondly, because maybe RCU could actually work here.  The basic idea is that we want to keep an mm_struct from being freed at an inopportune time.  The problem with naively using RCU is that each CPU can use one single mm_struct while in an idle extended quiescent state (but not a user extended quiescent state).  So rcu_read_lock() is right out.  If RCU could understand this concept, then maybe it could help us, but this seems a bit out of scope for RCU.
> > 
> > OK, I should look at your patch, but that will be after morning meetings.
> > 
> > On RCU and idle, much of the idle code now allows rcu_read_lock() to be
> > used directly, thanks to Peter's recent work.  Any sort of interrupt or NMI
> > from idle can also use rcu_read_lock(), including the IPIs that are now
> > done directly from idle.  RCU_NONIDLE() makes RCU pay attention to the
> > code supplied as its sole argument.
> > 
> > Or is your patch really having the CPU expect a mm_struct to stick around
> > across the full idle sojourn, and without the assistance of mmgrab()
> > and mmdrop()?
> 
> I really do expect it to stick around across the full idle sojourn.
> Unless RCU is more magical than I think it is, this means I can't use RCU.

You are quite correct.  And unfortunately, making RCU pay attention
across the full idle sojourn would make the battery-powered embedded
guys quite annoyed.  And would result in OOM.  You could use something
like percpu_ref, but at a large memory expense.  You could use something
like SRCU or Tasks Trace RCU, but this would increase the overhead of
freeing mm_struct structures.

Your use of per-CPU pointers seems sound in principle, but I am uncertain
of some of the corner cases.  And either current mainline gained an
mmdrop-balance bug or rcutorture is also uncertain of those corner cases.
But again, the overall concept looks quite good.  Just some bugs to
be found and fixed, whether in this patch or in current mainline.
As always...  ;-)

						Thanx, Paul
