Message-ID: <20210503043430.GA16059@lespinasse.org>
Date: Sun, 2 May 2021 21:34:30 -0700
From: Michel Lespinasse <michel@...pinasse.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: Michel Lespinasse <michel@...pinasse.org>,
Andy Lutomirski <luto@...nel.org>,
Linux-MM <linux-mm@...ck.org>,
Laurent Dufour <ldufour@...ux.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Michal Hocko <mhocko@...e.com>,
Matthew Wilcox <willy@...radead.org>,
Rik van Riel <riel@...riel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Joel Fernandes <joelaf@...gle.com>,
Rom Lemarchand <romlem@...gle.com>,
Linux-Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 13/37] mm: implement speculative handling in
__handle_mm_fault().
On Sun, May 02, 2021 at 08:40:49PM -0700, Paul E. McKenney wrote:
> @@ -634,6 +644,12 @@ do { \
> * sections, invocation of the corresponding RCU callback is deferred
> * until after all the other CPUs exit their critical sections.
> *
> + * In recent kernels, synchronize_rcu() and call_rcu() also wait for
> + * regions of code with preemption disabled, including regions of code
> + * with interrupts or softirqs disabled. If your kernel is old enough
> + * for synchronize_sched() to be defined, only code enclosed within
> + * rcu_read_lock() and rcu_read_unlock() is guaranteed to be waited for.
> + *
> * Note, however, that RCU callbacks are permitted to run concurrently
> * with new RCU read-side critical sections. One way that this can happen
> * is via the following sequence of events: (1) CPU 0 enters an RCU
You still have "old enough" / "recent kernels" here. But maybe it's OK
given that you added relevant version numbers elsewhere.
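For reference, the reader/updater pairing the quoted comment describes can be sketched as below. This is a kernel-style illustration, not standalone code; `gp`, `struct foo`, `reader()`, and `updater()` are made-up names, and the updater is assumed to hold whatever update-side lock serializes writers.

```c
struct foo {
	int a;
};

struct foo __rcu *gp;	/* RCU-protected pointer (hypothetical) */

/* Reader: the grace-period guarantee covers the region between
 * rcu_read_lock() and rcu_read_unlock() (and, on recent kernels,
 * any region with preemption, interrupts, or softirqs disabled). */
int reader(void)
{
	int val;

	rcu_read_lock();
	val = rcu_dereference(gp)->a;
	rcu_read_unlock();
	return val;
}

/* Updater: synchronize_rcu() returns only after all pre-existing
 * readers have exited their critical sections, so freeing the old
 * structure afterwards is safe.  Assumes the caller serializes
 * concurrent updaters. */
void updater(struct foo *newp)
{
	struct foo *old = rcu_dereference_protected(gp, 1);

	rcu_assign_pointer(gp, newp);
	synchronize_rcu();	/* wait for pre-existing readers */
	kfree(old);		/* no reader can still see 'old' */
}
```

Note that, per the quoted text, RCU callbacks (and code after synchronize_rcu()) may still run concurrently with *new* read-side critical sections that began after the grace period started; only pre-existing readers are waited for.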
Everything else looks great to me.
Thanks,
--
Michel "walken" Lespinasse