Message-ID: <601c504a381de76d6e39ae2fe86456c411c8b62e.camel@redhat.com>
Date: Tue, 22 Feb 2022 17:16:53 +0100
From: Nicolas Saenz Julienne <nsaenzju@...hat.com>
To: Marcelo Tosatti <mtosatti@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Minchan Kim <minchan@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Juri Lelli <juril@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
"Paul E. McKenney" <paulmck@...nel.org>
Subject: Re: [patch 1/2] mm: protect local lock sections with rcu_read_lock
(on RT)
On Tue, 2022-02-22 at 12:51 -0300, Marcelo Tosatti wrote:
> On Tue, Feb 22, 2022 at 04:21:26PM +0100, Nicolas Saenz Julienne wrote:
> > On Tue, 2022-02-22 at 11:47 -0300, Marcelo Tosatti wrote:
> > > For the per-CPU LRU page vectors, augment the local lock protected
> > > code sections with rcu_read_lock.
> > >
> > > This makes it possible to replace the queueing of work items on all
> > > CPUs by synchronize_rcu (which is necessary to run FIFO:1 applications
> > > uninterrupted on isolated CPUs).
> >
> > I don't think this is needed. In RT local_locks use a spinlock. See
> > kernel/locking/spinlock_rt.c:
> >
> > "The RT [spinlock] substitutions explicitly disable migration and take
> > rcu_read_lock() across the lock held section."
>
> Nice! Then the migrate_disable from __local_lock and friends seems unnecessary as
> well
>
> #define __local_lock(__lock) \
> do { \
> migrate_disable(); \
> spin_lock(this_cpu_ptr((__lock))); \
> } while (0)
>
It's needed, as otherwise you might migrate between picking up the per-CPU
pointer and taking the lock:
cpu1_lock = this_cpu_ptr(__lock);
// migrate here to cpu2
spin_lock(cpu1_lock);
// unprotected write into cpu2 lists
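
The lock's own migrate_disable() doesn't help there, since it only takes
effect once the lock is already acquired. Roughly, the RT substitution
looks like this (simplified sketch of kernel/locking/spinlock_rt.c, from
memory, so details may differ):

static __always_inline void __rt_spin_lock(spinlock_t *lock)
{
	rtlock_lock(&lock->lock);	/* acquire the underlying rtmutex */
	rcu_read_lock();		/* held across the locked section */
	migrate_disable();		/* only once the lock is held */
}

So the migrate_disable() in __local_lock() is what keeps the
this_cpu_ptr() evaluation and the spin_lock() on the same CPU.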
Regards,
--
Nicolás Sáenz