Message-ID: <20210922092039.2j6efnkhmfxuzjnx@linutronix.de>
Date: Wed, 22 Sep 2021 11:20:39 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: nsaenzju@...hat.com
Cc: Peter Zijlstra <peterz@...radead.org>, akpm@...ux-foundation.org,
frederic@...nel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, tglx@...utronix.de, cl@...ux.com,
juri.lelli@...hat.com, mingo@...hat.com, mtosatti@...hat.com,
nilal@...hat.com, mgorman@...e.de, ppandit@...hat.com,
williams@...hat.com, anna-maria@...utronix.de,
linux-rt-users@...r.kernel.org
Subject: Re: [PATCH 2/6] mm/swap: Introduce alternative per-cpu LRU cache locking
On 2021-09-22 10:47:07 [+0200], nsaenzju@...hat.com wrote:
> > *why* use migrate_disable(), that's horrible!
>
> I was trying to be mindful of RT. They don't appreciate people taking spinlocks
> just after having disabled preemption.
>
> I think getting local_lock(&locks->local) is my only option then. But it adds
> an extra redundant spinlock in the RT+NOHZ_FULL case.
spin_lock() does not disable preemption on PREEMPT_RT. You don't
disable preemption on purpose, or did I miss that?
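FWIW, a minimal sketch of the pattern in question (the struct and
function names here are made up for illustration, not taken from your
patch):

#include <linux/local_lock.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

/*
 * Per-CPU state guarded by a local_lock_t, with a spinlock nested
 * underneath it so that remote CPUs can drain the state as well.
 */
struct pcp_state {
	local_lock_t	lock;		/* CPU-local serialization */
	spinlock_t	remote_lock;	/* also taken by remote drainers */
	unsigned long	nr;
};

static DEFINE_PER_CPU(struct pcp_state, pcp_state) = {
	.lock		= INIT_LOCAL_LOCK(lock),
	.remote_lock	= __SPIN_LOCK_UNLOCKED(pcp_state.remote_lock),
};

static void pcp_state_inc(void)
{
	struct pcp_state *pcp;

	/*
	 * On !PREEMPT_RT this disables preemption and pins the task
	 * to the CPU. On PREEMPT_RT it acquires a per-CPU
	 * rtmutex-based lock instead and preemption stays enabled.
	 */
	local_lock(&pcp_state.lock);
	pcp = this_cpu_ptr(&pcp_state);

	/*
	 * On PREEMPT_RT spin_lock() is also a sleeping rtmutex-based
	 * lock, so nesting it under local_lock() is not "a spinlock
	 * inside a preemption-disabled section" there.
	 */
	spin_lock(&pcp->remote_lock);
	pcp->nr++;
	spin_unlock(&pcp->remote_lock);

	local_unlock(&pcp_state.lock);
}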
Sebastian