Message-ID: <7c8594d5943f99bc647951269606ae24dd1b37de.camel@redhat.com>
Date:   Wed, 22 Sep 2021 11:50:07 +0200
From:   nsaenzju@...hat.com
To:     Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc:     Peter Zijlstra <peterz@...radead.org>, akpm@...ux-foundation.org,
        frederic@...nel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, tglx@...utronix.de, cl@...ux.com,
        juri.lelli@...hat.com, mingo@...hat.com, mtosatti@...hat.com,
        nilal@...hat.com, mgorman@...e.de, ppandit@...hat.com,
        williams@...hat.com, anna-maria@...utronix.de,
        linux-rt-users@...r.kernel.org
Subject: Re: [PATCH 2/6] mm/swap: Introduce alternative per-cpu LRU cache
 locking

On Wed, 2021-09-22 at 11:20 +0200, Sebastian Andrzej Siewior wrote:
> On 2021-09-22 10:47:07 [+0200], nsaenzju@...hat.com wrote:
> > > *why* use migrate_disable(), that's horrible!
> > 
> > I was trying to be mindful of RT. They don't appreciate people taking spinlocks
> > just after having disabled preemption.
> > 
> > I think taking local_lock(&locks->local) is my only option then. But it adds
> > a redundant spinlock in the RT+NOHZ_FULL case.
> 
> spin_lock() does not disable preemption on PREEMPT_RT. You don't
> disable preemption on purpose, or did I miss that?

Sorry, my message wasn't clear. Adding code for context:

+ static inline void lru_cache_lock(struct lru_cache_locks *locks)
+ {
+ 	if (static_branch_unlikely(&remote_pcpu_cache_access)) {
+ 		/* Avoid migration between this_cpu_ptr() and spin_lock() */
+ 		migrate_disable();

IIUC PeterZ would've preferred that I disable preemption here. What I meant
to explain is that, given the spinlock below, I chose migrate_disable() over
preempt_disable() to cater to RT.

+ 		spin_lock(this_cpu_ptr(&locks->spin));
+ 	} else {
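
To spell out the race that comment closes: without the migrate_disable(),
the task could be migrated right after this_cpu_ptr() resolves the pointer.
A hypothetical interleaving, for illustration only (not code from the patch):

	lock = this_cpu_ptr(&locks->spin);	/* resolves, say, CPU0's lock */
	/* ... preempted and migrated to CPU1 ... */
	spin_lock(lock);	/* takes CPU0's lock, but later this_cpu accesses hit CPU1 */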


So, to make both worlds happy, I think the only option left is using the
local_lock (at the expense of extra overhead in the RT+NOHZ_FULL case):

+ 	if (static_branch_unlikely(&remote_pcpu_cache_access)) {
+ 		/* Local lock avoids migration between this_cpu_ptr() and spin_lock() */
+ 		local_lock(&locks->local);
+ 		spin_lock(this_cpu_ptr(&locks->spin));
+ 	} else {
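
For completeness, the full lock/unlock pair under that scheme would look
roughly like this. The else branch and the whole unlock side are a sketch of
the parts elided above, not a verbatim quote of the patch:

static inline void lru_cache_lock(struct lru_cache_locks *locks)
{
	if (static_branch_unlikely(&remote_pcpu_cache_access)) {
		/* local_lock() pins us to the CPU, so this_cpu_ptr() stays stable */
		local_lock(&locks->local);
		spin_lock(this_cpu_ptr(&locks->spin));
	} else {
		local_lock(&locks->local);
	}
}

static inline void lru_cache_unlock(struct lru_cache_locks *locks)
{
	if (static_branch_unlikely(&remote_pcpu_cache_access)) {
		spin_unlock(this_cpu_ptr(&locks->spin));
		local_unlock(&locks->local);
	} else {
		local_unlock(&locks->local);
	}
}

(This assumes remote_pcpu_cache_access can't flip between lock and unlock;
the series would have to guarantee that.)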

-- 
Nicolás Sáenz
