Message-ID: <YhUGl2F6gZxaNA7v@fuller.cnet>
Date:   Tue, 22 Feb 2022 12:51:51 -0300
From:   Marcelo Tosatti <mtosatti@...hat.com>
To:     Nicolas Saenz Julienne <nsaenzju@...hat.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Minchan Kim <minchan@...nel.org>,
        Matthew Wilcox <willy@...radead.org>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Juri Lelli <juril@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        "Paul E. McKenney" <paulmck@...nel.org>
Subject: Re: [patch 1/2] mm: protect local lock sections with rcu_read_lock
 (on RT)

On Tue, Feb 22, 2022 at 04:21:26PM +0100, Nicolas Saenz Julienne wrote:
> On Tue, 2022-02-22 at 11:47 -0300, Marcelo Tosatti wrote:
> > For the per-CPU LRU page vectors, augment the local lock protected
> > code sections with rcu_read_lock.
> > 
> > This makes it possible to replace the queueing of work items on all 
> > CPUs by synchronize_rcu (which is necessary to run FIFO:1 applications
> > uninterrupted on isolated CPUs).
> 
> I don't think this is needed. In RT local_locks use a spinlock. See
> kernel/locking/spinlock_rt.c:
> 
> "The RT [spinlock] substitutions explicitly disable migration and take
> rcu_read_lock() across the lock held section."

Nice! Then the migrate_disable() from __local_lock() and friends seems
unnecessary as well:

#define __local_lock(__lock)                                    \
        do {                                                    \
                migrate_disable();                              \
                spin_lock(this_cpu_ptr((__lock)));              \
        } while (0)

Since:

static __always_inline void __rt_spin_lock(spinlock_t *lock)
{
        rtlock_might_resched();
        rtlock_lock(&lock->lock); 
        rcu_read_lock();
        migrate_disable();
}
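
That is, on RT something like the sketch below might be enough (untested,
and assuming it is acceptable to evaluate this_cpu_ptr() before migration
is disabled inside __rt_spin_lock()):

/* Untested RT-only sketch: drop the outer migrate_disable(), relying on
 * __rt_spin_lock() to disable migration and take rcu_read_lock() itself.
 */
#define __local_lock(__lock)                                    \
        do {                                                    \
                spin_lock(this_cpu_ptr((__lock)));              \
        } while (0)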

Will resend -v2.
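
For reference, the change described in the patch header is roughly the
following. This is an illustrative sketch only (function and per-CPU
variable names assumed), not the actual mm/swap.c or patch code:

static DEFINE_PER_CPU(struct work_struct, drain_work);

/* Today (roughly): interrupt every CPU, including isolated ones, by
 * queueing a drain work item on each of them and waiting for completion.
 */
static void drain_all_with_work(void)
{
        int cpu;

        for_each_online_cpu(cpu)
                queue_work_on(cpu, mm_percpu_wq, &per_cpu(drain_work, cpu));
        for_each_online_cpu(cpu)
                flush_work(&per_cpu(drain_work, cpu));
}

/* With every local_lock section also being an RCU read-side section (as is
 * already the case on RT), a remote CPU can instead wait for all in-flight
 * sections to finish without disturbing isolated CPUs.
 */
static void drain_all_with_rcu(void)
{
        /* ... publish the per-CPU pagevecs for remote draining ... */
        synchronize_rcu();
        /*
         * No CPU is inside a local_lock()/rcu_read_lock() section anymore,
         * so the remote CPU can drain the published pagevecs itself.
         */
}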

