Message-ID: <Z6Z2l9uovxAiED6q@google.com>
Date: Fri, 7 Feb 2025 21:09:43 +0000
From: Yosry Ahmed <yosry.ahmed@...ux.dev>
To: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	Minchan Kim <minchan@...nel.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, Kairui Song <ryncsn@...il.com>
Subject: Re: [PATCHv4 14/17] zsmalloc: make zspage lock preemptible

On Fri, Feb 07, 2025 at 11:48:55AM +0900, Sergey Senozhatsky wrote:
> On (25/02/06 16:19), Yosry Ahmed wrote:
> > > static void zspage_read_lock(struct zspage *zspage)
> > > {
> > >         atomic_t *lock = &zspage->lock;
> > >         int old = atomic_read_acquire(lock);
> > > 
> > >         do {
> > >                 /*
> > >                  * old must not still be ZS_PAGE_WRLOCKED when the
> > >                  * cmpxchg runs, or a reader could "acquire" a page
> > >                  * that is still write-locked.
> > >                  */
> > >                 while (old == ZS_PAGE_WRLOCKED) {
> > >                         cpu_relax();
> > >                         old = atomic_read_acquire(lock);
> > >                 }
> > >         } while (!atomic_try_cmpxchg_acquire(lock, &old, old + 1));
> > > 
> > > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > >         rwsem_acquire_read(&zspage->lockdep_map, 0, 0, _RET_IP_);
> > > #endif
> > > }
> > > 
> > > static void zspage_read_unlock(struct zspage *zspage)
> > > {
> > >         atomic_dec_return_release(&zspage->lock);
> > > 
> > > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > >         rwsem_release(&zspage->lockdep_map, _RET_IP_);
> > > #endif
> > > }
> > > 
> > > static bool zspage_try_write_lock(struct zspage *zspage)
> > > {
> > >         atomic_t *lock = &zspage->lock;
> > >         int old = ZS_PAGE_UNLOCKED;
> > > 
> > >         /*
> > >          * Hold the write lock non-preemptibly so spinning readers
> > >          * never wait on a scheduled-out lock holder.
> > >          */
> > >         preempt_disable();
> > >         if (atomic_try_cmpxchg_acquire(lock, &old, ZS_PAGE_WRLOCKED)) {
> > > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > >                 rwsem_acquire(&zspage->lockdep_map, 0, 0, _RET_IP_);
> > > #endif
> > >                 return true;
> > >         }
> > > 
> > >         preempt_enable();
> > >         return false;
> > > }
> > > 
> > > static void zspage_write_unlock(struct zspage *zspage)
> > > {
> > >         atomic_set_release(&zspage->lock, ZS_PAGE_UNLOCKED);
> > > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > >         rwsem_release(&zspage->lockdep_map, _RET_IP_);
> > > #endif
> > >         preempt_enable();
> > > }
> > > ---
> > > 
> > > Maybe I'll just copy-paste the locking rules list; a list is always cleaner.
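
(Editorial sketch of how the helpers quoted above are meant to be used;
zs_read_object() and zs_migrate_zspage() are hypothetical caller names,
not functions from the patch. Readers may be preempted while holding the
lock; the writer runs with preemption disabled, so spinning readers never
wait on a scheduled-out task.)

static void zs_read_object(struct zspage *zspage)
{
        zspage_read_lock(zspage);
        /* ... access the zspage; being preempted here is fine ... */
        zspage_read_unlock(zspage);
}

static bool zs_migrate_zspage(struct zspage *zspage)
{
        if (!zspage_try_write_lock(zspage))
                return false;   /* readers hold it; caller retries later */
        /* ... move objects; preemption is off, keep this short ... */
        zspage_write_unlock(zspage);
        return true;
}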
> > 
> > Thanks. I think it would be nice if we could also get someone with
> > locking expertise to take a look at this.
> 
> Sure.
> 
> I moved the lockdep acquire/release before the atomic ops (except for
> the try-lock), as Sebastian suggested in the zram sub-thread.
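
(For illustration, the read-lock path with that reordering applied might
look as follows -- a sketch derived from the v4 snippet above, not the
actual next version:)

static void zspage_read_lock(struct zspage *zspage)
{
        atomic_t *lock = &zspage->lock;
        int old;

#ifdef CONFIG_DEBUG_LOCK_ALLOC
        /* annotate first, as lockdep does for ordinary locks */
        rwsem_acquire_read(&zspage->lockdep_map, 0, 0, _RET_IP_);
#endif

        old = atomic_read_acquire(lock);
        do {
                while (old == ZS_PAGE_WRLOCKED) {
                        cpu_relax();
                        old = atomic_read_acquire(lock);
                }
        } while (!atomic_try_cmpxchg_acquire(lock, &old, old + 1));
}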
> 
> [..]
> > Seems like we have to compromise either way: custom locking, or we
> > enter a new complexity realm with RCU freeing.
> 
> Let's take the blue pill? :)
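
(To make the alternative concrete, the RCU direction would look roughly
like this -- a sketch of the idea only, nothing from the series:)

/*
 * Hypothetical: struct zspage would grow an rcu_head ("rcu" below),
 * and freeing would be deferred past all lock-free readers.
 */
static void zspage_rcu_free(struct rcu_head *head)
{
        struct zspage *zspage = container_of(head, struct zspage, rcu);

        kmem_cache_free(zspage_cachep, zspage); /* assumed cache name */
}

/*
 * The free side becomes call_rcu(&zspage->rcu, zspage_rcu_free), and
 * every reader must then run under rcu_read_lock() -- auditing all of
 * those paths is the "new complexity realm".
 */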

Can we do some perf testing to make sure this custom locking is not
regressing performance (selfishly I'd like some zswap testing too)?

Perhaps Kairui can help with that since he was already testing this
series.
