Message-ID: <wnffho5jguo24wfy3qv5tvovoargezbu4kcvpk43ludrhyfo6i@6ogtvk5ivfjc>
Date: Fri, 7 Feb 2025 11:48:55 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Yosry Ahmed <yosry.ahmed@...ux.dev>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>, 
	Andrew Morton <akpm@...ux-foundation.org>, Minchan Kim <minchan@...nel.org>, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCHv4 14/17] zsmalloc: make zspage lock preemptible

On (25/02/06 16:19), Yosry Ahmed wrote:
> > static void zspage_read_lock(struct zspage *zspage)
> > {
> >         atomic_t *lock = &zspage->lock;
> >         int old = atomic_read_acquire(lock);
> > 
> >         /* Spin while a writer holds the lock, then take a reader ref. */
> >         do {
> >                 if (old == ZS_PAGE_WRLOCKED) {
> >                         cpu_relax();
> >                         old = atomic_read_acquire(lock);
> >                         continue;
> >                 }
> >         } while (!atomic_try_cmpxchg_acquire(lock, &old, old + 1));
> > 
> > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> >         rwsem_acquire_read(&zspage->lockdep_map, 0, 0, _RET_IP_);
> > #endif
> > }
> > 
> > static void zspage_read_unlock(struct zspage *zspage)
> > {
> >         atomic_dec_return_release(&zspage->lock);
> > 
> > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> >         rwsem_release(&zspage->lockdep_map, _RET_IP_);
> > #endif
> > }
> > 
> > static bool zspage_try_write_lock(struct zspage *zspage)
> > {
> >         atomic_t *lock = &zspage->lock;
> >         int old = ZS_PAGE_UNLOCKED;
> > 
> >         /*
> >          * The writer succeeds only when there are no readers and no
> >          * writer; preemption stays off for the whole write section.
> >          */
> >         preempt_disable();
> >         if (atomic_try_cmpxchg_acquire(lock, &old, ZS_PAGE_WRLOCKED)) {
> > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> >                 rwsem_acquire(&zspage->lockdep_map, 0, 0, _RET_IP_);
> > #endif
> >                 return true;
> >         }
> > 
> >         preempt_enable();
> >         return false;
> > }
> > 
> > static void zspage_write_unlock(struct zspage *zspage)
> > {
> >         atomic_set_release(&zspage->lock, ZS_PAGE_UNLOCKED);
> > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> >         rwsem_release(&zspage->lockdep_map, _RET_IP_);
> > #endif
> >         /* pairs with preempt_disable() in zspage_try_write_lock() */
> >         preempt_enable();
> > }
> > ---
> > 
> > Maybe I'll just copy-paste the locking rules list; a list is always cleaner.
> 
> Thanks. I think it would be nice if we could also get someone with
> locking expertise to take a look at this.

Sure.

I moved the lockdep acquire/release annotations before the atomic ops
(except for the try variant), as Sebastian suggested in the zram sub-thread.
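
So the read side now looks roughly like this (a sketch; may differ from
whatever the final patch ends up doing):

static void zspage_read_lock(struct zspage *zspage)
{
        atomic_t *lock = &zspage->lock;
        int old;

#ifdef CONFIG_DEBUG_LOCK_ALLOC
        /* annotate before we actually spin/acquire */
        rwsem_acquire_read(&zspage->lockdep_map, 0, 0, _RET_IP_);
#endif

        old = atomic_read_acquire(lock);
        do {
                if (old == ZS_PAGE_WRLOCKED) {
                        cpu_relax();
                        old = atomic_read_acquire(lock);
                        continue;
                }
        } while (!atomic_try_cmpxchg_acquire(lock, &old, old + 1));
}

static void zspage_read_unlock(struct zspage *zspage)
{
#ifdef CONFIG_DEBUG_LOCK_ALLOC
        rwsem_release(&zspage->lockdep_map, _RET_IP_);
#endif
        atomic_dec_return_release(&zspage->lock);
}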

[..]
> Seems like we have to compromise either way, custom locking or we enter
> into a new complexity realm with RCU freeing.

Let's take the blue pill? :)
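
(For the record, the red pill would look something like the sketch
below; purely hypothetical, none of these names are in the series:
give zspage an rcu_head and defer the actual free past a grace period,
so a racing lookup can still safely dereference and revalidate it.)

/* Hypothetical sketch of the RCU-freeing route; not part of this series. */
struct zspage_rcu {
        atomic_t lock;
        struct rcu_head rcu;            /* the new field this would need */
};

static void zspage_free_rcu(struct rcu_head *head)
{
        struct zspage_rcu *zspage = container_of(head, struct zspage_rcu, rcu);

        kfree(zspage);                  /* stand-in for the real free path */
}

static void zspage_destroy(struct zspage_rcu *zspage)
{
        /*
         * A reader may have looked this zspage up just before it was
         * unlinked; defer the free so that dereference stays valid.
         */
        call_rcu(&zspage->rcu, zspage_free_rcu);
}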
