Message-Id: <20250212160830.730a199935e907c2498b28d4@linux-foundation.org>
Date: Wed, 12 Feb 2025 16:08:30 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: Yosry Ahmed <yosry.ahmed@...ux.dev>, Kairui Song <ryncsn@...il.com>,
Minchan Kim <minchan@...nel.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 01/18] zram: sleepable entry locking
On Wed, 12 Feb 2025 15:26:59 +0900 Sergey Senozhatsky <senozhatsky@...omium.org> wrote:
> Concurrent modifications of meta table entries are currently handled
> by a per-entry spin-lock. This has a number of shortcomings.
>
> First, this imposes atomic requirements on compression backends.
> zram can call both zcomp_compress() and zcomp_decompress() under
> the entry spin-lock, which implies that we can use only compression
> algorithms that don't schedule/sleep/wait during compression and
> decompression. This, for instance, makes it impossible to use
> some ASYNC compression algorithm implementations (H/W compression,
> etc.).
>
> Second, this can potentially trigger watchdogs. For example,
> entry re-compression with secondary algorithms is performed
> under the entry spin-lock. Given that we chain secondary
> compression algorithms, and that some of them can be configured
> for best compression ratio (and worst compression speed), zram
> can stay under the spin-lock for quite some time.
>
> Having a per-entry mutex (or, for instance, an rw-semaphore)
> significantly increases the sizeof() of each entry and hence of
> the meta table. Therefore entry locking returns to bit locking,
> as before; however, this time it is also preempt-rt friendly,
> because it waits on the bit instead of spinning on it. Lock
> owners are now also permitted to schedule, which is a first step
> towards making zram non-atomic.
>
> ...
>
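
As an aside, a minimal sketch of what a sleepable bit lock like this can
look like on top of the generic wait_bit machinery (the ZRAM_ENTRY_LOCK
bit number and the entry_lock()/entry_unlock() names below are made up
for illustration, not taken verbatim from the patch):

#include <linux/wait_bit.h>

#define ZRAM_ENTRY_LOCK	0	/* illustrative bit number in entry->flags */

/*
 * Sketch only: the locker sleeps in wait_on_bit_lock() instead of
 * spinning, so the lock owner is allowed to schedule while holding
 * the lock.
 */
static void entry_lock(unsigned long *flags)
{
	/* Sleep until the lock bit has been atomically acquired. */
	wait_on_bit_lock(flags, ZRAM_ENTRY_LOCK, TASK_UNINTERRUPTIBLE);
}

static void entry_unlock(unsigned long *flags)
{
	/* Clear the lock bit and wake any sleeping waiters. */
	clear_and_wake_up_bit(ZRAM_ENTRY_LOCK, flags);
}
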
> -static int zram_slot_trylock(struct zram *zram, u32 index)
> +static void zram_slot_lock_init(struct zram *zram, u32 index)
> {
> - return spin_trylock(&zram->table[index].lock);
> +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> + lockdep_init_map(&zram->table[index].lockdep_map, "zram-entry->lock",
> + &zram->table_lockdep_key, 0);
> +#endif
> +}
> +
>
> ...
>
> +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> + lockdep_register_key(&zram->table_lockdep_key);
> +#endif
> +
Please check whether all the ifdefs are needed - some of these things
have CONFIG_LOCKDEP=n stubs.
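
FWIW, from memory (worth double-checking against the header), the
!CONFIG_LOCKDEP side of include/linux/lockdep.h already provides no-op
versions of both calls used above:

/* include/linux/lockdep.h, !CONFIG_LOCKDEP side (paraphrased) */
# define lockdep_init_map(lock, name, key, sub) \
		do { (void)(name); (void)(key); } while (0)

static inline void lockdep_register_key(struct lock_class_key *key)
{
}

And struct lock_class_key/struct lockdep_map become empty structs there,
so the ifdefs around the call sites should only be needed if the struct
members themselves remain under CONFIG_DEBUG_LOCK_ALLOC.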