Message-ID: <t6w7bzhdy6vywc4gzowdoe2vliwl7sdju6umrti5rscjyd2uns@pquelrkaywjn>
Date: Thu, 13 Feb 2025 09:52:07 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Yosry Ahmed <yosry.ahmed@...ux.dev>, Kairui Song <ryncsn@...il.com>, Minchan Kim <minchan@...nel.org>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 01/18] zram: sleepable entry locking
On (25/02/12 16:08), Andrew Morton wrote:
> > Concurrent modifications of meta table entries are currently handled
> > by a per-entry spin-lock. This has a number of shortcomings.
> >
> > First, this imposes atomic requirements on compression backends.
> > zram can call both zcomp_compress() and zcomp_decompress() under
> > entry spin-lock, which implies that we can use only compression
> > algorithms that don't schedule/sleep/wait during compression and
> > decompression. This, for instance, makes it impossible to use
> > some ASYNC compression algorithm implementations (H/W
> > compression, etc.).
> >
> > Second, this can potentially trigger watchdogs. For example,
> > entry re-compression with secondary algorithms is performed
> > under entry spin-lock. Given that we chain secondary
> > compression algorithms and that some of them can be configured
> > for best compression ratio (and worst compression speed),
> > zram can stay under the spin-lock for quite some time.
> >
> > Having a per-entry mutex (or, for instance, a rw-semaphore)
> > significantly increases the sizeof() of each entry and hence of
> > the meta table. Therefore entry locking returns to bit locking,
> > as before, but this time it is also preempt-rt friendly, because
> > it waits on the bit instead of spinning on it. Lock owners are
> > also now permitted to schedule, which is a first step on the
> > path to making zram non-atomic.
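
(For illustration only: conceptually the new entry lock is close to
the generic wait-on-bit pattern sketched below. ENTRY_LOCK_BIT, the
flags word and the entry_*() helpers are made up for the sketch and
are not the code from this patch.)

  #include <linux/wait_bit.h>
  #include <linux/bitops.h>
  #include <linux/sched.h>

  /* hypothetical bit in an existing per-entry flags word */
  #define ENTRY_LOCK_BIT  0

  static void entry_lock(unsigned long *flags)
  {
          /* sleep (uninterruptibly) until the lock bit is acquired */
          wait_on_bit_lock(flags, ENTRY_LOCK_BIT, TASK_UNINTERRUPTIBLE);
  }

  static int entry_trylock(unsigned long *flags)
  {
          /* true if the bit was clear and is now owned by us */
          return !test_and_set_bit_lock(ENTRY_LOCK_BIT, flags);
  }

  static void entry_unlock(unsigned long *flags)
  {
          clear_bit_unlock(ENTRY_LOCK_BIT, flags);
          /* barrier before waking waiters sleeping on the bit */
          smp_mb__after_atomic();
          wake_up_bit(flags, ENTRY_LOCK_BIT);
  }

Since the lock is a single bit in an existing flags word, the entry
(and hence the meta table) does not grow, yet contended lockers
sleep instead of spinning.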
> >
> > ...
> >
> > -static int zram_slot_trylock(struct zram *zram, u32 index)
> > +static void zram_slot_lock_init(struct zram *zram, u32 index)
> > {
> > - return spin_trylock(&zram->table[index].lock);
> > +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> > + lockdep_init_map(&zram->table[index].lockdep_map, "zram-entry->lock",
> > + &zram->table_lockdep_key, 0);
> > +#endif
> > +}
> > +
> >
> > ...
> >
> > +#ifdef CONFIG_DEBUG_LOCK_ALLOC
> > + lockdep_register_key(&zram->table_lockdep_key);
> > +#endif
> > +
>
> Please check whether all the ifdefs are needed - some of these things
> have CONFIG_LOCKDEP=n stubs.
Will do.
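
FWIW, IIRC include/linux/lockdep.h already provides no-op versions of
these when lockdep is disabled, roughly along the lines of:

  static inline void lockdep_register_key(struct lock_class_key *key)
  {
  }

  #define lockdep_init_map(lock, name, key, sub) \
          do { (void)(name); (void)(key); } while (0)

so some of the guards around the calls may indeed be redundant; the
ifdefs around the lockdep_map/lock_class_key struct members are a
separate question, I'll double-check both.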