Message-ID: <mwpl64zfj4zlv5bwysfzryjpnh6lg5tridhya3t7ly2ax2vt7x@jhmdmh7gwrmn>
Date: Thu, 27 Feb 2025 22:04:16 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Yosry Ahmed <yosry.ahmed@...ux.dev>, Hillf Danton <hdanton@...a.com>, Kairui Song <ryncsn@...il.com>,
Minchan Kim <minchan@...nel.org>, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: Re: [PATCH v8 01/17] zram: sleepable entry locking
On (25/02/27 21:42), Sergey Senozhatsky wrote:
> > ach. Got it. What about
> >
> > | static void zram_slot_lock_init(struct zram *zram, u32 index)
> > | {
> > |         static struct lock_class_key __key;
> > |
> > |         lockdep_init_map(slot_dep_map(zram, index),
> > |                          "zram->table[index].lock",
> > |                          &__key, 0);
> > | }
> >
> > So every lock coming from zram belongs to the same class. Otherwise each
> > lock initialized by zram_slot_lock_init() would belong to a different
> > class, and to lockdep they would look like different locks, even though
> > they are always used in the same way.
>
> I see. I thought that the key was "shared" between zram meta table
> entries because the key is per-zram device, which sort of made sense
> (we can have different zram devices in a system - one used as swap, a
> bunch with various file-systems mounted on them).
So the lock class is registered dynamically for each zram device

	zram_add()
		lockdep_register_key(&zram->lock_class);

and then we use that zram->lock_class to init zram->table entries.
We unregister the lock_class during each zram device destruction

	zram_remove()
		lockdep_unregister_key(&zram->lock_class);

Does this still put zram->table entries into different lock classes?
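
Condensed, the scheme looks something like this (a simplified sketch,
not the exact patch code; the struct layout and the body of
zram_slot_lock_init() are shortened here):

	struct zram {
		/* ... */
		struct lock_class_key lock_class;	/* one key per device */
	};

	static void zram_slot_lock_init(struct zram *zram, u32 index)
	{
		/* every slot of this device shares the per-device class */
		lockdep_init_map(slot_dep_map(zram, index),
				 "zram->table[index].lock",
				 &zram->lock_class, 0);
	}

	static int zram_add(void)
	{
		struct zram *zram;

		/* ... allocate and set up the device ... */
		lockdep_register_key(&zram->lock_class);
		/* zram->table entries are then initialized with that key */
		return 0;
	}

	static void zram_remove(struct zram *zram)
	{
		/* ... device teardown ... */
		lockdep_unregister_key(&zram->lock_class);
	}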