Message-ID: <toahcdrrcijxi5atfblz5q6o47j4mbkpe2lpvbbp5yczsdj6j2@2lbc43nhdbgt>
Date: Thu, 27 Feb 2025 22:20:39 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Andrew Morton <akpm@...ux-foundation.org>, Yosry Ahmed <yosry.ahmed@...ux.dev>,
Hillf Danton <hdanton@...a.com>, Kairui Song <ryncsn@...il.com>, Minchan Kim <minchan@...nel.org>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 01/17] zram: sleepable entry locking
On (25/02/27 14:12), Sebastian Andrzej Siewior wrote:
> > > I see. I thought that the key was "shared" between zram meta table
> > > entries because the key is per-zram device, which sort of made sense
> > > (we can have different zram devices in a system - one as swap, a bunch
> > > of them mounted with various file-systems on them).
>
> Yes. So usually you do spin_lock_init() and this creates a key at _this_
> very position. So every lock initialized at this position shares the
> same class / the same pattern.
>
> > So the lock class is registered dynamically for each zram device
> >
> > zram_add()
> > lockdep_register_key(&zram->lock_class);
> >
> > and then we use that zram->lock_class to init zram->table entries.
> >
> > We unregister the lock_class during each zram device destruction
> >
> > zram_remove()
> > lockdep_unregister_key(&zram->lock_class);
> >
> > Does this still put zram->table entries into different lock classes?
>
> You shouldn't need to register and unregister the lock_class. What you
> do should match for instance j_trans_commit_map in fs/jbd2/journal.c or
> __key in include/linux/rhashtable.h & lib/rhashtable.c.
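For anyone following along, the static-key pattern being suggested looks
roughly like the sketch below. The names are illustrative, not taken from
the actual zram patch; the point is that a single static lock_class_key,
shared by all devices, replaces the per-device lockdep_register_key() /
lockdep_unregister_key() dance:

```c
/*
 * Sketch only -- helper and key names are hypothetical.  A static
 * lock_class_key puts every lock initialized through this one helper
 * into the same lockdep class, no matter which zram device owns the
 * table entry, so nothing needs to be registered or unregistered at
 * device add/remove time.
 */
static struct lock_class_key zram_table_lock_key;

static void zram_entry_lock_init(spinlock_t *lock)
{
	spin_lock_init(lock);
	/* Override the call-site class with the shared static key. */
	lockdep_set_class(lock, &zram_table_lock_key);
}
```

This matches what jbd2 does with j_trans_commit_map and what rhashtable
does with its __key.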
I see, thank you.
Let me try static keys then (in zram and in zsmalloc). I will need
a day or two to re-run the tests, and will then send out an updated
series.