Message-ID: <blqetdedmmvheg4qri2o4gxbtiq7lz5ufcpacrrw7yvej7rcnp@i46mdzf7gqhk>
Date: Thu, 30 Jan 2025 13:01:15 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Yosry Ahmed <yosry.ahmed@...ux.dev>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Andrew Morton <akpm@...ux-foundation.org>, Minchan Kim <minchan@...nel.org>,
Johannes Weiner <hannes@...xchg.org>, Nhat Pham <nphamcs@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCHv1 1/6] zsmalloc: factor out pool locking helpers
On (25/01/29 16:59), Yosry Ahmed wrote:
> On Wed, Jan 29, 2025 at 03:43:47PM +0900, Sergey Senozhatsky wrote:
> > We currently have a mix of migrate_{read,write}_lock() helpers
> > that lock zspages, but it's zs_pool that actually has a ->migrate_lock,
> > access to which is open-coded. Factor out pool migrate locking
> > into helpers; the zspage migration locking API will be renamed to
> > reduce confusion.
>
> But why did we remove "migrate" from the helpers' names if this is the
> real migrate lock? It seems like the real problem is how the zspage lock
> helpers are named, which is addressed later.
So this is deliberate. I guess, just like with the page-faults in
zs_map_object(), this naming comes from a time when we had fewer
locks and less functionality. These days we have three zsmalloc locks
that can "prevent migration" at different points, even the size-class
spin-lock. But it's not only about migration anymore: we now also
have compaction (defragmentation) that these locks synchronize, so I
want a reader-writer naming scheme.
> > struct zs_pool {
> > - const char *name;
> > + /* protect page/zspage migration */
> > + rwlock_t migrate_lock;
> >
> > struct size_class *size_class[ZS_SIZE_CLASSES];
> > struct kmem_cache *handle_cachep;
> > @@ -213,6 +214,7 @@ struct zs_pool {
> > atomic_long_t pages_allocated;
> >
> > struct zs_pool_stats stats;
> > + atomic_t compaction_in_progress;
> >
> > /* Compact classes */
> > struct shrinker *shrinker;
> > @@ -223,11 +225,35 @@ struct zs_pool {
> > #ifdef CONFIG_COMPACTION
> > struct work_struct free_work;
> > #endif
> > - /* protect page/zspage migration */
> > - rwlock_t migrate_lock;
> > - atomic_t compaction_in_progress;
> > +
> > + const char *name;
>
> Are the struct moves relevant to the patch or just to improve
> readability?
Relevance is relative :) That move came from the patch which also
converts the lock to an rwsem. I'll split it out into a separate
patch.
> I am generally scared to move hot members around unnecessarily
> due to cache-line sharing problems -- although I don't know if
> these are really hot.
Size classes are basically static - once we init the array it
becomes RO. Having migrate_lock at the bottom forces us to
jump over a big RO zs_pool region; besides, we never use ->name
(unlike migrate_lock), so having it at offset 0 is unnecessary.
zs_pool_stats and compaction_in_progress are modified only
during compaction, which we do not run in parallel (only one
CPU can do compaction at a time), so we can do something like
---
struct zs_pool {
- const char *name;
+ /* protect page/zspage migration */
+ rwlock_t migrate_lock;
struct size_class *size_class[ZS_SIZE_CLASSES];
- struct kmem_cache *handle_cachep;
- struct kmem_cache *zspage_cachep;
atomic_long_t pages_allocated;
- struct zs_pool_stats stats;
+ struct kmem_cache *handle_cachep;
+ struct kmem_cache *zspage_cachep;
/* Compact classes */
struct shrinker *shrinker;
@@ -223,9 +223,9 @@ struct zs_pool {
#ifdef CONFIG_COMPACTION
struct work_struct free_work;
#endif
- /* protect page/zspage migration */
- rwlock_t migrate_lock;
+ const char *name;
atomic_t compaction_in_progress;
+ struct zs_pool_stats stats;
};
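As a cut-down illustration of the hot/cold grouping above (field
types are simplified stand-ins I made up, not the kernel types), the
resulting layout puts the frequently taken lock at offset 0 and
pushes the rarely touched members to the tail:

```c
#include <stddef.h>

/* Hypothetical simplified zs_pool layout following the reordering in
 * the diff above: hot members (the migrate lock, the size-class table)
 * at the top, rarely touched members (->name, compaction state, stats)
 * at the bottom. */
struct pool_layout {
	int migrate_lock;		/* hot: taken on map/migrate paths */
	void *size_class[256];		/* hot, but effectively RO after init */
	long pages_allocated;
	void *handle_cachep;
	void *zspage_cachep;
	/* cold tail: touched only during (single-CPU) compaction, or never */
	const char *name;
	int compaction_in_progress;
	long stats;
};
```

With this ordering, readers of the hot fields no longer have to skip
over the cold tail, and a name lookup stops dragging the lock's cache
line around.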