Message-Id: <20230807110936.21819-41-zhengqi.arch@bytedance.com>
Date: Mon, 7 Aug 2023 19:09:28 +0800
From: Qi Zheng <zhengqi.arch@...edance.com>
To: akpm@...ux-foundation.org, david@...morbit.com, tkhai@...ru,
	vbabka@...e.cz, roman.gushchin@...ux.dev, djwong@...nel.org,
	brauner@...nel.org, paulmck@...nel.org, tytso@....edu,
	steven.price@....com, cel@...nel.org, senozhatsky@...omium.org,
	yujie.liu@...el.com, gregkh@...uxfoundation.org,
	muchun.song@...ux.dev, simon.horman@...igine.com, dlemoal@...nel.org
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org,
	kvm@...r.kernel.org, xen-devel@...ts.xenproject.org,
	linux-erofs@...ts.ozlabs.org, linux-f2fs-devel@...ts.sourceforge.net,
	cluster-devel@...hat.com, linux-nfs@...r.kernel.org,
	linux-mtd@...ts.infradead.org, rcu@...r.kernel.org,
	netdev@...r.kernel.org, dri-devel@...ts.freedesktop.org,
	linux-arm-msm@...r.kernel.org, dm-devel@...hat.com,
	linux-raid@...r.kernel.org, linux-bcache@...r.kernel.org,
	virtualization@...ts.linux-foundation.org,
	linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
	linux-xfs@...r.kernel.org, linux-btrfs@...r.kernel.org,
	Qi Zheng <zhengqi.arch@...edance.com>,
	Muchun Song <songmuchun@...edance.com>
Subject: [PATCH v4 40/48] zsmalloc: dynamically allocate the mm-zspool shrinker

In preparation for implementing lockless slab shrink, use the new APIs to
dynamically allocate the mm-zspool shrinker, so that it can be freed
asynchronously using kfree_rcu(). This way, it doesn't need to wait for
the RCU read-side critical section when releasing the struct zs_pool.

Signed-off-by: Qi Zheng <zhengqi.arch@...edance.com>
Reviewed-by: Muchun Song <songmuchun@...edance.com>
---
 mm/zsmalloc.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b58f957429f0..1909234bb345 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -229,7 +229,7 @@ struct zs_pool {
 	struct zs_pool_stats stats;
 
 	/* Compact classes */
-	struct shrinker shrinker;
+	struct shrinker *shrinker;
 
 #ifdef CONFIG_ZSMALLOC_STAT
 	struct dentry *stat_dentry;
@@ -2086,8 +2086,7 @@ static unsigned long zs_shrinker_scan(struct shrinker *shrinker,
 		struct shrink_control *sc)
 {
 	unsigned long pages_freed;
-	struct zs_pool *pool = container_of(shrinker, struct zs_pool,
-			shrinker);
+	struct zs_pool *pool = shrinker->private_data;
 
 	/*
 	 * Compact classes and calculate compaction delta.
@@ -2105,8 +2104,7 @@ static unsigned long zs_shrinker_count(struct shrinker *shrinker,
 	int i;
 	struct size_class *class;
 	unsigned long pages_to_free = 0;
-	struct zs_pool *pool = container_of(shrinker, struct zs_pool,
-			shrinker);
+	struct zs_pool *pool = shrinker->private_data;
 
 	for (i = ZS_SIZE_CLASSES - 1; i >= 0; i--) {
 		class = pool->size_class[i];
@@ -2121,18 +2119,24 @@ static unsigned long zs_shrinker_count(struct shrinker *shrinker,
 
 static void zs_unregister_shrinker(struct zs_pool *pool)
 {
-	unregister_shrinker(&pool->shrinker);
+	shrinker_free(pool->shrinker);
 }
 
 static int zs_register_shrinker(struct zs_pool *pool)
 {
-	pool->shrinker.scan_objects = zs_shrinker_scan;
-	pool->shrinker.count_objects = zs_shrinker_count;
-	pool->shrinker.batch = 0;
-	pool->shrinker.seeks = DEFAULT_SEEKS;
+	pool->shrinker = shrinker_alloc(0, "mm-zspool:%s", pool->name);
+	if (!pool->shrinker)
+		return -ENOMEM;
+
+	pool->shrinker->scan_objects = zs_shrinker_scan;
+	pool->shrinker->count_objects = zs_shrinker_count;
+	pool->shrinker->batch = 0;
+	pool->shrinker->seeks = DEFAULT_SEEKS;
+	pool->shrinker->private_data = pool;
 
-	return register_shrinker(&pool->shrinker, "mm-zspool:%s",
-				 pool->name);
+	shrinker_register(pool->shrinker);
+
+	return 0;
 }
 
 static int calculate_zspage_chain_size(int class_size)
-- 
2.30.2