Message-ID: <45548d9e-4cfa-476d-9eaa-b338f994478c@linux.alibaba.com>
Date: Mon, 31 Mar 2025 10:38:21 +0800
From: Gao Xiang <hsiangkao@...ux.alibaba.com>
To: Sandeep Dhavale <dhavale@...gle.com>
Cc: kernel-team@...roid.com, linux-kernel@...r.kernel.org,
linux-erofs mailing list <linux-erofs@...ts.ozlabs.org>,
Gao Xiang <xiang@...nel.org>, Chao Yu <chao@...nel.org>,
Yue Hu <zbestahu@...il.com>, Jeffle Xu <jefflexu@...ux.alibaba.com>
Subject: Re: [PATCH v1 1/1] erofs: lazily initialize per-CPU workers and CPU
hotplug hooks
Hi Sandeep,
On 2025/3/31 10:20, Sandeep Dhavale wrote:
> Defer initialization of per-CPU workers and registration for CPU hotplug
> events until the first mount. Similarly, unregister from hotplug events
> and destroy per-CPU workers when the last mount is unmounted.
>
> Signed-off-by: Sandeep Dhavale <dhavale@...gle.com>
> ---
> fs/erofs/internal.h | 5 +++++
> fs/erofs/super.c | 27 +++++++++++++++++++++++++++
> fs/erofs/zdata.c | 35 +++++++++++++++++++++++------------
> 3 files changed, 55 insertions(+), 12 deletions(-)
>
> diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
> index 4ac188d5d894..c88cba4da3eb 100644
> --- a/fs/erofs/internal.h
> +++ b/fs/erofs/internal.h
> @@ -450,6 +450,8 @@ int z_erofs_gbuf_growsize(unsigned int nrpages);
> int __init z_erofs_gbuf_init(void);
> void z_erofs_gbuf_exit(void);
> int z_erofs_parse_cfgs(struct super_block *sb, struct erofs_super_block *dsb);
> +int z_erofs_init_workers(void);
> +void z_erofs_destroy_workers(void);
> #else
> static inline void erofs_shrinker_register(struct super_block *sb) {}
> static inline void erofs_shrinker_unregister(struct super_block *sb) {}
> @@ -458,6 +460,9 @@ static inline void erofs_exit_shrinker(void) {}
> static inline int z_erofs_init_subsystem(void) { return 0; }
> static inline void z_erofs_exit_subsystem(void) {}
> static inline int z_erofs_init_super(struct super_block *sb) { return 0; }
> +static inline int z_erofs_init_workers(void) { return 0; }
> +static inline void z_erofs_destroy_workers(void) {}
> +
> #endif /* !CONFIG_EROFS_FS_ZIP */
>
> #ifdef CONFIG_EROFS_FS_BACKED_BY_FILE
> diff --git a/fs/erofs/super.c b/fs/erofs/super.c
> index cadec6b1b554..8e8d3a7c8dba 100644
> --- a/fs/erofs/super.c
> +++ b/fs/erofs/super.c
> @@ -17,6 +17,7 @@
> #include <trace/events/erofs.h>
>
> static struct kmem_cache *erofs_inode_cachep __read_mostly;
> +static atomic_t erofs_mount_count = ATOMIC_INIT(0);
>
> void _erofs_printk(struct super_block *sb, const char *fmt, ...)
> {
> @@ -777,9 +778,28 @@ static const struct fs_context_operations erofs_context_ops = {
> .free = erofs_fc_free,
> };
>
> +static inline int erofs_init_zip_workers_if_needed(void)
> +{
> + int ret;
> +
> + if (atomic_inc_return(&erofs_mount_count) == 1) {
> + ret = z_erofs_init_workers();
> + if (ret)
> + return ret;
> + }
> + return 0;
> +}
> +
> +static inline void erofs_destroy_zip_workers_if_last(void)
Do we really need to destroy workers on the last unmount?
It could cause many unnecessary init/uninit cycles.
Or is your requirement just to defer per-CPU worker
initialization to the first mount?

If it's the latter, I guess you could just call
erofs_init_percpu_workers() in z_erofs_init_super().
> +{
> + if (atomic_dec_and_test(&erofs_mount_count))
So in that case, we won't need erofs_mount_count anymore;
you could just add a pcpu_worker_initialized atomic bool
to control that.
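
Something like this completely untested sketch is what I mean (the
flag name is just a placeholder, and a real version would also need
to serialize racing first mounts, e.g. with a mutex):

static atomic_t pcpu_worker_initialized = ATOMIC_INIT(0);

int z_erofs_init_super(struct super_block *sb)
{
	int err;

	/* only the very first mount pays the worker setup cost */
	if (!atomic_cmpxchg(&pcpu_worker_initialized, 0, 1)) {
		err = z_erofs_init_workers();
		if (err) {
			atomic_set(&pcpu_worker_initialized, 0);
			return err;
		}
	}
	/* ... the existing z_erofs_init_super() body goes here ... */
	return 0;
}

No teardown on the last unmount, so there is no init/uninit churn
across repeated mount cycles.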
Thanks,
Gao Xiang