Message-ID: <8de6a220-45a3-4885-890f-0538522e620c@linux.alibaba.com>
Date: Tue, 6 May 2025 11:30:14 +0800
From: Gao Xiang <hsiangkao@...ux.alibaba.com>
To: Sandeep Dhavale <dhavale@...gle.com>, linux-erofs@...ts.ozlabs.org,
Gao Xiang <xiang@...nel.org>, Chao Yu <chao@...nel.org>,
Yue Hu <zbestahu@...il.com>, Jeffle Xu <jefflexu@...ux.alibaba.com>
Cc: kernel-team@...roid.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5] erofs: lazily initialize per-CPU workers and CPU
hotplug hooks
Hi Sandeep,
On 2025/5/2 02:30, Sandeep Dhavale wrote:
> Currently, when EROFS is built with per-CPU workers, the workers are
> started and CPU hotplug hooks are registered during module initialization.
> This leads to unnecessary worker start/stop cycles during CPU hotplug
> events, particularly on Android devices that frequently suspend and resume.
>
> This change defers the initialization of per-CPU workers and the
> registration of CPU hotplug hooks until the first EROFS mount. This
> ensures that these resources are only allocated and managed when EROFS is
> actually in use.
>
> The tear down of per-CPU workers and unregistration of CPU hotplug hooks
> still occurs during z_erofs_exit_subsystem(), but only if they were
> initialized.
>
> Signed-off-by: Sandeep Dhavale <dhavale@...gle.com>
> ---
> v4: https://lore.kernel.org/linux-erofs/20250423061023.131354-1-dhavale@google.com/
> Changes since v4:
> - remove redundant blank line as suggested by Gao
> - add a log for failure path as suggested by Chao
> - also add success log which will help in case there was a failure
> before, else stale failure log could cause unnecessary concern
>
> fs/erofs/zdata.c | 65 ++++++++++++++++++++++++++++++++++++------------
> 1 file changed, 49 insertions(+), 16 deletions(-)
>
> diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
> index 0671184d9cf1..a5d3aef319b2 100644
> --- a/fs/erofs/zdata.c
> +++ b/fs/erofs/zdata.c
> @@ -291,6 +291,9 @@ static struct workqueue_struct *z_erofs_workqueue __read_mostly;
>
> #ifdef CONFIG_EROFS_FS_PCPU_KTHREAD
> static struct kthread_worker __rcu **z_erofs_pcpu_workers;
> +static atomic_t erofs_percpu_workers_initialized = ATOMIC_INIT(0);
> +static int erofs_cpu_hotplug_init(void);
> +static void erofs_cpu_hotplug_destroy(void);
We could move this code downwards to avoid those forward declarations.
>
> static void erofs_destroy_percpu_workers(void)
> {
> @@ -336,9 +339,45 @@ static int erofs_init_percpu_workers(void)
> }
> return 0;
> }
> +
> +static int z_erofs_init_pcpu_workers(void)
How about passing in `struct super_block *` here?
Since print messages are introduced, it's much better to
know which instance caused the error/info message.
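
To make the suggestion concrete, here is a minimal userspace sketch
(not kernel code; `struct super_block` and `erofs_err()` are stubbed,
and the device name "dm-0" is invented for illustration) of why
threading the sb down helps: the failure message can then name the
mounting instance instead of printing a context-free line.

```c
#include <stddef.h>
#include <stdio.h>

/* Stub of struct super_block -- the real one lives in <linux/fs.h>. */
struct super_block {
	char s_id[32];		/* device name, e.g. "dm-0" */
};

/*
 * Stub of the kernel's erofs_err(): formats into @buf so the effect is
 * easy to inspect here.  The real helper prints via printk and already
 * tolerates a NULL sb by omitting the device name.
 */
static void erofs_err(struct super_block *sb, const char *msg,
		      char *buf, size_t len)
{
	if (sb)
		snprintf(buf, len, "erofs: (device %s): %s", sb->s_id, msg);
	else
		snprintf(buf, len, "erofs: %s", msg);
}

/* Suggested shape: take the mounting sb so failures name the instance. */
static int z_erofs_init_pcpu_workers(struct super_block *sb,
				     char *buf, size_t len)
{
	/* worker allocation elided; pretend it failed */
	erofs_err(sb, "per-cpu workers: failed to allocate.", buf, len);
	return -12;	/* -ENOMEM */
}
```

With a sb passed in, the log line identifies which mount hit the
failure, which matters once several EROFS instances coexist.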
> +{
> + int err;
> +
> + if (atomic_xchg(&erofs_percpu_workers_initialized, 1))
> + return 0;
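
(Side note for readers less familiar with this idiom: atomic_xchg()
returns the previous value, so only the first caller observes 0 and
runs the init body; every later mount returns early. A minimal C11
userspace analogue with <stdatomic.h>, illustrative names only:)

```c
#include <stdatomic.h>

/*
 * One-shot init guard, same idea as the atomic_xchg() above: the first
 * caller sees 0 and performs the setup; all later callers see 1 and
 * return immediately without redoing the work.
 */
static atomic_int initialized;
static int init_calls;		/* counts how often the body actually ran */

static int lazy_init(void)
{
	/* atomic_exchange() returns the old value, like atomic_xchg() */
	if (atomic_exchange(&initialized, 1))
		return 0;	/* already initialized (or in progress) */
	init_calls++;		/* expensive one-time setup would go here */
	return 0;
}
```

Note the guard only guarantees single execution of the body, not that
concurrent callers wait for the init to finish; that is fine here
because mounts are serialized against the init path.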
> +
> + err = erofs_init_percpu_workers();
> + if (err) {
> + erofs_err(NULL, "per-cpu workers: failed to allocate.");
> + goto err_init_percpu_workers;
> + }
> +
> + err = erofs_cpu_hotplug_init();
> + if (err < 0) {
> + erofs_err(NULL, "per-cpu workers: failed CPU hotplug init.");
> + goto err_cpuhp_init;
> + }
> + erofs_info(NULL, "initialized per-cpu workers successfully.");
Otherwise it looks good to me now.
Thanks,
Gao Xiang