Message-ID:
<SA1PR21MB1317797B68A7AFCD8D75650ABFB42@SA1PR21MB1317.namprd21.prod.outlook.com>
Date: Fri, 26 Jul 2024 00:01:33 +0000
From: Dexuan Cui <decui@...rosoft.com>
To: Saurabh Singh Sengar <ssengar@...ux.microsoft.com>, Nuno Das Neves
<nunodasneves@...ux.microsoft.com>
CC: KY Srinivasan <kys@...rosoft.com>, Haiyang Zhang <haiyangz@...rosoft.com>,
"wei.liu@...nel.org" <wei.liu@...nel.org>, "linux-hyperv@...r.kernel.org"
<linux-hyperv@...r.kernel.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, Saurabh Singh Sengar <ssengar@...rosoft.com>,
"srivatsa@...il.mit.edu" <srivatsa@...il.mit.edu>
Subject: RE: [PATCH] Drivers: hv: vmbus: Deferring per cpu tasks
> From: Saurabh Singh Sengar <ssengar@...ux.microsoft.com>
> Sent: Thursday, July 25, 2024 8:35 AM
> Subject: Re: [PATCH] Drivers: hv: vmbus: Deferring per cpu tasks
Without the patch, I think the current CPU uses IPIs to make the other
CPUs run the function calls one by one, waiting synchronously for each
call to finish.

IMO the patch is not "Deferring per cpu tasks". "Defer" means "let it
happen later". Here it schedules work items to different CPUs, and
the work items immediately start to run on these CPUs.

I would suggest a more accurate subject:

Drivers: hv: vmbus: Run hv_synic_init() concurrently
> - ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
> "hyperv/vmbus:online",
> - hv_synic_init, hv_synic_cleanup);
> + cpus_read_lock();
> + for_each_online_cpu(cpu) {
> + struct work_struct *work = per_cpu_ptr(works, cpu);
> +
> + INIT_WORK(work, vmbus_percpu_work);
> + schedule_work_on(cpu, work);
> + }
> +
> + for_each_online_cpu(cpu)
> + flush_work(per_cpu_ptr(works, cpu));
> +
Can you please add a comment explaining why we still need this for CPU
online/offlining:
> + ret = __cpuhp_setup_state_cpuslocked(CPUHP_AP_ONLINE_DYN,
> "hyperv/vmbus:online", false,
> + hv_synic_init, hv_synic_cleanup,
> false);
> + cpus_read_unlock();
Add an empty line here to make it slightly more readable? :-)
> + free_percpu(works);
> if (ret < 0)
> goto err_alloc;
Thanks,
Dexuan