Message-ID: <20221214133302.GA1931356@lothringen>
Date: Wed, 14 Dec 2022 14:33:02 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Marcelo Tosatti <mtosatti@...hat.com>
Cc: atomlin@...hat.com, cl@...ux.com, tglx@...utronix.de,
mingo@...nel.org, peterz@...radead.org, pauld@...hat.com,
neelx@...hat.com, oleksandr@...alenko.name,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v9 3/5] mm/vmstat: manage per-CPU stats from CPU context
when NOHZ full
On Tue, Dec 06, 2022 at 01:18:29PM -0300, Marcelo Tosatti wrote:
> static inline void vmstat_mark_dirty(void)
> {
> + int cpu = smp_processor_id();
> +
> + if (tick_nohz_full_cpu(cpu) && !this_cpu_read(vmstat_dirty)) {
> + struct delayed_work *dw;
> +
> + dw = &per_cpu(vmstat_work, cpu);
> + if (!delayed_work_pending(dw)) {
> + unsigned long delay;
> +
> + delay = round_jiffies_relative(sysctl_stat_interval);
> + queue_delayed_work_on(cpu, mm_percpu_wq, dw, delay);
Currently the vmstat_work is flushed on CPU hotplug (CPUHP_AP_ONLINE_DYN),
and vmstat_shepherd makes sure not to rearm it afterward. But now it looks
possible for the above to make that mistake, no? (one possible way out is
sketched below the quoted function)
> + }
> + }
> this_cpu_write(vmstat_dirty, true);
> }
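
For illustration only, one (untested) way to close that window could be a
per-CPU "going down" flag, set from the existing vmstat_cpu_down_prep()
teardown before the work is flushed and cleared again from
vmstat_cpu_online(). The flag name and its placement here are hypothetical:

	/*
	 * Hypothetical per-CPU flag: set in vmstat_cpu_down_prep() before
	 * the delayed work is flushed, cleared in vmstat_cpu_online().
	 */
	static DEFINE_PER_CPU(bool, vmstat_cpu_going_down);

	static inline void vmstat_mark_dirty(void)
	{
		int cpu = smp_processor_id();

		if (tick_nohz_full_cpu(cpu) && !this_cpu_read(vmstat_dirty) &&
		    !this_cpu_read(vmstat_cpu_going_down)) {
			struct delayed_work *dw = &per_cpu(vmstat_work, cpu);

			/* Don't rearm the work once hotplug has flushed it */
			if (!delayed_work_pending(dw))
				queue_delayed_work_on(cpu, mm_percpu_wq, dw,
					round_jiffies_relative(sysctl_stat_interval));
		}
		this_cpu_write(vmstat_dirty, true);
	}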
> @@ -2009,6 +2028,10 @@ static void vmstat_shepherd(struct work_
> for_each_online_cpu(cpu) {
> struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
>
> + /* NOHZ full CPUs manage their own vmstat flushing */
> + if (tick_nohz_full_cpu(smp_processor_id()))
This checks the CPU running vmstat_shepherd; it should be the remote CPU
being iterated over (cpu) instead of the current one (see the snippet
after the quoted hunk).
Thanks.
> + continue;
> +
> if (!delayed_work_pending(dw) && per_cpu(vmstat_dirty, cpu))
> queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
>
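
i.e., just as an illustrative sketch (untested), using the loop's own
iterator from for_each_online_cpu(cpu) above:

	/* NOHZ full CPUs manage their own vmstat flushing */
	if (tick_nohz_full_cpu(cpu))
		continue;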