Message-ID: <20230602190115.545766386@redhat.com>
Date: Fri, 02 Jun 2023 15:58:00 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Christoph Lameter <cl@...ux.com>
Cc: Aaron Tomlin <atomlin@...mlin.com>,
Frederic Weisbecker <frederic@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...e.com>,
Marcelo Tosatti <mtosatti@...hat.com>
Subject: [PATCH v2 3/3] mm/vmstat: do not refresh stats for nohz_full CPUs
The interruption caused by queueing work on nohz_full CPUs
is undesirable for certain applications.
Fix this by not refreshing the per-CPU stats of nohz_full CPUs.
Signed-off-by: Marcelo Tosatti <mtosatti@...hat.com>
---
v2: opencode schedule_on_each_cpu (Michal Hocko)
Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -1881,8 +1881,13 @@ int vmstat_refresh(struct ctl_table *tab
void *buffer, size_t *lenp, loff_t *ppos)
{
long val;
- int err;
int i;
+ int cpu;
+ struct work_struct __percpu *works;
+
+ works = alloc_percpu(struct work_struct);
+ if (!works)
+ return -ENOMEM;
/*
* The regular update, every sysctl_stat_interval, may come later
@@ -1896,9 +1901,24 @@ int vmstat_refresh(struct ctl_table *tab
* transiently negative values, report an error here if any of
* the stats is negative, so we know to go looking for imbalance.
*/
- err = schedule_on_each_cpu(refresh_vm_stats);
- if (err)
- return err;
+ cpus_read_lock();
+ for_each_online_cpu(cpu) {
+ struct work_struct *work;
+
+ if (cpu_is_isolated(cpu))
+ continue;
+ work = per_cpu_ptr(works, cpu);
+ INIT_WORK(work, refresh_vm_stats);
+ schedule_work_on(cpu, work);
+ }
+
+ for_each_online_cpu(cpu) {
+ if (cpu_is_isolated(cpu))
+ continue;
+ flush_work(per_cpu_ptr(works, cpu));
+ }
+ cpus_read_unlock();
+ free_percpu(works);
for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
/*
* Skip checking stats known to go negative occasionally.