Message-ID: <20240906031151.80719-1-dennis@kernel.org>
Date: Thu, 5 Sep 2024 20:11:51 -0700
From: Dennis Zhou <dennis@...nel.org>
To: Tejun Heo <tj@...nel.org>,
Christoph Lameter <cl@...ux.com>
Cc: linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Dennis Zhou <dennis@...nel.org>,
kernel test robot <oliver.sang@...el.com>
Subject: [PATCH] percpu: fix data race with pcpu_nr_empty_pop_pages

pcpu_nr_empty_pop_pages is read in pcpu_alloc_noprof() after pcpu_lock
has been dropped, racing with updates done under the lock. Fix the data
race by moving the read so it is performed while pcpu_lock is still
held. This is safe because the code that now runs between the new and
the old check sites, the page population path, does not increase the
empty populated page count: it only populates backing pages that
already have allocations served out of them.

Reported-by: kernel test robot <oliver.sang@...el.com>
Closes: https://lore.kernel.org/oe-lkp/202407191651.f24e499d-oliver.sang@intel.com
Signed-off-by: Dennis Zhou <dennis@...nel.org>
---
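Not part of the patch, just an illustration for review: a minimal
userspace sketch of the locking pattern applied above, with pthreads
standing in for pcpu_lock and made-up names standing in for the percpu
internals. The point it shows is that the reader checks the counter
while still holding the same lock the writers take, so the check in the
allocation path cannot race with updates.

/*
 * Illustration only, not kernel code: pthreads stand in for pcpu_lock
 * and all names are hypothetical.
 */
#include <pthread.h>
#include <stdio.h>

#define EMPTY_POP_PAGES_LOW	2	/* stand-in for PCPU_EMPTY_POP_PAGES_LOW */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;	/* stand-in for pcpu_lock */
static int nr_empty_pop_pages;					/* stand-in for pcpu_nr_empty_pop_pages */

static void schedule_balance_work(void)
{
	/* stand-in for pcpu_schedule_balance_work() */
	puts("balance work scheduled");
}

/* writer side: the counter is only ever modified under the lock */
static void chunk_populated(int nr_pages)
{
	pthread_mutex_lock(&lock);
	nr_empty_pop_pages += nr_pages;
	pthread_mutex_unlock(&lock);
}

/* reader side: mirrors the allocation path after this patch */
static void alloc_path(void)
{
	pthread_mutex_lock(&lock);
	/* ... allocation bookkeeping done under the lock ... */

	/* read the counter before dropping the lock, as the patch does */
	if (nr_empty_pop_pages < EMPTY_POP_PAGES_LOW)
		schedule_balance_work();
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	alloc_path();		/* counter is 0, so work is scheduled */
	chunk_populated(4);
	alloc_path();		/* counter is 4, nothing to do */
	return 0;
}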
 mm/percpu.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index 20d91af8c033..325fb8412e90 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1864,6 +1864,10 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
 
 area_found:
 	pcpu_stats_area_alloc(chunk, size);
+
+	if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
+		pcpu_schedule_balance_work();
+
 	spin_unlock_irqrestore(&pcpu_lock, flags);
 
 	/* populate if not all pages are already there */
@@ -1891,9 +1895,6 @@ void __percpu *pcpu_alloc_noprof(size_t size, size_t align, bool reserved,
 		mutex_unlock(&pcpu_alloc_mutex);
 	}
 
-	if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
-		pcpu_schedule_balance_work();
-
 	/* clear the areas and return address relative to base address */
 	for_each_possible_cpu(cpu)
 		memset((void *)pcpu_chunk_addr(chunk, cpu, 0) + off, 0, size);
--
2.43.0