Message-ID: <b5363ced8e17d07d41554da8fde06c410e6688e0.camel@redhat.com>
Date: Wed, 09 Feb 2022 10:45:00 +0100
From: Nicolas Saenz Julienne <nsaenzju@...hat.com>
To: Xiongfeng Wang <wangxiongfeng2@...wei.com>,
akpm@...ux-foundation.org
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
frederic@...nel.org, tglx@...utronix.de, mtosatti@...hat.com,
mgorman@...e.de, linux-rt-users@...r.kernel.org, vbabka@...e.cz,
cl@...ux.com, paulmck@...nel.org, willy@...radead.org
Subject: Re: [PATCH 0/2] mm/page_alloc: Remote per-cpu lists drain support
Hi Xiongfeng, thanks for taking the time to look at this.
On Wed, 2022-02-09 at 16:55 +0800, Xiongfeng Wang wrote:
> Hi Nicolas,
>
> When I applied the patchset on top of the following commit and tested on QEMU, I came
> across the following call trace.
> commit dd81e1c7d5fb126e5fbc5c9e334d7b3ec29a16a0
>
> I wrote a userspace application to consume memory. When memory is exhausted,
> the OOM killer is triggered and the following call trace is printed. I am not
> sure whether it is related to this patchset, but when I reverted the patchset,
> the 'NULL pointer' call trace no longer appeared.
It's a silly mistake on my part: while cleaning up the code I messed up one of
the 'struct per_cpu_pages' accessors. This should fix it:
------------------------->8-------------------------
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0caa7155ca34..e65b991c3dc8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3279,7 +3279,7 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
 			has_pcps = true;
 		} else {
 			for_each_populated_zone(z) {
-				pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
+				pcp = per_cpu_ptr(z->per_cpu_pageset, cpu);
 				lp = rcu_dereference_protected(pcp->lp,
 						mutex_is_locked(&pcpu_drain_mutex));
 				if (lp->count) {
------------------------->8-------------------------
Thanks!
--
Nicolás Sáenz