Message-ID: <b2e7ea31-0a56-6415-474b-a952fb1d36ef@huawei.com>
Date: Wed, 9 Feb 2022 19:26:29 +0800
From: Xiongfeng Wang <wangxiongfeng2@...wei.com>
To: Nicolas Saenz Julienne <nsaenzju@...hat.com>,
<akpm@...ux-foundation.org>
CC: <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
<frederic@...nel.org>, <tglx@...utronix.de>, <mtosatti@...hat.com>,
<mgorman@...e.de>, <linux-rt-users@...r.kernel.org>,
<vbabka@...e.cz>, <cl@...ux.com>, <paulmck@...nel.org>,
<willy@...radead.org>
Subject: Re: [PATCH 0/2] mm/page_alloc: Remote per-cpu lists drain support
Hi,
On 2022/2/9 17:45, Nicolas Saenz Julienne wrote:
> Hi Xiongfeng, thanks for taking the time to look at this.
>
> On Wed, 2022-02-09 at 16:55 +0800, Xiongfeng Wang wrote:
>> Hi Nicolas,
>>
>> When I applied the patchset on top of the following commit and tested in
>> QEMU, I came across the following call trace.
>> commit dd81e1c7d5fb126e5fbc5c9e334d7b3ec29a16a0
>>
>> I wrote a userspace application to consume memory. When memory is exhausted,
>> the OOM killer is triggered and the following call trace is printed. I am not
>> sure whether it is related to this patchset, but when I reverted the patchset,
>> the 'NULL pointer' call trace no longer appeared.
>
> It's a silly mistake on my part: while cleaning up the code I messed up one of
> the 'struct per_cpu_pages' accessors. This should fix it:
>
> ------------------------->8-------------------------
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0caa7155ca34..e65b991c3dc8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3279,7 +3279,7 @@ static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
>  				has_pcps = true;
>  		} else {
>  			for_each_populated_zone(z) {
> -				pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
> +				pcp = per_cpu_ptr(z->per_cpu_pageset, cpu);
>  				lp = rcu_dereference_protected(pcp->lp,
>  						mutex_is_locked(&pcpu_drain_mutex));
>  				if (lp->count) {
I have tested it and it works well; the 'NULL pointer' call trace is gone.
Thanks,
Xiongfeng
> ------------------------->8-------------------------
>
> Thanks!
>
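For reference, the kind of reproducer described above can be as small as the
hypothetical sketch below (chunk size and output are arbitrary, not taken from
the original report): it keeps allocating and touching anonymous memory until
the OOM killer steps in. Only run it in a throwaway VM.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	const size_t chunk = 64UL << 20;	/* 64 MiB per step */
	size_t total = 0;

	for (;;) {
		char *p = malloc(chunk);

		if (!p)
			break;			/* with overcommit this rarely triggers */
		memset(p, 0xa5, chunk);		/* touch the pages so they are really backed */
		total += chunk;
		printf("allocated %zu MiB\n", total >> 20);
	}

	/* Hold whatever we got; under pressure the OOM killer terminates us. */
	for (;;)
		;
	return 0;
}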
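As for why the one-line change matters: __drain_all_pages() can be called with
zone == NULL, the "drain every zone" case used, for example, on the
direct-reclaim path, so dereferencing the outer 'zone' argument inside the
for_each_populated_zone(z) loop is a NULL pointer dereference; the loop
iterator 'z' is the pointer that is actually valid there. Below is a minimal,
hypothetical userspace reduction of that pattern (stand-in types, no per-CPU
or RCU machinery), not the kernel code itself.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the kernel structures involved. */
struct per_cpu_pages { int count; };
struct zone { struct per_cpu_pages *per_cpu_pageset; };

#define NR_ZONES 3
static struct zone zones[NR_ZONES];

/* Stand-in for the kernel's for_each_populated_zone(z) iterator. */
#define for_each_populated_zone(z) \
	for ((z) = &zones[0]; (z) < &zones[NR_ZONES]; (z)++)

/*
 * Mirrors the shape of __drain_all_pages(zone, ...): the 'zone' argument
 * may legitimately be NULL, meaning "look at every populated zone".
 */
static int any_pcp_has_pages(struct zone *zone)
{
	struct zone *z;
	struct per_cpu_pages *pcp;

	for_each_populated_zone(z) {
		/* Buggy version dereferenced the outer, possibly-NULL 'zone': */
		/*     pcp = zone->per_cpu_pageset;    <-- NULL pointer deref  */
		pcp = z->per_cpu_pageset;	/* fix: use the loop iterator  */
		if (pcp->count)
			return 1;
	}
	return 0;
}

int main(void)
{
	static struct per_cpu_pages sets[NR_ZONES] = { {0}, {5}, {0} };
	int i;

	for (i = 0; i < NR_ZONES; i++)
		zones[i].per_cpu_pageset = &sets[i];

	/* NULL is the "all zones" case used when draining everything. */
	printf("has pcp pages: %d\n", any_pcp_has_pages(NULL));
	return 0;
}

Swapping the commented-out line back in and calling any_pcp_has_pages(NULL)
reproduces the crash in miniature.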