Message-ID: <CAJuCfpHBB+0HG_2ZJ4h683TYJEz__c+L3Z6RZUbzX+7R1_VSNg@mail.gmail.com>
Date: Wed, 23 Apr 2025 08:35:11 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Tianyang Zhang <zhangtianyang@...ngson.cn>
Cc: Harry Yoo <harry.yoo@...cle.com>, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...e.com>, Brendan Jackman <jackmanb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>, Zi Yan <ziy@...dia.com>
Subject: Re: [PATCH] mm/page_alloc.c: Avoid infinite retries caused by cpuset race
On Tue, Apr 22, 2025 at 7:39 PM Tianyang Zhang
<zhangtianyang@...ngson.cn> wrote:
>
> Hi, Suren
>
> On 2025/4/22 4:28 AM, Suren Baghdasaryan wrote:
> > On Mon, Apr 21, 2025 at 3:00 AM Harry Yoo <harry.yoo@...cle.com> wrote:
> >> On Wed, Apr 16, 2025 at 04:24:05PM +0800, Tianyang Zhang wrote:
> >>> __alloc_pages_slowpath has no change detection for ac->nodemask
> >>> in the retry path, while cpuset can modify it in parallel. For
> >>> processes whose mempolicy is MPOL_BIND, this results in ac->nodemask
> >>> changing under us; should_reclaim_retry then decides based on the
> >>> latest nodemask and jumps to retry, while get_page_from_freelist
> >>> only traverses the zonelist from ac->preferred_zoneref, which was
> >>> selected with the now-expired nodemask. This may cause infinite
> >>> retries in some cases.
> >>>
> >>> cpu 64:
> >>> __alloc_pages_slowpath {
> >>>     /* ..... */
> >>> retry:
> >>>     /* ac->nodemask = 0x1, ac->preferred_zoneref->zone->nid = 1 */
> >>>     if (alloc_flags & ALLOC_KSWAPD)
> >>>         wake_all_kswapds(order, gfp_mask, ac);
> >>>
> >>>     /* cpu 1:
> >>>      * cpuset_write_resmask
> >>>      *   update_nodemask
> >>>      *     update_nodemasks_hier
> >>>      *       update_tasks_nodemask
> >>>      *         mpol_rebind_task
> >>>      *           mpol_rebind_policy
> >>>      *             mpol_rebind_nodemask
> >>>      * // mempolicy->nodes has been modified,
> >>>      * // which ac->nodemask points to
> >>>      */
> >>>
> >>>     /* ac->nodemask = 0x3, ac->preferred_zoneref->zone->nid = 1 */
> >>>     if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
> >>>                              did_some_progress > 0, &no_progress_loops))
> >>>         goto retry;
> >>> }
> >>>
> >>> Starting multiple LTP cpuset01 instances simultaneously can quickly
> >>> reproduce this issue on a multi-node server once maximum memory
> >>> pressure is reached and swap is enabled.
> >>>
> >>> Signed-off-by: Tianyang Zhang <zhangtianyang@...ngson.cn>
> >>> ---
> >> What commit does it fix and should it be backported to -stable?
> > I think it fixes 902b62810a57 ("mm, page_alloc: fix more premature OOM
> > due to race with cpuset update").
>
> I think this issue is unlikely to have been introduced by commit
> 902b62810a57, as the infinite-retries section from
>
> https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4568
> to
> https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4628
>
> where the cpuset race occurs was not modified by that patch.
Yeah, you are right. After looking into it some more, 902b62810a57 is
the wrong patch to blame for this infinite loop.
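
For anyone else following along, the retry path we are talking about
has roughly this shape (heavily trimmed and paraphrased, not the exact
code, just to show where the race bites):

    restart:
        /* ... */
        cpuset_mems_cookie = read_mems_allowed_begin();
        zonelist_iter_cookie = zonelist_iter_begin();
        /* ac->preferred_zoneref is picked from the current ac->nodemask */
    retry:
        if (alloc_flags & ALLOC_KSWAPD)
            wake_all_kswapds(order, gfp_mask, ac);

        /* ... reclaim/compaction attempts, each ending up in ... */
        page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
        if (page)
            goto got_pg;

        /*
         * should_reclaim_retry() walks the zonelist with the *current*
         * ac->nodemask, so after a concurrent cpuset update it can keep
         * returning true while get_page_from_freelist() keeps starting
         * from the stale ac->preferred_zoneref, hence the spin.
         */
        if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
                                 did_some_progress > 0, &no_progress_loops))
            goto retry;

None of this was touched by 902b62810a57, as you point out.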
>
> >> There's a new 'MEMORY MANAGEMENT - PAGE ALLOCATOR' entry (only in
> >> Andrew's mm.git repository now).
> >>
> >> Let's Cc the page allocator folks here!
> >>
> >> --
> >> Cheers,
> >> Harry / Hyeonggon
> >>
> >>> mm/page_alloc.c | 8 ++++++++
> >>> 1 file changed, 8 insertions(+)
> >>>
> >>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >>> index fd6b865cb1ab..1e82f5214a42 100644
> >>> --- a/mm/page_alloc.c
> >>> +++ b/mm/page_alloc.c
> >>> @@ -4530,6 +4530,14 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >>> }
> >>>
> >>> retry:
> >>> + /*
> >>> + * Deal with possible cpuset update races or zonelist updates to avoid
> >>> + * infinite retries.
> >>> + */
> >>> + if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
> >>> + check_retry_zonelist(zonelist_iter_cookie))
> >>> + goto restart;
> >>> +
> > We have this check later in this block:
> > https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4652,
> > so IIUC you are effectively moving it so that it is called before
> > should_reclaim_retry(). If so, I think you should remove the old one
> > (the one I linked earlier), as it seems to be unnecessary duplication
> > at this point.
> In my understanding, the code at
>
> https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4652
>
> was introduced to prevent unnecessary OOM (out-of-memory) kills in
> __alloc_pages_may_oom.
>
> If the old check is removed, the newly added check (on retry loop entry)
> cannot guarantee that the cpuset is still valid by the time the flow
> reaches __alloc_pages_may_oom, especially if the task gets rescheduled
> in between.
Well, rescheduling can happen even between
https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4652
and https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4657,
but I see your point. Also, should_reclaim_retry() does not include
zonelist change detection, so keeping the checks at
https://elixir.bootlin.com/linux/v6.15-rc3/source/mm/page_alloc.c#L4652
sounds like a good idea.
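
I.e. roughly this shape (just a sketch of where the two checks would
sit with your patch applied, not the final code):

    retry:
        /*
         * New check from this patch: catch a cpuset/zonelist change
         * before committing to another reclaim round.
         */
        if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
            check_retry_zonelist(zonelist_iter_cookie))
            goto restart;

        if (alloc_flags & ALLOC_KSWAPD)
            wake_all_kswapds(order, gfp_mask, ac);

        /* ... reclaim, compaction, should_reclaim_retry() etc. ... */

        /*
         * Existing check (currently around L4652): re-validate right
         * before __alloc_pages_may_oom() so a change that happened after
         * the check above still restarts instead of triggering a
         * spurious OOM kill.
         */
        if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
            check_retry_zonelist(zonelist_iter_cookie))
            goto restart;

        page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);

With both in place we cover both the retry loop and the OOM path.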
>
> Therefore, I think retaining the original code logic is necessary to
> ensure correctness under concurrency.
>
> >
> >
> >>> /* Ensure kswapd doesn't accidentally go to sleep as long as we loop */
> >>> if (alloc_flags & ALLOC_KSWAPD)
> >>> wake_all_kswapds(order, gfp_mask, ac);
> >>> --
> >>> 2.20.1
> >>>
> >>>
> Thanks
>