Message-ID: <CALe3CaCZD=wogTEA-uaRpawZcyiaBcOC8sDw_aOi_9xeKi=RFw@mail.gmail.com>
Date: Thu, 8 Jan 2026 16:54:08 +0800
From: Su Hua <suhua.tanke@...il.com>
To: Michal Hocko <mhocko@...e.com>
Cc: akpm@...ux-foundation.org, vbabka@...e.cz, surenb@...gle.com,
jackmanb@...gle.com, hannes@...xchg.org, ziy@...dia.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Hua Su <huasu@...omi.com>
Subject: Re: [PATCH] mm: fix execution order in alloc_contig_range_noprof
On Tue, Jan 6, 2026 at 22:55, Michal Hocko <mhocko@...e.com> wrote:
>
> On Mon 05-01-26 21:32:50, Hua Su wrote:
> > Fix the execution order issue in alloc_contig_range_noprof where
> > drain_all_pages was called after start_isolate_page_range, which
> > may lead to race conditions.
> >
> > Based on community patches commit ec6e8c7e0314 ("mm, page_alloc:
> > disable pcplists during memory offline") and commit d479960e44f27
> > ("mm: disable LRU pagevec during the migration temporarily"), we
> > disable pcplists and LRU cache before page isolation to ensure no
> > pages are left in per-cpu lists during isolation.
>
> What exactly is the problem you are trying to fix here? Is this based on
> code review or are you hitting any real problem. I find the changelog
> rather hard to grasp.
This issue was found during code review, not from hitting a real problem.
However, it seems I made a mistake: disabling the LRU pagevec addresses
the case where page migration might fail (the original code already
disables it around migration), so it does not appear to be related to
the page isolation step after all.
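To make the ordering under discussion concrete, here is a rough sketch
(not compilable code, just the call sequence as I read it from the diff)
of the original flow versus the one proposed in the patch:

    /* Original flow in alloc_contig_range_noprof(): */
    start_isolate_page_range(...);     /* mark pageblocks MIGRATE_ISOLATE */
    __alloc_contig_migrate_range(...); /* lru_cache_disable/enable inside */
    drain_all_pages(cc.zone);          /* flush pcplists after isolation  */

    /* Proposed flow: */
    zone_pcp_disable(cc.zone);         /* drain and disable pcplists first */
    lru_cache_disable();
    start_isolate_page_range(...);     /* no pages can sit in pcplists now */
    __alloc_contig_migrate_range(...);
    ...
    lru_cache_enable();
    zone_pcp_enable(cc.zone);
    undo_isolate_page_range(start, end);

The function names above are exactly those appearing in the patch; the
sketch only rearranges them to show where the drain/disable calls move.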
>
> > Signed-off-by: Hua Su <huasu@...omi.com>
> > Signed-off-by: Hua Su <suhua.tanke@...il.com>
> > ---
> > mm/page_alloc.c | 10 +++++-----
> > 1 file changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 6a47443c48ff..d08f929ca64c 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -6815,8 +6815,6 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
> > .reason = MR_CONTIG_RANGE,
> > };
> >
> > - lru_cache_disable();
> > -
> > while (pfn < end || !list_empty(&cc->migratepages)) {
> > if (fatal_signal_pending(current)) {
> > ret = -EINTR;
> > @@ -6850,7 +6848,6 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
> > break;
> > }
> >
> > - lru_cache_enable();
> > if (ret < 0) {
> > if (!(cc->gfp_mask & __GFP_NOWARN) && ret == -EBUSY)
> > alloc_contig_dump_pages(&cc->migratepages);
> > @@ -6973,6 +6970,9 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
> > if (__alloc_contig_verify_gfp_mask(gfp_mask, (gfp_t *)&cc.gfp_mask))
> > return -EINVAL;
> >
> > + zone_pcp_disable(cc.zone);
> > + lru_cache_disable();
> > +
> > /*
> > * What we do here is we mark all pageblocks in range as
> > * MIGRATE_ISOLATE. Because pageblock and max order pages may
> > @@ -6998,8 +6998,6 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
> > if (ret)
> > goto done;
> >
> > - drain_all_pages(cc.zone);
> > -
> > /*
> > * In case of -EBUSY, we'd like to know which page causes problem.
> > * So, just fall through. test_pages_isolated() has a tracepoint
> > @@ -7076,6 +7074,8 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
> > start, end, outer_start, outer_end);
> > }
> > done:
> > + lru_cache_enable();
> > + zone_pcp_enable(cc.zone);
> > undo_isolate_page_range(start, end);
> > return ret;
> > }
> > --
> > 2.34.1
>
> --
> Michal Hocko
> SUSE Labs