Message-ID: <Z1AdotZfAJG-zVZX@localhost.localdomain>
Date: Wed, 4 Dec 2024 10:15:14 +0100
From: Oscar Salvador <osalvador@...e.de>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: David Hildenbrand <david@...hat.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linuxppc-dev@...ts.ozlabs.org,
Andrew Morton <akpm@...ux-foundation.org>, Zi Yan <ziy@...dia.com>,
Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Naveen N Rao <naveen@...nel.org>,
Madhavan Srinivasan <maddy@...ux.ibm.com>
Subject: Re: [PATCH RESEND v2 4/6] mm/page_alloc: sort out the
alloc_contig_range() gfp flags mess
On Wed, Dec 04, 2024 at 10:03:28AM +0100, Vlastimil Babka wrote:
> On 12/4/24 09:59, Oscar Salvador wrote:
> > On Tue, Dec 03, 2024 at 08:19:02PM +0100, David Hildenbrand wrote:
> >> It was always set using "GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL",
> >> and I removed the same flag combination in #2 from memory offline code, and
> >> we do have the exact same thing in do_migrate_range() in
> >> mm/memory_hotplug.c.
> >>
> >> We should investigate if __GFP_HARDWALL is the right thing to use here, and
> >> if we can get rid of it by switching to GFP_KERNEL in all these places.
> >
> > Why wouldn't we want __GFP_HARDWALL set?
> > Without it, we could potentially migrate the page to a node which is not
> > part of the cpuset of the task that originally allocated it, thus violating
> > that task's policy? Isn't that a problem?
>
> The task doing the alloc_contig_range() will likely not be the same task as
> the one that originally allocated the page, so its policy would be
> different, no? So even with __GFP_HARDWALL we might already be migrating
> outside the original task's constraints? Am I missing something?
Yes, that is right. I thought we somehow derived the policy from the old
page when migrating it, but reading the code, that does not seem to be
the case.
Looking at prepare_alloc_pages(): if !ac->nodemask, which would be the
case here, we would get the policy from the current task (the one
calling alloc_contig_range()) when cpusets are enabled.
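Roughly, the cpuset handling there looks like this (paraphrased from
memory, so take it as a sketch rather than the exact code):

	if (cpusets_enabled()) {
		*alloc_gfp |= __GFP_HARDWALL;
		/*
		 * In task context with no explicit nodemask we fall back
		 * to the *current* task's cpuset, not the cpuset of the
		 * task that originally allocated the page.
		 */
		if (in_task() && !ac->nodemask)
			ac->nodemask = &cpuset_current_mems_allowed;
		else
			*alloc_flags |= ALLOC_CPUSET;
	}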
So yes, I am a bit puzzled why __GFP_HARDWALL was chosen in the first
place.
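For reference, GFP_USER already carries that bit (paraphrasing
include/linux/gfp_types.h):

	#define GFP_USER	(__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL)

so the __GFP_HARDWALL in the "GFP_USER | __GFP_MOVABLE |
__GFP_RETRY_MAYFAIL" combination may simply have come in with the
GFP_USER base rather than being a deliberate choice.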
--
Oscar Salvador
SUSE Labs