Message-ID: <20221205102721.zitekvbhvylogtnv@techsingularity.net>
Date: Mon, 5 Dec 2022 10:27:21 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: NeilBrown <neilb@...e.de>
Cc: Linux-MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Thierry Reding <thierry.reding@...il.com>,
Matthew Wilcox <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/6] mm/page_alloc: Explicitly record high-order atomic
allocations in alloc_flags
On Mon, Dec 05, 2022 at 04:17:12PM +1100, NeilBrown wrote:
>
> Hi Mel,
> thanks a lot for doing this!
My pleasure.
> I tried reviewing it but "HIGHATOMIC" is new to me and I quickly got
> lost :-(
That's ok, HIGHATOMIC reserves are obscure and internal to the
allocator. They are almost as obscure as granting access to reserves
for RT tasks, the only difference being a lack of data on what sort of
RT tasks needed extra privileges back in the early 2000s, but I happen
to remember why the highatomic reserves were introduced. They were added
when fragmentation avoidance started triggering high-order atomic
allocation failures that had "happened to work" by accident before
fragmentation avoidance existed (details in 0aaa29a56e4f). IIRC, there
was a storm of bugs, mostly on small embedded platforms where devices
without an IOMMU relied on high-order atomic allocations to work, so a
small reserve was created. I was concerned your patch would trigger this
class of bug again, even though it might take a few years to show up as
embedded platforms tend to stay on old kernels for ages.
The main point of this patch is to identify these high-order atomic
allocations without relying on __GFP_ATOMIC, but it should not be
necessary to understand how the high-order atomic reserves work. They
still work the same way; access is just granted differently.
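To make that concrete, here is a rough standalone sketch of the idea,
not kernel code: the flag values and helper names below are mine for
illustration only, while the real logic lives in gfp_to_alloc_flags()
(quoted below) and the watermark checks.

/* Toy model only: illustrative flag values, not the kernel's. */
#include <stdbool.h>
#include <stdio.h>

#define GFP_ATOMICISH   0x1u   /* stand-in for a non-blocking request */
#define GFP_NOMEMALLOC  0x2u   /* stand-in for __GFP_NOMEMALLOC */

#define ALLOC_HARDER     0x10u
#define ALLOC_HIGHATOMIC 0x20u /* records a high-order atomic request */

/*
 * Mirrors the hunk quoted below: only non-blocking requests that are
 * allowed to dig into reserves and have order > 0 get tagged as
 * high-order atomic allocations.
 */
static unsigned int classify(unsigned int gfp, unsigned int order)
{
	unsigned int alloc_flags = 0;

	if ((gfp & GFP_ATOMICISH) && !(gfp & GFP_NOMEMALLOC)) {
		alloc_flags |= ALLOC_HARDER;
		if (order > 0)
			alloc_flags |= ALLOC_HIGHATOMIC;
	}
	return alloc_flags;
}

/*
 * Access to the highatomic reserve is then gated on the flag, so an
 * order-0 atomic request can no longer drain it.
 */
static bool may_use_highatomic_reserve(unsigned int alloc_flags)
{
	return alloc_flags & ALLOC_HIGHATOMIC;
}

int main(void)
{
	printf("order-0 atomic may use reserve: %d\n",
	       may_use_highatomic_reserve(classify(GFP_ATOMICISH, 0)));
	printf("order-1 atomic may use reserve: %d\n",
	       may_use_highatomic_reserve(classify(GFP_ATOMICISH, 1)));
	return 0;
}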
> Maybe one day I'll work it out - now that several names are more
> meaningful, it will likely be easier.
>
> > @@ -4818,7 +4820,7 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
> > }
> >
> > static inline unsigned int
> > -gfp_to_alloc_flags(gfp_t gfp_mask)
> > +gfp_to_alloc_flags(gfp_t gfp_mask, unsigned int order)
> > {
> > unsigned int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;
> >
> > @@ -4844,8 +4846,13 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
> > * Not worth trying to allocate harder for __GFP_NOMEMALLOC even
> > * if it can't schedule.
> > */
> > - if (!(gfp_mask & __GFP_NOMEMALLOC))
> > + if (!(gfp_mask & __GFP_NOMEMALLOC)) {
> > alloc_flags |= ALLOC_HARDER;
> > +
> > + if (order > 0)
> > + alloc_flags |= ALLOC_HIGHATOMIC;
> > + }
> > +
> > /*
> > * Ignore cpuset mems for GFP_ATOMIC rather than fail, see the
> > * comment for __cpuset_node_allowed().
These are the two most crucial hunks of the patch. If the series is
merged and we start seeing high-order atomic allocation failures, the
key will be to look at the gfp_flags and determine whether these hunks
are enough to accurately detect high-order atomic allocations. If the
GFP flags look ok, then a tracing debug patch will be created to dump
gfp_flags every time access is granted to the highatomic reserves, to
determine whether access is granted incorrectly and under what
circumstances.
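Roughly what I have in mind for that debug side, again sketched as a
standalone toy rather than allocator code (the function name and flag
value are made up for illustration): log the gfp mask at the moment
reserve access is granted, so the dump can later be matched against
requests that should never have qualified.

/* Toy model of the proposed debug instrumentation; names illustrative. */
#include <stdio.h>

#define ALLOC_HIGHATOMIC 0x20u

/*
 * In a real debug patch this would be a trace_printk()-style hook at
 * the point where the allocator dips into the highatomic reserve.
 */
static void trace_highatomic_grant(unsigned int gfp_mask,
				   unsigned int order,
				   unsigned int alloc_flags)
{
	if (alloc_flags & ALLOC_HIGHATOMIC)
		fprintf(stderr,
			"highatomic grant: gfp=%#x order=%u alloc_flags=%#x\n",
			gfp_mask, order, alloc_flags);
}

int main(void)
{
	/* Example: a request that was tagged as a high-order atomic. */
	trace_highatomic_grant(0x1u, 1, ALLOC_HIGHATOMIC);
	return 0;
}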
The main concern I had with your original patch was that it made it too
easy to grant access to the highatomic reserves for requests that were
not high-order atomic requests, which could exhaust the reserves. The
rest of the series tries to improve the naming of the ALLOC flags and
what they mean. The actual changes to your patch are minimal. I could
have started with your patch and fixed it up, but I preferred this
ordering so that use of __GFP_ATOMIC is reduced first and then deleted.
It should be bisection safe.
--
Mel Gorman
SUSE Labs