Message-ID: <1456356532.25322.9.camel@redhat.com>
Date: Wed, 24 Feb 2016 18:28:52 -0500
From: Rik van Riel <riel@...hat.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, hannes@...xchg.org, vbabka@...e.cz,
mgorman@...e.de, linux-mm@...ck.org
Subject: Re: [PATCH] mm: limit direct reclaim for higher order allocations
On Wed, 2016-02-24 at 15:02 -0800, Andrew Morton wrote:
> On Wed, 24 Feb 2016 16:38:50 -0500 Rik van Riel <riel@...hat.com>
> wrote:
>
> > For multi-page allocations no larger than PAGE_ALLOC_COSTLY_ORDER,
> > the kernel will do direct reclaim if compaction failed for any
> > reason. This worked fine when Linux systems had 128MB of RAM, but
> > on my 24GB system I frequently see higher-order allocations
> > free up over 3GB of memory, pushing all kinds of things into
> > swap and slowing down applications.
>
> hm. Seems a pretty obvious flaw - why didn't we notice+fix it
> earlier?
I have heard complaints about suspicious pageout
behaviour before, but had not investigated it
until recently.
> > It would be much better to limit the amount of reclaim done,
> > rather than cause excessive pageout activity.
> >
> > When enough memory is free to do compaction for the highest order
> > allocation possible, bail out of the direct page reclaim code.
> >
> > On smaller systems, this may be enough to obtain contiguous
> > free memory areas to satisfy small allocations, continuing our
> > strategy of relying on luck occasionally. On larger systems,
> > relying on luck like that has not been working for years.
> >
>
> It would be nice to see some solid testing results on real-world
> workloads?
That's why I posted it. I suspect my workload
is not nearly as demanding as the workloads many
other people have, and this is the kind of thing
that wants some serious testing.
It might also make sense to carry it in -mm for
two full release cycles before sending it to Linus.
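For anyone who wants to see the arithmetic, below is a rough
userspace model of the thresholds involved. It assumes the
compaction_ready() formula from kernels of this era (high watermark
+ balance gap + (2UL << order)) and MAX_ORDER = 11; the zone size
and watermarks are made-up numbers, and the real function also
consults compaction_suitable() and compaction deferral state, so
treat this as a sketch, not the kernel code path.

/*
 * Rough userspace model of the thresholds involved; NOT kernel code.
 * Assumes the ~v4.4-era compaction_ready() formula:
 *     watermark = high_wmark + balance_gap + (2UL << order)
 * The zone size and watermarks are made-up numbers, and the real
 * function also consults compaction_suitable() and deferral state.
 */
#include <stdio.h>

#define MAX_ORDER 11                       /* x86-64 buddy limit, this era */
#define PAGE_ALLOC_COSTLY_ORDER 3
#define KSWAPD_ZONE_BALANCE_GAP_RATIO 100  /* kernel default, this era */

int main(void)
{
	unsigned long managed_pages = 6UL << 20;   /* ~24GB of 4KB pages */
	unsigned long low_wmark = 16384;           /* made up, in pages */
	unsigned long high_wmark = 20480;          /* made up, in pages */
	unsigned long ratio_gap, gap;

	ratio_gap = (managed_pages + KSWAPD_ZONE_BALANCE_GAP_RATIO - 1) /
		    KSWAPD_ZONE_BALANCE_GAP_RATIO;
	gap = low_wmark < ratio_gap ? low_wmark : ratio_gap;

	/* Old code: orders <= PAGE_ALLOC_COSTLY_ORDER never bail out,
	 * so direct reclaim keeps freeing pages with no upper bound. */
	printf("old, order <= %d: no bail-out threshold at all\n",
	       PAGE_ALLOC_COSTLY_ORDER);

	/* New code: any non-zero order bails out once a zone has enough
	 * free memory to compact a MAX_ORDER allocation. */
	printf("new, any order: bail at %lu free pages (~%lu MB)\n",
	       high_wmark + gap + (2UL << MAX_ORDER),
	       (high_wmark + gap + (2UL << MAX_ORDER)) * 4 / 1024);
	return 0;
}

On these made-up numbers it reports a bail-out threshold of 40960
free pages, about 160MB, for the new code.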
> (patch retained for linux-mm)
>
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index fc62546096f9..8dd15d514761 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2584,20 +2584,17 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> >  				continue;	/* Let kswapd poll it */
> > 
> >  			/*
> > -			 * If we already have plenty of memory free for
> > -			 * compaction in this zone, don't free any more.
> > -			 * Even though compaction is invoked for any
> > -			 * non-zero order, only frequent costly order
> > -			 * reclamation is disruptive enough to become a
> > -			 * noticeable problem, like transparent huge
> > -			 * page allocations.
> > +			 * For higher order allocations, free enough memory
> > +			 * to be able to do compaction for the largest possible
> > +			 * allocation. On smaller systems, this may be enough
> > +			 * that smaller allocations can skip compaction, if
> > +			 * enough adjacent pages get freed.
> >  			 */
> > -			if (IS_ENABLED(CONFIG_COMPACTION) &&
> > -			    sc->order > PAGE_ALLOC_COSTLY_ORDER &&
> > +			if (IS_ENABLED(CONFIG_COMPACTION) && sc->order &&
> >  			    zonelist_zone_idx(z) <= requested_highidx &&
> > -			    compaction_ready(zone, sc->order)) {
> > +			    compaction_ready(zone, MAX_ORDER)) {
> >  				sc->compaction_ready = true;
> > -				continue;
> > +				return true;
> >  			}
> > 
> >  			/*
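
Two things are visible in the hunk above: with 4KB pages and
MAX_ORDER = 11, 2 << MAX_ORDER is 4096 pages, so the new check is
satisfied once a zone has roughly 16MB of headroom above its high
watermark (plus the balance gap), rather than after gigabytes of
pageout; and the switch from continue to return true means
shrink_zones() stops walking the remaining zones as soon as one zone
is compaction-ready, instead of skipping just that zone and
reclaiming from the rest. (The 16MB figure assumes the era's
compaction_ready() formula noted in the sketch above.)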
--
All Rights Reversed.