Message-ID: <1456352276.25322.7.camel@redhat.com>
Date: Wed, 24 Feb 2016 17:17:56 -0500
From: Rik van Riel <riel@...hat.com>
To: David Rientjes <rientjes@...gle.com>
Cc: linux-kernel@...r.kernel.org, hannes@...xchg.org,
akpm@...ux-foundation.org, vbabka@...e.cz, mgorman@...e.de
Subject: Re: [PATCH] mm: limit direct reclaim for higher order allocations
On Wed, 2016-02-24 at 14:15 -0800, David Rientjes wrote:
> On Wed, 24 Feb 2016, Rik van Riel wrote:
>
> > For multi-page allocations no larger than PAGE_ALLOC_COSTLY_ORDER,
> > the kernel will do direct reclaim if compaction failed for any
> > reason. This worked fine when Linux systems had 128MB RAM, but
> > on my 24GB system I frequently see higher order allocations
> > free up over 3GB of memory, pushing all kinds of things into
> > swap, and slowing down applications.
> >
>
> Just curious, are these higher order allocations typically done by
> the slub allocator or where are they coming from?
These are slab allocator ones, indeed.
The allocations seem to be order 2 and 3, mostly
on behalf of the inode cache and alloc_skb.
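For illustration, this is roughly how an object size turns into a page
order. The sketch below is a user-space re-implementation of the
kernel's get_order() math, assuming 4KB pages; the sample sizes are
made-up examples, not measurements from this workload, and SLUB may
round a cache up to a larger order than this minimum for packing
reasons:

#include <stdio.h>

#define PAGE_SHIFT 12        /* assume 4KB pages */

/* Minimum page order that fits "size" bytes; same math as the
 * kernel's get_order(). Not valid for size == 0. */
static int get_order(unsigned long size)
{
        int order = 0;

        size = (size - 1) >> PAGE_SHIFT;
        while (size) {
                order++;
                size >>= 1;
        }
        return order;
}

int main(void)
{
        /* Hypothetical sizes in the kmalloc/skb ballpark */
        unsigned long sizes[] = { 4096, 9000, 16384, 32768 };

        for (int i = 0; i < 4; i++) {
                int order = get_order(sizes[i]);

                printf("%5lu bytes -> order %d (%lu pages)\n",
                       sizes[i], order, 1UL << order);
        }
        return 0;
}

A multi-page slab in the ~9-32KB range lands at order 2 or 3, which is
at or below PAGE_ALLOC_COSTLY_ORDER (3), so before this patch those
allocations never reached the compaction_ready() bail-out at all.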
> > It would be much better to limit the amount of reclaim done,
> > rather than cause excessive pageout activity.
> >
> > When enough memory is free to do compaction for the highest order
> > allocation possible, bail out of the direct page reclaim code.
> >
> > On smaller systems, this may be enough to obtain contiguous
> > free memory areas to satisfy small allocations, continuing our
> > strategy of relying on luck occasionally. On larger systems,
> > relying on luck like that has not been working for years.
> >
> > Signed-off-by: Rik van Riel <riel@...hat.com>
> > ---
> > mm/vmscan.c | 19 ++++++++-----------
> > 1 file changed, 8 insertions(+), 11 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index fc62546096f9..8dd15d514761 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2584,20 +2584,17 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> >                                  continue;       /* Let kswapd poll it */
> > 
> >                          /*
> > -                         * If we already have plenty of memory free for
> > -                         * compaction in this zone, don't free any more.
> > -                         * Even though compaction is invoked for any
> > -                         * non-zero order, only frequent costly order
> > -                         * reclamation is disruptive enough to become a
> > -                         * noticeable problem, like transparent huge
> > -                         * page allocations.
> > +                         * For higher order allocations, free enough memory
> > +                         * to be able to do compaction for the largest possible
> > +                         * allocation. On smaller systems, this may be enough
> > +                         * that smaller allocations can skip compaction, if
> > +                         * enough adjacent pages get freed.
> >                          */
> > -                        if (IS_ENABLED(CONFIG_COMPACTION) &&
> > -                            sc->order > PAGE_ALLOC_COSTLY_ORDER &&
> > +                        if (IS_ENABLED(CONFIG_COMPACTION) && sc->order &&
> >                              zonelist_zone_idx(z) <= requested_highidx &&
> > -                            compaction_ready(zone, sc->order)) {
> > +                            compaction_ready(zone, MAX_ORDER)) {
> >                                  sc->compaction_ready = true;
> > -                                continue;
> > +                                return true;
> >                          }
> > 
> >                          /*
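
To make the effect of passing MAX_ORDER concrete, here is a user-space
model of the headroom check that compaction_ready() boils down to in
this era's mm/vmscan.c. Treat it as an approximation: the real function
also adds a per-zone balance gap and asks compaction_suitable() whether
compaction could run at all, and struct zone_model, its field values,
and compaction_ready_model() are made up for this sketch:

#include <stdio.h>

#define PAGE_ALLOC_COSTLY_ORDER 3
#define MAX_ORDER               11      /* typical value; config dependent */

/* Toy zone: just the two numbers the check below needs. */
struct zone_model {
        unsigned long free_pages;
        unsigned long high_wmark;       /* high watermark, in pages */
};

/* Core of the check: reclaim is "done" once free pages exceed the
 * high watermark plus 2^(order+1) pages of headroom for compaction
 * to work in. */
static int compaction_ready_model(const struct zone_model *z, int order)
{
        unsigned long needed = z->high_wmark + (2UL << order);

        return z->free_pages >= needed;
}

int main(void)
{
        /* Hypothetical zone: high watermark 4096 pages (16MB with
         * 4KB pages), currently 6000 pages free (~23MB). */
        struct zone_model z = { .free_pages = 6000, .high_wmark = 4096 };
        int orders[] = { PAGE_ALLOC_COSTLY_ORDER, MAX_ORDER };

        for (int i = 0; i < 2; i++)
                printf("order %2d: headroom %4lu pages, ready=%d\n",
                       orders[i], 2UL << orders[i],
                       compaction_ready_model(&z, orders[i]));
        return 0;
}

With order = MAX_ORDER the headroom term 2UL << order comes to 4096
pages, about 16MB per zone with 4KB pages. So the patched check keeps
direct reclaim going until there is enough slack for compaction at any
order, then shrink_zones() returns true and reclaim stops, rather than
freeing gigabytes and pushing working memory into swap.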
--
All Rights Reversed.