Message-ID: <1456368447.25322.23.camel@redhat.com>
Date: Wed, 24 Feb 2016 21:47:27 -0500
From: Rik van Riel <riel@...hat.com>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: David Rientjes <rientjes@...gle.com>, linux-kernel@...r.kernel.org,
hannes@...xchg.org, akpm@...ux-foundation.org, vbabka@...e.cz,
mgorman@...e.de
Subject: Re: [PATCH] mm: limit direct reclaim for higher order allocations
On Thu, 2016-02-25 at 09:30 +0900, Joonsoo Kim wrote:
> On Wed, Feb 24, 2016 at 05:17:56PM -0500, Rik van Riel wrote:
> > On Wed, 2016-02-24 at 14:15 -0800, David Rientjes wrote:
> > > On Wed, 24 Feb 2016, Rik van Riel wrote:
> > >
> > > > For multi page allocations smaller than
> > > > PAGE_ALLOC_COSTLY_ORDER,
> > > > the kernel will do direct reclaim if compaction failed for any
> > > > reason. This worked fine when Linux systems had 128MB RAM, but
> > > > on my 24GB system I frequently see higher order allocations
> > > > free up over 3GB of memory, pushing all kinds of things into
> > > > swap, and slowing down applications.
> > > >
> > >
> > > Just curious, are these higher order allocations typically done by
> > > the slub allocator or where are they coming from?
> >
> > These are slab allocator ones, indeed.
> >
> > The allocations seem to be order 2 and 3, mostly
> > on behalf of the inode cache and alloc_skb.
>
> Hello, Rik.
>
> Could you tell me the kernel version you tested?
>
> Commit 45eb00cd3a03 (mm/slub: don't wait for high-order page
> allocation) changes slub allocator's behaviour that high order
> allocation request by slub doesn't cause direct reclaim.
The system I observed the problem on has a
4.2 based kernel on it. That would explain it.
Are we sure the problem is limited just to
slub, though?
--
All Rights Reversed.