Message-ID: <20180216160116.GA24395@bombadil.infradead.org>
Date: Fri, 16 Feb 2018 08:01:16 -0800
From: Matthew Wilcox <willy@...radead.org>
To: Christopher Lameter <cl@...ux.com>
Cc: Michal Hocko <mhocko@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jonathan Corbet <corbet@....net>,
Vlastimil Babka <vbabka@...e.cz>, Mel Gorman <mgorman@...e.de>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-doc@...r.kernel.org
Subject: Re: [patch 1/2] mm, page_alloc: extend kernelcore and movablecore for percent

On Fri, Feb 16, 2018 at 09:44:25AM -0600, Christopher Lameter wrote:
> On Thu, 15 Feb 2018, Matthew Wilcox wrote:
> > What I was proposing was an intermediate page allocator where slab would
> > request 2MB for its own uses all at once, then allocate pages from that to
> > individual slabs, so allocating a kmalloc-32 object and a dentry object
> > would result in 510 pages of memory still being available for any slab
> > that needed it.
>
> Well that's not really going to work, since you would be mixing objects
> of different sizes, which may present more fragmentation problems within
> the 2M later if they are freed and more objects are allocated.

I don't understand this response. I'm not suggesting mixing objects
of different sizes within the same page. The vast majority of slabs
use order-0 pages, a few use order-1 pages, and larger sizes are
almost unheard of. I'm suggesting that slab have its own private
arena of pages that it uses to allocate pages to individual slabs;
when an entire page comes free in a slab, it is returned to the arena.
When the arena is empty, slab requests another arena from the page
allocator.
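
To make the shape of that concrete, here is a toy userspace model of
the idea. The names (struct page_arena, arena_alloc_page,
arena_free_page) are invented purely for illustration, not a proposed
interface, and mmap() stands in for the real page allocator:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define ARENA_SIZE      (2UL << 20)             /* one 2MB arena */
#define PAGE_SZ         4096UL
#define PAGES_PER_ARENA (ARENA_SIZE / PAGE_SZ)  /* 512 */

struct free_page {
        struct free_page *next;
};

struct page_arena {
        void *base;                     /* start of the current 2MB chunk */
        unsigned long next_unused;      /* bump index for never-used pages */
        struct free_page *free_list;    /* pages handed back by slabs */
};

static int arena_refill(struct page_arena *a)
{
        /* In the kernel this would be one large request to the page
         * allocator; here mmap() stands in for it. */
        a->base = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (a->base == MAP_FAILED)
                return -1;
        a->next_unused = 0;
        return 0;
}

/* Hand one order-0 page to whichever slab asked for it. */
static void *arena_alloc_page(struct page_arena *a)
{
        if (a->free_list) {
                struct free_page *p = a->free_list;
                a->free_list = p->next;
                return p;
        }
        if (a->next_unused < PAGES_PER_ARENA)
                return (char *)a->base + PAGE_SZ * a->next_unused++;
        /* Arena exhausted: go back to the page allocator for another. */
        return arena_refill(a) ? NULL : arena_alloc_page(a);
}

/* A page that came entirely free in some slab goes back on the
 * arena's list, where any other slab can pick it up. */
static void arena_free_page(struct page_arena *a, void *page)
{
        struct free_page *p = page;

        p->next = a->free_list;
        a->free_list = p;
}

int main(void)
{
        struct page_arena arena = { 0 };
        void *kmalloc32_page, *dentry_page;

        if (arena_refill(&arena))
                return 1;

        kmalloc32_page = arena_alloc_page(&arena);
        dentry_page = arena_alloc_page(&arena);
        printf("pages still available to any slab: %lu\n",
               PAGES_PER_ARENA - 2);            /* the 510 from above */

        arena_free_page(&arena, dentry_page);
        arena_free_page(&arena, kmalloc32_page);
        return 0;
}

The point is just that the kmalloc-32 page and the dentry page both
come out of the same 2MB chunk, and a page that comes entirely free
goes back on the arena's free list where any other slab can reuse it.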
If you're concerned about order-0 allocations fragmenting the arena
for order-1 slabs, then we could have separate arenas for order-0 and
order-1. But there should be no more fragmentation caused by sticking
within an arena for page allocations than there would be by spreading
slab allocations across all memory.
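If we did go that way, the sketch above barely changes: the single
arena becomes an array indexed by order, along the lines of the
following (again invented names, and assuming an order-aware variant
of arena_alloc_page() that hands back 1 << order contiguous pages
from its own arena):

/* one arena per slab page order, so order-0 churn never eats into
 * the space the order-1 slabs draw their pages from */
struct page_arena slab_arenas[2];       /* [0]: order-0 slabs, [1]: order-1 */

static void *slab_get_pages(unsigned int order)
{
        /* hypothetical order-aware variant of arena_alloc_page() */
        return arena_alloc_pages(&slab_arenas[order], order);
}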