Message-ID: <20160825100707.GU2693@suse.de>
Date: Thu, 25 Aug 2016 11:07:07 +0100
From: Mel Gorman <mgorman@...e.de>
To: Christoph Lameter <cl@...ux.com>
Cc: Michal Hocko <mhocko@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Aruna Ramakrishna <aruna.ramakrishna@...cle.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Mike Kravetz <mike.kravetz@...cle.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jiri Slaby <jslaby@...e.cz>
Subject: Re: what is the purpose of SLAB and SLUB (was: Re: [PATCH v3]
mm/slab: Improve performance of gathering slabinfo) stats
On Wed, Aug 24, 2016 at 11:01:43PM -0500, Christoph Lameter wrote:
> On Wed, 24 Aug 2016, Mel Gorman wrote:
> > If/when I get back to the page allocator, the priority would be a bulk
> > API for faster allocs of batches of order-0 pages instead of allocating
> > a large page and splitting.
> >
>
> OMG. Do we really want to continue this? There are billions of Linux
> devices out there that require a reboot at least once a week. This is now
> standard with certain Android phones. In our company we reboot all
> machines every week because fragmentation degrades performance
> significantly. We need to finally face up to it and deal with the issue
> instead of continuing to produce more half ass-ed solutions.
>

Flipping the lid aside, there will always be a need for fast management
of 4K pages. The primary use case is networking, which sometimes uses
high-order pages to avoid allocator overhead and amortise DMA setup.
Userspace-mapped pages will always be 4K, although fault-around may benefit
from bulk-allocating the pages. That is relatively low-hanging fruit that
would take a few weeks given a free schedule.
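
For illustration, a bulk order-0 API could have roughly this shape
(hand-waving sketch only; alloc_pages_bulk_list() is an invented name and
a real version would refill from the per-cpu lists directly rather than
looping like this):

#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>

/*
 * Sketch: allocate up to nr order-0 pages onto a caller-supplied list.
 * The win would come from pulling the whole batch from the per-cpu free
 * lists under a single IRQ-disabled section, or from the buddy lists
 * under one zone->lock acquisition, instead of paying the full allocator
 * entry cost for every page as this naive loop does.
 */
static unsigned int alloc_pages_bulk_list(gfp_t gfp, unsigned int nr,
					  struct list_head *list)
{
	unsigned int allocated;

	for (allocated = 0; allocated < nr; allocated++) {
		struct page *page = alloc_page(gfp);

		if (!page)
			break;
		list_add_tail(&page->lru, list);
	}

	return allocated;
}
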
Dirty tracking of pages on a 4K boundary will always be required to avoid IO
multiplier effects that cannot be side-stepped by increasing the fundamental
unit of allocation.
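
To put a rough number on that multiplier (toy userspace arithmetic,
assuming a hypothetical 64K fundamental unit and a workload that redirties
one 4K block per unit):

#include <stdio.h>

int main(void)
{
	const unsigned long tracked = 4096;	/* dirty-tracking granularity today */
	const unsigned long unit = 65536;	/* hypothetical larger fundamental unit */

	/*
	 * If dirty state were only known per allocation unit, redirtying a
	 * single 4K block would push the whole unit back to storage on
	 * every writeback pass.
	 */
	printf("worst-case writeback multiplier: %lux\n", unit / tracked);
	return 0;
}
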
Batching of tree_lock during reclaim for large files and swapping is also
relatively low-hanging fruit that is doable in a week or two.
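
The rough shape of that batching (sketch only, not a patch; assumes the
caller has already isolated clean, unmapped pages that all belong to the
same mapping and holds a reference on each):

#include <linux/fs.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/spinlock.h>

/*
 * Sketch: __remove_mapping() currently takes mapping->tree_lock once per
 * page during shrink_page_list(). When reclaiming a run of pages from one
 * large file, the lock could be taken once for the whole batch. Swap-cache
 * pages would need the equivalent treatment via __delete_from_swap_cache().
 */
static void remove_mapping_batch(struct address_space *mapping,
				 struct list_head *pages)
{
	struct page *page;

	spin_lock_irq(&mapping->tree_lock);
	list_for_each_entry(page, pages, lru)
		__delete_from_page_cache(page, NULL);
	spin_unlock_irq(&mapping->tree_lock);

	/* Caller drops the references and frees the pages afterwards. */
}
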
A high-order per-cpu cache for SLUB to reduce zone->lock contention is
also relatively low-hanging fruit, with the caveat that it makes
per_cpu_pages larger than a cache line.
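
The caveat comes straight from the layout (illustration only; the
highorder_* fields below are invented names, not a proposed interface):

#include <linux/mmzone.h>	/* MIGRATE_PCPTYPES, the real struct per_cpu_pages */

/*
 * The existing struct per_cpu_pages is three ints plus MIGRATE_PCPTYPES
 * list heads, which currently fits in a single 64-byte cache line on
 * x86-64. A variant that also caches one high order per migratetype for
 * SLUB-sized allocations might look like this; the extra counter and
 * list heads are what push it past a cache line.
 */
struct per_cpu_pages_highorder {
	int count;		/* number of order-0 pages on the lists */
	int high;		/* high watermark, emptying needed */
	int batch;		/* chunk size for buddy add/remove */
	struct list_head lists[MIGRATE_PCPTYPES];

	/* hypothetical high-order cache */
	int highorder_count;
	struct list_head highorder_lists[MIGRATE_PCPTYPES];
};
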
If you want to rework the VM to use a larger fundamental unit, track
sub-units where required and handle the internal fragmentation issues,
then by all means go ahead and do it.

--
Mel Gorman
SUSE Labs