Message-ID: <20161207173507.abvj3tp3vh6es3yz@techsingularity.net>
Date: Wed, 7 Dec 2016 17:35:07 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Christoph Lameter <cl@...ux.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Linux-MM <linux-mm@...ck.org>,
Linux-Kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: page_alloc: High-order per-cpu page allocator v7
On Wed, Dec 07, 2016 at 11:11:08AM -0600, Christoph Lameter wrote:
> On Wed, 7 Dec 2016, Mel Gorman wrote:
>
> > 3.0-era kernels had better fragmentation control, higher success rates
> > for high-order allocations etc. I vaguely recall that they had fewer
> > sources of high-order allocations, but I don't remember specifics and
> > part of that could be the lack of THP at the time. The overhead was
> > massive due to prolonged stalls and excessive reclaim -- hours to
> > complete some high-order allocation stress tests even when the success
> > rate was high.
>
> There were a couple of high-order page reclaim improvements implemented
> at that time that were later abandoned. I think higher-order pages were
> more readily available then than they are now.
There were, but the cost was high -- lumpy reclaim was a major source of
that cost, though not the only one. The cost of allocation offset any
benefit of having the pages. At least for hugepages it did; I don't know
about SLUB because I never quantified whether the benefit of SLUB using
huge pages was offset by the allocation cost (I doubt it was). The cost
later became intolerable when THP started hitting those paths routinely.
It's not simply a case of going back to how fragmentation control was
managed then, because that would reintroduce excessive stalls in the
allocation paths.
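
As an aside, this is why callers that want high-order pages but cannot
tolerate such stalls usually allocate opportunistically and fall back to
order-0. A hypothetical sketch (not code from this patch; the helper
name and the choice of order-2 are made up for illustration):

	#include <linux/gfp.h>
	#include <linux/mm_types.h>

	static struct page *opportunistic_alloc(unsigned int *order)
	{
		struct page *page;

		/*
		 * Clearing __GFP_DIRECT_RECLAIM makes the high-order
		 * attempt fail fast instead of stalling in reclaim or
		 * compaction.
		 */
		page = alloc_pages((GFP_KERNEL | __GFP_NOWARN) &
				   ~__GFP_DIRECT_RECLAIM, 2);
		if (page) {
			*order = 2;
			return page;
		}

		/* Order-0 fallback is cheap and almost always succeeds. */
		*order = 0;
		return alloc_pages(GFP_KERNEL, 0);
	}

The trade-off is exactly the one described above: no stalls, but the
high-order attempt fails whenever no contiguous block happens to be free.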
--
Mel Gorman
SUSE Labs