Message-ID: <20210128144245.GH3592@techsingularity.net>
Date: Thu, 28 Jan 2021 14:42:45 +0000
From: Mel Gorman <mgorman@...hsingularity.net>
To: Michal Hocko <mhocko@...e.com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>,
Vlastimil Babka <vbabka@...e.cz>,
Christoph Lameter <cl@...ux.com>,
Bharata B Rao <bharata@...ux.ibm.com>,
linux-kernel <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>, guro@...com,
Shakeel Butt <shakeelb@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
aneesh.kumar@...ux.ibm.com, Jann Horn <jannh@...gle.com>
Subject: Re: [RFC PATCH v0] mm/slub: Let number of online CPUs determine the
slub page order

On Thu, Jan 28, 2021 at 02:57:10PM +0100, Michal Hocko wrote:
> On Thu 28-01-21 13:45:12, Mel Gorman wrote:
> [...]
> > So mostly this is down to the number of times SLUB calls into the page
> > allocator which only caches order-0 pages on a per-cpu basis. I do have
> > a prototype for a high-order per-cpu allocator but it is very rough --
> > high watermarks stop making sense, code is rough, memory needed for the
> > pcpu structures quadruples etc.
>
> Thanks, this is really useful. But it raises the question of whether this
> is a general case or more of an exception. As such, maybe we want to
> define high-throughput caches which would be given higher-order pages to
> keep pace with allocation and reduce the churn, or deploy some other
> techniques to reduce direct page allocator involvement.
I don't think we want to define "high throughput caches" because it'll
be workload-dependent and a game of whack-a-mole. If the "high throughput
cache" is a kmalloc cache for one set of workloads and one of the inode
caches or dcaches for another, there will be no setting that is
universally good.
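
To make the per-cpu point quoted above a little more concrete, here is a
rough userspace sketch -- not kernel code; the names, the batch size and
the single-CPU structure are all invented for illustration and only
loosely mirror mm/page_alloc.c -- of why order-0 allocations can be
served from a per-cpu cache while SLUB's high-order allocations always
take the locked slow path, and why caching every order per cpu inflates
the per-cpu structures:

/*
 * Rough userspace sketch, not kernel code: models why only order-0
 * allocations hit a per-CPU fast path while SLUB's high-order slab
 * allocations always fall through to the lock-protected "buddy" path.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SZ		4096
#define PCP_BATCH	32	/* order-0 pages cached per CPU */

/* Simulated per-CPU cache: holds order-0 pages only. */
struct pcp_cache {
	void *pages[PCP_BATCH];
	int count;
};

static struct pcp_cache pcp;	/* a single "CPU" for simplicity */
static pthread_mutex_t zone_lock = PTHREAD_MUTEX_INITIALIZER;

/* Slow path: stands in for the buddy allocator under the zone lock. */
static void *buddy_alloc(unsigned int order)
{
	void *page;

	pthread_mutex_lock(&zone_lock);
	page = malloc((size_t)PAGE_SZ << order);
	pthread_mutex_unlock(&zone_lock);
	return page;
}

static void *alloc_pages_sim(unsigned int order)
{
	/* Fast path: only order-0 can be served from the per-CPU cache. */
	if (order == 0) {
		if (pcp.count == 0) {
			/* Refill the cache in one batched slow-path trip. */
			while (pcp.count < PCP_BATCH)
				pcp.pages[pcp.count++] = buddy_alloc(0);
		}
		return pcp.pages[--pcp.count];
	}

	/*
	 * order > 0: no per-CPU caching at all, so every call takes the
	 * lock.  Caching every order per CPU would avoid this but would
	 * multiply the size of the per-CPU structure by the number of
	 * cached orders -- the footprint concern mentioned above.
	 */
	return buddy_alloc(order);
}

int main(void)
{
	void *slab = alloc_pages_sim(3);	/* e.g. an order-3 SLUB slab */
	void *page = alloc_pages_sim(0);	/* ordinary order-0 page */

	printf("order-3: %p (slow path), order-0: %p (fast path)\n",
	       slab, page);
	free(slab);
	free(page);
	/* Pages still sitting in pcp.pages[] are deliberately leaked. */
	return 0;
}

In the real allocator the slow path is obviously much more than a lock
around malloc(), but the shape of the problem is the same: every
high-order slab allocation pays for a trip under the zone lock.
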
--
Mel Gorman
SUSE Labs