Message-ID: <3a46edcf-88f8-e4f4-8b15-3c02620308e4@intel.com>
Date: Mon, 9 Oct 2017 13:23:34 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Aaron Lu <aaron.lu@...el.com>, linux-mm <linux-mm@...ck.org>,
lkml <linux-kernel@...r.kernel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Andi Kleen <ak@...ux.intel.com>,
Huang Ying <ying.huang@...el.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Kemi Wang <kemi.wang@...el.com>
Subject: Re: [PATCH] page_alloc.c: inline __rmqueue()
On 10/08/2017 10:44 PM, Aaron Lu wrote:
> __rmqueue() is called by rmqueue_bulk() and rmqueue() under zone->lock
> and that lock can be heavily contended with memory intensive applications.
What does "memory intensive" mean? I'd probably just say: "The two
__rmqueue() call sites are in very hot page allocator paths."
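To make the hot-path point concrete, here is a rough sketch of the
calling pattern (simplified, not the actual mm/page_alloc.c code; the
signatures are trimmed down for illustration).  Both callers hold
zone->lock while calling __rmqueue(), so any per-call overhead is paid
inside the contended critical section:

	/*
	 * Simplified sketch -- the real __rmqueue() also handles
	 * fallback migratetypes and CMA.
	 */
	static struct page *__rmqueue(struct zone *zone, unsigned int order,
				      int migratetype)
	{
		return __rmqueue_smallest(zone, order, migratetype);
	}

	/* Simplified sketch of one of the two callers. */
	static int rmqueue_bulk(struct zone *zone, unsigned int order,
				unsigned long count, struct list_head *list,
				int migratetype)
	{
		int i;

		spin_lock(&zone->lock);
		for (i = 0; i < count; i++) {
			/*
			 * Each iteration pays the call overhead of
			 * __rmqueue() while zone->lock is held; inlining
			 * removes that cost from the critical section.
			 */
			struct page *page = __rmqueue(zone, order,
						      migratetype);

			if (!page)
				break;
			list_add_tail(&page->lru, list);
		}
		spin_unlock(&zone->lock);
		return i;
	}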
> Since __rmqueue() is a small function, inlining it can save us some time.
> With the will-it-scale/page_fault1/process benchmark, when using nr_cpu
> processes to stress buddy:
Please include a description of the test and a link to the source.
> On a 2 sockets Intel-Skylake machine:
> base %change head
> 77342 +6.3% 82203 will-it-scale.per_process_ops
What's the unit here? That seems ridiculously low for page_fault1.
It's usually in the millions.
> On a 4 sockets Intel-Skylake machine:
> base %change head
> 75746 +4.6% 79248 will-it-scale.per_process_ops
It's probably worth noting the reason that this is _less_ beneficial on
a larger system.
I'd also just put this in text rather than wasting space in tables like
that. It took me a few minutes to figure out what the table was trying
to say. This is one of those places where LKP output is harmful.
Why not just say:
This patch improved the benchmark by 6.3% on a 2-socket system
and 4.6% on a 4-socket system.
> This patch adds inline to __rmqueue().
How much text bloat does this cost?
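For completeness, the change itself is tiny -- roughly the following
(hypothetical, simplified declarations; the real definition in
mm/page_alloc.c carries a few more details):

	/* before: a single out-of-line copy in .text */
	static struct page *__rmqueue(struct zone *zone, unsigned int order,
				      int migratetype);

	/* after: ask the compiler to fold the body into each caller */
	static inline struct page *__rmqueue(struct zone *zone,
					     unsigned int order,
					     int migratetype);

With two call sites (rmqueue() and rmqueue_bulk()) the body can end up
duplicated, which is where any text growth would come from.  Comparing
"size mm/page_alloc.o" (or running scripts/bloat-o-meter on the old and
new object files) before and after the patch would answer the question.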