Message-ID: <20250212050403.17504-1-nikhil.dhama@amd.com>
Date: Wed, 12 Feb 2025 10:34:03 +0530
From: Nikhil Dhama <nikhil.dhama@....com>
To: <akpm@...ux-foundation.org>
CC: <bharata@....com>, <huang.ying.caritas@...il.com>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
<mgorman@...hsingularity.net>, <nikhil.dhama@....com>,
<raghavendra.kodsarathimmappa@....com>, <ying.huang@...ux.alibaba.com>
Subject: Re: [FIX PATCH] mm: pcp: fix pcp->free_count reduction on page allocation
On 1/29/2025 10:01 AM, Andrew Morton wrote:
>
> On Wed, 15 Jan 2025 19:19:02 +0800 "Huang, Ying" <ying.huang@...ux.alibaba.com> wrote:
>
>> Andrew Morton <akpm@...ux-foundation.org> writes:
>>
>>> On Tue, 7 Jan 2025 14:47:24 +0530 Nikhil Dhama <nikhil.dhama@....com> wrote:
>>>
>>>> In the current PCP auto-tuning design, free_count was introduced to
>>>> track consecutive page freeing with a counter. This counter is
>>>> incremented by the exact number of pages that are freed, but reduced
>>>> by half on allocation. This causes the network bandwidth of a 2-node
>>>> iperf3 client-to-server run to drop by 30% when we scale the number of
>>>> client-server pairs from 32 (where we achieved peak network bandwidth)
>>>> to 64.
>>>>
>>>> To fix this issue, on allocation, reduce free_count by the exact number
>>>> of pages that are allocated instead of halving it.
>>> The present division by two appears to be somewhat randomly chosen.
>>> And as far as I can tell, this patch proposes replacing that with
>>> another somewhat random adjustment.
>>>
>>> What's the actual design here? What are we attempting to do and why,
>>> and why is the proposed design superior to the present one?
>> Cc Mel for the original design.
>>
>> IIUC, pcp->free_count is used to identify a consecutive, pure,
>> large-volume page freeing pattern. For that pattern, a larger batch
>> will be used to free pages from the PCP to the buddy allocator to
>> improve performance. A mixed free/allocation pattern should not make
>> pcp->free_count large, even if the number of pages freed is much larger
>> than the number of pages allocated in the long run. So, pcp->free_count
>> decreases rapidly on page allocation.
>>
>> Hi, Mel, please correct me if my understanding isn't correct.
>>
> hm, no Mel.
>
> Nikhil, please do continue to work on this - it seems that there will
> be a significant benefit to retuning this.
Hi Andrew,

I have analyzed the performance of several memory-sensitive workloads for
these two ways of decrementing pcp->free_count (a small toy model of the
two strategies is appended at the end of this mail). I compared the scores
among v6.6 mainline, v6.7 mainline, and v6.7 with our patch.
For all the benchmarks, I used a 2-socket AMD server with 382 logical CPUs.
Results I got are as follows:
All scores are normalized with respect to v6.6 (base).
For all the benchmarks below (iperf3, lmbench3 unix, netperf, redis, gups, xsbench),
a higher score is better.

                    iperf3  lmbench3 Unix  1-node netperf      2-node netperf
                            (AF_UNIX)      (SCTP_STREAM_MANY)  (SCTP_STREAM_MANY)
                    ------  -------------  ------------------  ------------------
v6.6 (base)         100     100            100                 100
v6.7                 69     113.2           99                  98.59
v6.7 with my patch  100     112.1          100.3               101.16

                    redis standard  redis core  redis L3 Heavy  Gups  xsbench
                    --------------  ----------  --------------  ----  -------
v6.6 (base)         100             100         100             100   100
v6.7                 99.45          101.66       99.47          100    98.14
v6.7 with my patch   99.76          101.12       99.75          100    99.56

For graph500, hashjoin, pagerank, and Kbuild, a lower score is better.

                    graph500  hashjoin      hashjoin     pagerank  Kbuild
                              (THP always)  (THP never)
                    --------  ------------  -----------  --------  ------
v6.6 (base)         100       100           100          100       100
v6.7                101.08    101.3         101.9        100        98.8
v6.7 with my patch   99.73    100           101.66       100        99.6

From these results I conclude that this patch performs better than, or on
par with, base v6.7 on almost all of these workloads.
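
For reference, below is a small userspace toy model of the two decrement
strategies compared above. It is purely illustrative: the function names
are made up, and the real bookkeeping in mm/page_alloc.c also deals with
page orders, batch scaling and clamping that this sketch ignores.

/*
 * Toy model: pcp->free_count under a mixed pattern that frees
 * 8 pages and then allocates 1 page, repeated 8 times.
 */
#include <stdio.h>

/* v6.7 behaviour: any allocation halves the accumulated free run. */
static long halve_on_alloc(long free_count)
{
        return free_count >> 1;
}

/* Proposed behaviour: subtract only the pages actually allocated. */
static long subtract_on_alloc(long free_count, long nr)
{
        return free_count > nr ? free_count - nr : 0;
}

int main(void)
{
        long fc_halve = 0, fc_sub = 0;
        int i;

        for (i = 0; i < 8; i++) {
                fc_halve += 8;                          /* eight order-0 frees */
                fc_sub += 8;
                fc_halve = halve_on_alloc(fc_halve);    /* one allocation */
                fc_sub = subtract_on_alloc(fc_sub, 1);
        }
        printf("halve on alloc:    free_count = %ld\n", fc_halve);
        printf("subtract on alloc: free_count = %ld\n", fc_sub);
        return 0;
}

In this toy run the halving strategy settles at free_count = 7, while
exact subtraction ends at 56: halving forgets a freeing streak as soon as
allocations are mixed in, whereas subtraction lets free_count grow with
the net number of pages freed. That behavioural difference is what the
workloads above are probing.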