Message-ID: <20250319081432.18130-1-nikhil.dhama@amd.com>
Date: Wed, 19 Mar 2025 13:44:32 +0530
From: Nikhil Dhama <nikhil.dhama@....com>
To: <akpm@...ux-foundation.org>, <ying.huang@...ux.alibaba.com>
CC: Nikhil Dhama <nikhil.dhama@....com>, Ying Huang
<huang.ying.caritas@...il.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, Bharata B Rao <bharata@....com>, Raghavendra
<raghavendra.kodsarathimmappa@....com>
Subject: [PATCH -V2] mm: pcp: scale batch to reduce number of high order pcp flushes on deallocation

On 2/12/2025 2:10 PM, Huang, Ying <ying.huang@...ux.alibaba.com> wrote:
>
> Nikhil Dhama <nikhil.dhama@....com> writes:
>
>> On 1/29/2025 10:01 AM, Andrew Morton wrote:
>>> On Wed, 15 Jan 2025 19:19:02 +0800 "Huang, Ying" <ying.huang@...ux.alibaba.com> wrote:
>>>
>>>> Andrew Morton <akpm@...ux-foundation.org> writes:
>>>>
>>>>> On Tue, 7 Jan 2025 14:47:24 +0530 Nikhil Dhama <nikhil.dhama@....com> wrote:
>>>>>
>>>>>> In the current PCP auto-tuning design, free_count was introduced to
>>>>>> track consecutive page freeing with a counter. This counter is
>>>>>> incremented by the exact number of pages that are freed, but reduced
>>>>>> by half on allocation. This causes the network bandwidth of a 2-node
>>>>>> iperf3 client-to-server setup to drop by 30% if we scale the number
>>>>>> of client-server pairs from 32 (where we achieved peak network
>>>>>> bandwidth) to 64.
>>>>>>
>>>>>> To fix this issue, on allocation, reduce free_count by the exact number
>>>>>> of pages that are allocated instead of halving it.
>>>>> The present division by two appears to be somewhat randomly chosen.
>>>>> And as far as I can tell, this patch proposes replacing that with
>>>>> another somewhat random adjustment.
>>>>>
>>>>> What's the actual design here? What are we attempting to do and why,
>>>>> and why is the proposed design superior to the present one?
>>>> Cc Mel for the original design.
>>>>
>>>> IIUC, pcp->free_count is used to identify a consecutive, pure,
>>>> large-volume page freeing pattern. For that pattern, a larger batch
>>>> will be used to free pages from the PCP to buddy to improve
>>>> performance. A mixed free/allocation pattern should not make
>>>> pcp->free_count large, even if the number of pages freed is much
>>>> larger than the number of pages allocated in the long run. So,
>>>> pcp->free_count decreases rapidly on page allocation.
>>>>
>>>> Hi, Mel, please correct me if my understanding isn't correct.
>>>>
>>> hm, no Mel.
>>>
>>> Nikhil, please do continue to work on this - it seems that there will
>>> be a significant benefit to retuning this.
>>
>> Hi Andrew,
>>
>> I have analyzed the performance of different memory-sensitive
>> workloads for these two ways of decrementing pcp->free_count. I
>> compared the scores among v6.6 mainline, v6.7 mainline, and v6.7 with
>> our patch.
>>
>> For all the benchmarks, I used a 2-socket AMD server with 382 logical CPUs.
>>
>> Results I got are as follows:
>> All scores are normalized with respect to v6.6 (base).
>>
>>
>> For all the benchmarks below (iperf3, lmbench3 unix, netperf, redis, gups, xsbench),
>> a higher score is better.
>>
>>                     iperf3   lmbench3 Unix   1-node netperf       2-node netperf
>>                              (AF_UNIX)       (SCTP_STREAM_MANY)   (SCTP_STREAM_MANY)
>>                     ------   -------------   ------------------   ------------------
>> v6.6 (base)         100      100             100                  100
>> v6.7                69       113.2           99                   98.59
>> v6.7 with my patch  100      112.1           100.3                101.16
>>
>>
>>                     redis standard   redis core   redis L3 Heavy   Gups   xsbench
>>                     --------------   ----------   --------------   ----   -------
>> v6.6 (base)         100              100          100              100    100
>> v6.7                99.45            101.66       99.47            100    98.14
>> v6.7 with my patch  99.76            101.12       99.75            100    99.56
>>
>>
>> and for graph500, hashjoin, pagerank and Kbuild, a lower score is better.
>>
>>                     graph500   hashjoin       hashjoin      pagerank   Kbuild
>>                                (THP always)   (THP never)
>>                     --------   ------------   -----------   --------   ------
>> v6.6 (base)         100        100            100           100        100
>> v6.7                101.08     101.3          101.9         100        98.8
>> v6.7 with my patch  99.73      100            101.66        100        99.6
>>
>> From these results I can conclude that this patch performs better
>> than, or as well as, base v6.7 on almost all of these workloads.
> Sorry, this change doesn't make sense to me.
>
> For example, if a large process exits on a CPU, pcp->free_count will
> increase on this CPU. This is good, because the process can free pages
> more quickly during exit with the larger batching. However, after
> that, pcp->free_count may stay large for a long duration unless a
> large number of page allocations (without a large number of page
> frees) are done on the CPU. So, the page freeing parameter may be
> influenced by some unrelated workload for a long time. That doesn't
> sound good.
>
> In effect, the larger pcp->free_count will increase the page freeing
> batch size. That will improve page freeing throughput but hurt page
> freeing latency. Please check the page freeing latency too. If a
> larger batch number helps performance without regressions, just
> increase the batch number directly instead of playing with
> pcp->free_count.
>
> And, do you run network-related workloads on one machine? If so,
> please try to run them on two machines instead, with clients and
> servers running on different machines. At least, please use different
> sockets for clients and servers, because a larger pcp->free_count will
> make it easier to trigger the free_high heuristics. If that is the
> case, please try to optimize the free_high heuristics directly too.

I agree with Ying Huang that the above change is not the best possible
fix for the issue. On further analysis, I found that the root cause of
the issue is frequent pcp high-order flushes. During a 20-second iperf3
run, I observed on average 5 pcp high-order flushes in kernel v6.6,
whereas in v6.7 I observed about 170 pcp high-order flushes.

Tracing pcp->free_count, I found that with patch v1 (the patch I
suggested earlier), free_count goes negative, which reduces the number
of times the free_high heuristic is triggered and hence reduces the
high-order flushes.
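
To illustrate (a simplified sketch, not the exact kernel code), the two
decrement strategies on the allocation path look like this:

	/* v6.7 mainline: halve the counter on every allocation */
	pcp->free_count >>= 1;

	/* patch v1 (simplified): subtract the exact allocation size.
	 * Under a mixed alloc/free pattern this can drive free_count
	 * negative, so the free_high condition (pcp->free_count >= batch)
	 * holds far less often and high-order flushes become rare.
	 */
	pcp->free_count -= 1 << order;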

As Ying Huang suggested, increasing the batch size for the free_high
heuristics helps performance. I tried different scaling factors to find
the batch value best suited for the free_high heuristics:

                    score   # free_high
-----------------   -----   -----------
v6.6 (base)         100     4
v6.12 (batch*1)     69      170
batch*2             69      150
batch*4             74      101
batch*5             100     53
batch*6             100     36
batch*8             100     3

Scaling the batch for the free_high heuristics by a factor of 5
restores the performance.
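
For a sense of scale: assuming a typical pcp->batch of 63 pages (the
actual value is derived from the zone size, so this is illustrative),
the free_high threshold rises from 63 to 63 * 5 = 315 pages, i.e.
roughly five batches' worth of consecutive high-order frees must
accumulate before a flush to buddy is triggered.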

On the 2-node AMD machine, scores for the other benchmarks with patch
v2 are as follows:

                      iperf3   lmbench3    netperf              kbuild
                               (AF_UNIX)   (SCTP_STREAM_MANY)
                      ------   ---------   ------------------   ------
v6.6 (base)           100      100         100                  100
v6.12                 69       113         98.5                 98.8
v6.12 with patch v2   100      112.5       100.1                99.6

For the network workloads, clients and servers run on different
machines connected via a Mellanox ConnectX-7 NIC.

Number of free_high triggers:

                      iperf3   lmbench3    netperf              kbuild
                               (AF_UNIX)   (SCTP_STREAM_MANY)
                      ------   ---------   ------------------   ------
v6.6 (base)           5        12          6                    2
v6.12                 170      11          92                   2
v6.12 with patch v2   58       11          34                   2

Signed-off-by: Nikhil Dhama <nikhil.dhama@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Ying Huang <huang.ying.caritas@...il.com>
Cc: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org
Cc: Bharata B Rao <bharata@....com>
Cc: Raghavendra <raghavendra.kodsarathimmappa@....com>
---
mm/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b6958333054d..326d5fbae353 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2617,7 +2617,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
* stops will be drained from vmstat refresh context.
*/
if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
- free_high = (pcp->free_count >= batch &&
+ free_high = (pcp->free_count >= (batch*5) &&
(pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) &&
(!(pcp->flags & PCPF_FREE_HIGH_BATCH) ||
pcp->count >= READ_ONCE(batch)));
--
2.25.1