Message-Id: <20250107210506.3336da0da4332002847c89a3@linux-foundation.org>
Date: Tue, 7 Jan 2025 21:05:06 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: Nikhil Dhama <nikhil.dhama@....com>
Cc: Ying Huang <huang.ying.caritas@...il.com>, <linux-mm@...ck.org>,
 <linux-kernel@...r.kernel.org>, Bharata B Rao <bharata@....com>,
 Raghavendra <raghavendra.kodsarathimmappa@....com>
Subject: Re: [FIX PATCH] mm: pcp: fix pcp->free_count reduction on page
 allocation

On Tue, 7 Jan 2025 14:47:24 +0530 Nikhil Dhama <nikhil.dhama@....com> wrote:

> In the current PCP auto-tuning design, free_count was introduced to track
> consecutive page freeing with a counter. This counter is incremented
> by the exact number of pages that are freed, but reduced by half on
> allocation. This causes a 2-node iperf3 client-to-server network
> bandwidth drop of 30% when we scale the number of client-server pairs
> from 32 (where we achieved peak network bandwidth) to 64.
> 
> To fix this issue, on allocation, reduce free_count by the exact number
> of pages that are allocated instead of halving it.
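
For concreteness, here is a minimal userspace sketch (not the actual
mm/page_alloc.c code; the function names and the alloc/free pattern are
made up for illustration) of how the two accounting policies diverge when
small allocations are interleaved with a burst of frees:

	#include <stdio.h>

	static long free_count;

	/* Freeing: both policies credit the exact number of pages freed. */
	static void on_free(long nr_pages)
	{
		free_count += nr_pages;
	}

	/* Current policy: halve the counter on every allocation. */
	static void on_alloc_halving(long nr_pages)
	{
		(void)nr_pages;
		free_count >>= 1;
	}

	/* Proposed policy: subtract exactly what was allocated, clamped at zero. */
	static void on_alloc_exact(long nr_pages)
	{
		free_count -= nr_pages;
		if (free_count < 0)
			free_count = 0;
	}

	int main(void)
	{
		/* Hypothetical pattern: two bursts of 64 frees, each followed
		 * by a single-page allocation.
		 */
		free_count = 0;
		on_free(64); on_alloc_halving(1); on_free(64); on_alloc_halving(1);
		printf("halving policy: free_count = %ld\n", free_count);	/* 48 */

		free_count = 0;
		on_free(64); on_alloc_exact(1); on_free(64); on_alloc_exact(1);
		printf("exact policy:   free_count = %ld\n", free_count);	/* 126 */

		return 0;
	}

Under the halving policy the counter decays geometrically no matter how few
pages an allocation actually takes; the exact-subtraction policy only debits
what was allocated, so a free-heavy workload keeps a much larger free_count.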

The present division by two appears to be somewhat randomly chosen. 
And as far as I can tell, this patch proposes replacing that with
another somewhat random adjustment.

What's the actual design here?  What are we attempting to do and why,
and why is the proposed design superior to the present one?

> On a 2-node AMD server, with one node running the iperf3 clients and the
> other the iperf3 server, this patch recovers the lost performance.

Nice, but might other workloads on other machines get slower?


