Message-ID: <20250321155244.00006338@huawei.com>
Date: Fri, 21 Mar 2025 15:52:44 +0000
From: Jonathan Cameron <Jonathan.Cameron@...wei.com>
To: Raghavendra K T <raghavendra.kt@....com>
CC: <AneeshKumar.KizhakeVeetil@....com>, <Hasan.Maruf@....com>,
	<Michael.Day@....com>, <akpm@...ux-foundation.org>, <bharata@....com>,
	<dave.hansen@...el.com>, <david@...hat.com>, <dongjoo.linux.dev@...il.com>,
	<feng.tang@...el.com>, <gourry@...rry.net>, <hannes@...xchg.org>,
	<honggyu.kim@...com>, <hughd@...gle.com>, <jhubbard@...dia.com>,
	<jon.grimm@....com>, <k.shutemov@...il.com>, <kbusch@...a.com>,
	<kmanaouil.dev@...il.com>, <leesuyeon0506@...il.com>, <leillc@...gle.com>,
	<liam.howlett@...cle.com>, <linux-kernel@...r.kernel.org>,
	<linux-mm@...ck.org>, <mgorman@...hsingularity.net>, <mingo@...hat.com>,
	<nadav.amit@...il.com>, <nphamcs@...il.com>, <peterz@...radead.org>,
	<riel@...riel.com>, <rientjes@...gle.com>, <rppt@...nel.org>,
	<santosh.shukla@....com>, <shivankg@....com>, <shy828301@...il.com>,
	<sj@...nel.org>, <vbabka@...e.cz>, <weixugc@...gle.com>,
	<willy@...radead.org>, <ying.huang@...ux.alibaba.com>, <ziy@...dia.com>,
	<dave@...olabs.net>
Subject: Re: [RFC PATCH V1 00/13] mm: slowtier page promotion based on PTE A
 bit

On Wed, 19 Mar 2025 19:30:15 +0000
Raghavendra K T <raghavendra.kt@....com> wrote:

> Introduction:
> =============
> In the current hot page promotion, all the activities, including the
> process address space scanning, NUMA hint fault handling and page
> migration, are performed in the process context, i.e., the scanning
> overhead is borne by applications.
> 
> This is an RFC V1 patch series for (slow tier) CXL page promotion.
> The approach in this patchset addresses the issue by adding PTE
> Accessed-bit scanning.
> 
> Scanning is done by a global kernel thread which routinely scans all
> the processes' address spaces and checks for accesses by reading the
> PTE A bit. 
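
For concreteness, one scan pass over a single mm could look roughly like the
sketch below, using the generic page-walk API and ptep_test_and_clear_young()
to sample the Accessed bit (illustrative only; kscand_pte_entry() and
kscand_scan_mm() are made-up names, not the actual code from this series):

#include <linux/mm.h>
#include <linux/pagewalk.h>

static int kscand_pte_entry(pte_t *pte, unsigned long addr,
			    unsigned long next, struct mm_walk *walk)
{
	unsigned long *nr_accessed = walk->private;

	/*
	 * Test-and-clear the Accessed bit: a set bit means the page was
	 * touched since the previous scan pass.
	 */
	if (ptep_test_and_clear_young(walk->vma, addr, pte))
		(*nr_accessed)++;

	return 0;
}

static const struct mm_walk_ops kscand_ops = {
	.pte_entry = kscand_pte_entry,
};

/* One scan pass over a single mm, called from the scanning kthread. */
static unsigned long kscand_scan_mm(struct mm_struct *mm)
{
	unsigned long nr_accessed = 0;
	struct vm_area_struct *vma;
	VMA_ITERATOR(vmi, mm, 0);

	mmap_read_lock(mm);
	for_each_vma(vmi, vma)
		walk_page_range(mm, vma->vm_start, vma->vm_end,
				&kscand_ops, &nr_accessed);
	mmap_read_unlock(mm);

	return nr_accessed;
}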
> 
> A separate migration thread migrates/promotes the pages to the toptier
> node based on a simple heuristic that uses toptier scan/access information
> of the mm.
> 
> Additionally, based on the feedback for RFC V0 [4], a prctl knob with
> a scalar value is provided to control per-task scanning.
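
From userspace, such a knob would presumably be used along these lines (the
actual prctl option name and value semantics come from the patches and are
not shown in this mail; PR_SET_MEM_SCAN below is a placeholder):

#include <stdio.h>
#include <sys/prctl.h>

/* Placeholder constant for illustration; not a real kernel ABI value. */
#ifndef PR_SET_MEM_SCAN
#define PR_SET_MEM_SCAN 0x1000
#endif

int main(void)
{
	/*
	 * Scalar controlling how aggressively this task's address space is
	 * scanned (assumed semantics; e.g. 0 could mean "do not scan").
	 */
	if (prctl(PR_SET_MEM_SCAN, 2UL, 0UL, 0UL, 0UL) == -1)
		perror("prctl");

	return 0;
}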
> 
> Initial results show promising numbers on a microbenchmark. We will
> soon get numbers with real benchmarks, along with findings and tunings.
> 
> Experiment:
> ============
> Abench microbenchmark,
> - Allocates 8GB/16GB/32GB/64GB of memory on CXL node
> - 64 threads created, and each thread randomly accesses pages in 4K
>   granularity.

So if I'm reading this right, this is a flat distribution and any
estimate of what is hot is noise?

That will put a positive spin on the costs of migration, as we will
be moving something that isn't really all that hot and so is moderately
unlikely to be accessed whilst migration is going on.  Or is the point that
the rest of the memory is also mapped but not being accessed?

I'm not entirely sure I follow what this is bound by. Is it bandwidth
bound?


> - 512 iterations with a delay of 1 us between two successive iterations.
> 
> SUT: 512 CPU, 2 node 256GB, AMD EPYC.
> 
> 3 runs, command:  abench -m 2 -d 1 -i 512 -s <size>
> 
> Calculates how much time is taken to complete the task; lower is better.
> The expectation is that CXL node memory is migrated as fast as
> possible.
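
For reference, the access pattern as described would be something like the
sketch below, with the buffer placed on the CXL node via libnuma and touched
at uniformly random 4K offsets (my reconstruction, not abench itself; the CXL
node id and the per-iteration access count are guesses):

#include <numa.h>
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

#define NR_THREADS	64
#define PAGE_SZ		4096UL

static char *buf;
static size_t nr_pages;

static void *worker(void *arg)
{
	unsigned int seed = (uintptr_t)arg;
	int iter, i;

	for (iter = 0; iter < 512; iter++) {
		/* Touch pages uniformly at random in 4K granularity. */
		for (i = 0; i < 100000; i++)
			buf[(rand_r(&seed) % nr_pages) * PAGE_SZ] = 1;
		usleep(1);	/* 1 us delay between iterations */
	}

	return NULL;
}

int main(void)
{
	size_t size = 8UL << 30;	/* -s <size>, e.g. 8GB */
	int cxl_node = 1;		/* assumed CXL node id */
	pthread_t tids[NR_THREADS];
	long t;

	buf = numa_alloc_onnode(size, cxl_node);
	if (!buf)
		return 1;
	nr_pages = size / PAGE_SZ;

	for (t = 0; t < NR_THREADS; t++)
		pthread_create(&tids[t], NULL, worker, (void *)(t + 1));
	for (t = 0; t < NR_THREADS; t++)
		pthread_join(tids[t], NULL);

	numa_free(buf, size);
	return 0;
}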

> 
> Base case:    6.14-rc6 w/ numab mode = 2 (hot page promotion is enabled).
> Patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled);
> we expect the daemon to do page promotion.
> 
> Result:
> ========
>          base NUMAB2                    patched NUMAB1
>          time in sec  (%stdev)   time in sec  (%stdev)     %gain
>  8GB     134.33       ( 0.19 )        120.52  ( 0.21 )     10.28
> 16GB     292.24       ( 0.60 )        275.97  ( 0.18 )      5.56
> 32GB     585.06       ( 0.24 )        546.49  ( 0.35 )      6.59
> 64GB    1278.98       ( 0.27 )       1205.20  ( 2.29 )      5.76
> 
> Base case: 6.14-rc6    w/ numab mode = 1 (numa balancing is enabled).
> patched case: 6.14-rc6 w/ numab mode = 1 (numa balancing is enabled).
>          base NUMAB1                    patched NUMAB1
>          time in sec  (%stdev)   time in sec  (%stdev)     %gain
>  8GB     186.71       ( 0.99 )        120.52  ( 0.21 )     35.45 
> 16GB     376.09       ( 0.46 )        275.97  ( 0.18 )     26.62 
> 32GB     744.37       ( 0.71 )        546.49  ( 0.35 )     26.58 
> 64GB    1534.49       ( 0.09 )       1205.20  ( 2.29 )     21.45

Nice numbers, but maybe some more details on what they are showing?
At what point in the workload has all the memory migrated to the
fast node, or does that never happen?

I'm confused :(

Jonathan


