Message-ID: <87wmfsi47b.fsf@DESKTOP-5N7EMDA>
Date: Sun, 22 Dec 2024 15:09:44 +0800
From: "Huang, Ying" <ying.huang@...ux.alibaba.com>
To: Gregory Price <gourry@...rry.net>
Cc: linux-mm@...ck.org,  linux-kernel@...r.kernel.org,
  nehagholkar@...a.com,  abhishekd@...a.com,  kernel-team@...a.com,
  david@...hat.com,  nphamcs@...il.com,  akpm@...ux-foundation.org,
  hannes@...xchg.org,  kbusch@...a.com
Subject: Re: [RFC v2 PATCH 0/5] Promotion of Unmapped Page Cache Folios.

Gregory Price <gourry@...rry.net> writes:

> On Sat, Dec 21, 2024 at 01:18:04PM +0800, Huang, Ying wrote:
>> Gregory Price <gourry@...rry.net> writes:
>> 
>> >
>> > Single-reader DRAM: ~16.0-16.4s
>> > Single-reader CXL (after demotion):  ~16.8-17s
>> 
>> The difference is trivial.  This makes me wonder why we need this
>> patchset.
>>
>
> That's a 3-6% performance difference in this contrived case.

This is small too.

> We're working on testing a real workload that we know suffers from this
> problem, as it is long-running. Results should come early in the new year,
> hopefully.

Good!

To demonstrate the maximum possible performance gain, we can run a pure
file read/write benchmark such as fio on pure DRAM and on pure CXL.
The difference between the two is the maximum possible gain we can get.
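
As a rough sketch (assuming node 0 is the DRAM node, node 1 is the CXL
node, and that fio and numactl are available; paths and sizes below are
only illustrative), something like this would bound the gap:

  # drop the page cache so each run repopulates it on the bound node
  $ echo 3 > /proc/sys/vm/drop_caches

  # buffered reads with the task bound to DRAM (node 0); --membind
  # should also steer the page cache pages it populates, but that is
  # worth double-checking
  $ numactl --membind=0 fio --name=dram --rw=read --bs=1M --size=8G \
        --directory=/mnt/test --time_based --runtime=60

  # repeat with everything bound to the CXL node (node 1)
  $ echo 3 > /proc/sys/vm/drop_caches
  $ numactl --membind=1 fio --name=cxl --rw=read --bs=1M --size=8G \
        --directory=/mnt/test --time_based --runtime=60

The gap between the two bandwidth numbers is an upper bound on what
promotion can recover for this kind of workload.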

>> > Next we turned promotion on with only a single reader running.
>> >
>> > Before promotions:
>> >     Node 0 MemFree:        636478112 kB
>> >     Node 0 FilePages:      59009156 kB
>> >     Node 1 MemFree:        250336004 kB
>> >     Node 1 FilePages:      14979628 kB
>> 
>> Why are there so many file pages on node 1 even though there are a lot
>> of free pages on node 0?  Did you move some file pages from node 0 to
>> node 1?
>> 
>
> This was explicit and explained in the test notes:
>
>   First we ran with promotion disabled to show consistent overhead as
>   a result of forcing a file out to CXL memory. We first ran a single
>   reader to see uncontended performance, launched many readers to force
>   demotions, then dropped back to a single reader to observe.
>
> The goal here was to simply demonstrate functionality and stability.

Got it.

>> > After promotions:
>> >     Node 0 MemFree:        632267268 kB
>> >     Node 0 FilePages:      72204968 kB
>> >     Node 1 MemFree:        262567056 kB
>> >     Node 1 FilePages:       2918768 kB
>> >
>> > Single-reader (after_promotion): ~16.5s
>
> This represents a 2.5-6% speedup depending on the spread.
>
>> >
>> > numa_migrate_prep: 93 - time(3969867917) count(42576860)
>> > migrate_misplaced_folio_prepare: 491 - time(3433174319) count(6985523)
>> > migrate_misplaced_folio: 1635 - time(11426529980) count(6985523)
>> >
>> > Thoughts on a good throttling heuristic would be appreciated here.
>> 
>> We do have a throttle mechanism already; for example, you can use
>> 
>> $ echo 100 > /proc/sys/kernel/numa_balancing_promote_rate_limit_MBps
>> 
>> to rate-limit the promotion throughput to below 100 MB/s for each DRAM
>> node.
>>
>
> We can easily piggyback on that; I just wasn't sure whether overloading
> it was an acceptable idea.

It's the recommended setup in the original PMEM promotion
implementation.  Please check commit c959924b0dc5 ("memory tiering:
adjust hot threshold automatically").
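
For reference, the setup there (values are only illustrative) is to
enable the memory tiering mode of NUMA balancing together with the rate
limit, roughly:

  # 2 == NUMA_BALANCING_MEMORY_TIERING
  $ echo 2 > /proc/sys/kernel/numa_balancing
  $ echo 65536 > /proc/sys/kernel/numa_balancing_promote_rate_limit_MBps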

> Although, since that promotion rate limit is also
> per-task (as far as I know; I will need to read into it a bit more),
> this is probably fine.

It's not per-task.  Please read the code, especially
should_numa_migrate_memory().
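
The rate limit and its statistics are accounted per target (DRAM) node,
not per task.  Assuming a kernel that exposes the memory tiering
counters (paths below are illustrative), you can watch this with:

  # global and per-node promotion/demotion counters
  $ grep -E 'pgpromote|pgdemote' /proc/vmstat
  $ grep -E 'pgpromote|pgdemote' /sys/devices/system/node/node0/vmstat

The gap between pgpromote_candidate and pgpromote_success shows how much
the rate limit is cutting.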

---
Best Regards,
Huang, Ying

