Message-ID: <Z2bVWWuGe0aiv-t_@gourry-fedora-PF4VCD3F>
Date: Sat, 21 Dec 2024 09:48:57 -0500
From: Gregory Price <gourry@...rry.net>
To: "Huang, Ying" <ying.huang@...ux.alibaba.com>
Cc: Gregory Price <gourry@...rry.net>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, nehagholkar@...a.com,
	abhishekd@...a.com, kernel-team@...a.com, david@...hat.com,
	nphamcs@...il.com, akpm@...ux-foundation.org, hannes@...xchg.org,
	kbusch@...a.com
Subject: Re: [RFC v2 PATCH 0/5] Promotion of Unmapped Page Cache Folios.

On Sat, Dec 21, 2024 at 01:18:04PM +0800, Huang, Ying wrote:
> Gregory Price <gourry@...rry.net> writes:
> 
> >
> > Single-reader DRAM: ~16.0-16.4s
> > Single-reader CXL (after demotion):  ~16.8-17s
> 
> The difference is trivial.  This makes me wonder why we need this
> patchset.
>

That's a 3-6% performance difference, even in this contrived case.

We're working on testing a real, long-running workload that we know
suffers from this problem.  Hopefully that will land early in the new year.

> > Next we turned promotion on with only a single reader running.
> >
> > Before promotions:
> >     Node 0 MemFree:        636478112 kB
> >     Node 0 FilePages:      59009156 kB
> >     Node 1 MemFree:        250336004 kB
> >     Node 1 FilePages:      14979628 kB
> 
> Why are there so many file pages on node 1 even though there are a lot
> of free pages on node 0?  Did you move some file pages from node 0 to
> node 1?
> 

This was deliberate, and was explained in the test notes:

  First we ran with promotion disabled to show consistent overhead as
  a result of forcing a file out to CXL memory. We first ran a single
  reader to see uncontended performance, launched many readers to force
  demotions, then dropped back to a single reader to observe.

The goal here was simply to demonstrate functionality and stability.
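
For reference, the per-node counters above come straight from sysfs and
can be sampled with something like this (standard sysfs paths; the
reader workload itself is omitted):

    $ grep -E 'MemFree|FilePages' /sys/devices/system/node/node{0,1}/meminfo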

> > After promotions:
> >     Node 0 MemFree:        632267268 kB
> >     Node 0 FilePages:      72204968 kB
> >     Node 1 MemFree:        262567056 kB
> >     Node 1 FilePages:       2918768 kB
> >
> > Single-reader (after_promotion): ~16.5s

This represents a 2.5-6% speedup depending on the spread.

> >
> > numa_migrate_prep: 93 - time(3969867917) count(42576860)
> > migrate_misplaced_folio_prepare: 491 - time(3433174319) count(6985523)
> > migrate_misplaced_folio: 1635 - time(11426529980) count(6985523)
> >
> > Thoughts on a good throttling heuristic would be appreciated here.
> 
> We do have a throttle mechanism already; for example, you can use
> 
> $ echo 100 > /proc/sys/kernel/numa_balancing_promote_rate_limit_MBps
> 
> to limit the promotion throughput to under 100 MB/s for each DRAM
> node.
>

We can easily piggyback on that; I just wasn't sure whether overloading
it was an acceptable idea.  Although, since that promotion rate limit is
also per-task (as far as I know; I'll need to read into it a bit more),
this is probably fine.
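
One rough way to experiment with that knob is to watch actual promotion
throughput while tuning the limit (a sketch; the pgpromote_* counters
assume a kernel built with memory tiering promotion stats):

    $ echo 100 > /proc/sys/kernel/numa_balancing_promote_rate_limit_MBps
    $ while sleep 1; do grep pgpromote /proc/vmstat; done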

~Gregory
