Message-ID: <f6b7dd9c-5ace-4816-842f-ac1265c0f9ee@intel.com>
Date: Wed, 6 Nov 2024 17:10:22 -0800
From: Reinette Chatre <reinette.chatre@...el.com>
To: Peter Newman <peternewman@...gle.com>, <fenghua.yu@...el.com>
CC: <babu.moger@....com>, <bp@...en8.de>, <dave.hansen@...ux.intel.com>,
	<eranian@...gle.com>, <hpa@...or.com>, <james.morse@....com>,
	<linux-kernel@...r.kernel.org>, <mingo@...hat.com>, <nert.pinx@...il.com>,
	<tan.shaopeng@...itsu.com>, <tglx@...utronix.de>, <tony.luck@...el.com>,
	<x86@...nel.org>
Subject: Re: [PATCH v2 2/2] x86/resctrl: Don't workqueue local event counter
 reads

Hi Peter,

On 11/6/24 7:43 AM, Peter Newman wrote:
> Performance-conscious users may use threads bound to CPUs within a
> specific monitoring domain to ensure that all bandwidth counters can be
> read efficiently. The hardware counters are only accessible to CPUs
> within the domain, so requests from CPUs outside the domain are
> forwarded to a kernel worker or IPI handler, incurring a substantial
> performance penalty on each read.  Recently, this penalty was observed
> to be paid by local reads as well.
> 
> To support blocking implementations of resctrl_arch_rmid_read(),
> mon_event_read() switched to smp_call_on_cpu() in most cases to read
> event counters using a kernel worker thread. Unlike
> smp_call_function_any(), which optimizes to a local function call when
> the calling CPU is in the target cpumask, smp_call_on_cpu() queues the
> work unconditionally.
> 
> Introduce resctrl_arch_event_read_blocks() to allow the implementation
> to indicate whether reading a particular event counter blocks. Use this
> to limit the usage of smp_call_on_cpu() to only the counters where it is
> actually needed. This reverts to the previous behavior of always using
> smp_call_function_any() for all x86 implementations.
> 
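To make sure I read the above correctly: the resulting selection in
mon_event_read() would essentially be the below (just a sketch, not the
literal diff, and the new hook's argument list is my guess):

	if (resctrl_arch_event_read_blocks(rr->evtid))
		/* Read may sleep: run it from a worker on the target CPU. */
		smp_call_on_cpu(cpumask_any(cpumask), smp_mon_event_count,
				rr, false);
	else
		/*
		 * Read never sleeps: this becomes a plain local call when
		 * the calling CPU is in @cpumask, so no IPI and no worker.
		 */
		smp_call_function_any(cpumask, mon_event_count, rr, 1);
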
> This is significant when supporting configurations such as a dual-socket
> AMD Zen2, with 32 L3 monitoring domains and 256 RMIDs. To read both MBM
> counters for all groups on all domains requires 16384 (32*256*2) counter
> reads. The resolution of global, per-group MBM data which can be
> provided is therefore sensitive to the cost of each counter read.
> Furthermore, redirecting this much work to IPI handlers or worker
> threads at a regular interval is disruptive to the present workload.
> 
> The test program fastcat, which was introduced in an earlier patch, was
> used to simulate the impact of this change on an optimized event
> counter-reading procedure. The goal is to maximize the frequency at
> which MBM counters can be dumped, so the benchmark determines the cost
> of an additional global MBM counter sample.
> 
> The total number of cycles needed to read all local and total MBM

This optimization proposal aims to reduce the number of cycles used by
the code that is eventually called after user space reads from a file. As
you already noted, the change causes a regression in that very metric in
one of the four scenarios tested.

As I understand it, this is a slow path and there are many things that can
impact the number of cycles such a query takes.

I wonder if we could instead shift focus from shaving cycles off a slow
path (after obtaining a mutex ... ) to understanding what the use case is,
so that resctrl could perhaps gain an interface that supports the use case
better overall?

> counters for a large number of monitoring groups was collected using the
> perf tool. The average over 100 iterations is given, with a 1-second
> sleep between iterations to better represent the intended use case. The
> test was run bound to the CPUs of a single MBM domain, once targeting
> counters in the local domain and again for counters in a remote domain.

This sounds as though user space is essentially duplicating what the
MBM overflow handler currently does, which is to run a worker in each domain
to collect MBM data every second from every RMID for both MBM events.

* What are the requirements of this use case?
* Is there some benefit to user space in reading individual counters?
* Would it perhaps be acceptable to provide user space with a cached snapshot
  of all the MBM counters in a single query?

If the architectural state maintained by the overflow handler were exposed,
user space could use a single read to obtain the values of an event for all
RMIDs in a domain without a single IPI. It may also be possible to present
data from all domains in that single read.

One complication I can think of is that data from the different domains may have
been collected up to a second apart. Is this something this use case can tolerate?


For example,
	# cat /sys/fs/resctrl/info/L3_MON/mbm_snapshot/mbm_total_bytes_00
	  <rdtgroup nameA> <MBM total count>
	  <rdtgroup nameB> <MBM total count>
	  ...

	# cat /sys/fs/resctrl/info/L3_MON/mbm_snapshot/mbm_total_bytes_01
	  <rdtgroup nameA> <MBM total count>
	  <rdtgroup nameB> <MBM total count>
	  ...
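
To make the idea concrete, a very rough sketch of what such a snapshot
file's show() handler could do. It never touches hardware and only reports
what the overflow handler last collected. resctrl_arch_cached_mbm_bytes()
is a made-up name for "return the cached count for this domain/RMID/event";
walking child monitor groups and error handling are omitted:

	static int mbm_snapshot_show(struct seq_file *s, void *v)
	{
		struct rdt_mon_domain *d = s->private;
		struct rdtgroup *rdtgrp;

		mutex_lock(&rdtgroup_mutex);
		/* Values already cached by the overflow worker: no IPI needed. */
		list_for_each_entry(rdtgrp, &rdt_all_groups, rdtgroup_list)
			seq_printf(s, "%s %llu\n", rdtgrp->kn->name,
				   resctrl_arch_cached_mbm_bytes(d, rdtgrp->mon.rmid,
								 QOS_L3_MBM_TOTAL_EVENT_ID));
		mutex_unlock(&rdtgroup_mutex);
		return 0;
	}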

If the use case cannot tolerate data up to a second old, then resctrl could
add new files in info/L3_MON that take the mutex once and trigger a *single*
IPI to a CPU from each domain to read all events sequentially (which is
essentially what the MBM overflow handler already does).

For example,
	# cat /sys/fs/resctrl/info/L3_MON/mbm_current/mbm_total_bytes_00
	  <rdtgroup nameA> <MBM total count>
	  <rdtgroup nameB> <MBM total count>
	  ...

	# cat /sys/fs/resctrl/info/L3_MON/mbm_current/mbm_total_bytes_01
	  <rdtgroup nameA> <MBM total count>
	  <rdtgroup nameB> <MBM total count>
	  ...
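
Again only a sketch: for this variant the callback below would run once per
domain on a CPU inside that domain and loop over all groups for the event,
much like the overflow worker does today (the callback, its argument struct
and mbm_read_one() are all made up):

	static void mbm_current_read_domain(void *info)
	{
		struct mbm_current_args *args = info;
		struct rdtgroup *rdtgrp;

		/* Runs on a CPU inside the domain, so the counters are local. */
		list_for_each_entry(rdtgrp, &rdt_all_groups, rdtgroup_list)
			args->val[rdtgrp->mon.rmid] =
				mbm_read_one(args->d, rdtgrp->mon.rmid, args->evtid);
	}

	/* Caller, holding rdtgroup_mutex, once per domain: */
	smp_call_function_any(&d->hdr.cpu_mask, mbm_current_read_domain, &args, 1);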

As I understand it, an interface like the above would be an improvement over
what user space can achieve by optimizing its queries to the existing
interface.

> 
> AMD EPYC 7B12 64-Core Processor (250 mon groups)
> 
> Local Domain:   5.72M -> 1.22M (-78.7%)
> Remote Domain:  5.89M -> 5.20M (-11.8%)
> 
> Intel(R) Xeon(R) Platinum 8173M CPU @ 2.00GHz (220 mon groups)
> 
> Local Domain:   3.37M -> 2.52M (-25.4%)
> Remote Domain:  5.16M -> 5.79M (+12.0%)

In my testing I see a worse regression when looking at kernel cycles only:
	remote domain before this patch: 17,793,035 cycles:k
	remote domain after this patch:  28,131,703 cycles:k

> 
> The slowdown for remote domain reads on Intel is worrying, but since
> this change is effectively a revert to old behavior on x86, this
> shouldn't be anything new.

This does not sound right. It sounds as though you are saying that
performance regressions from current behavior are acceptable as long as the
same poor performance was encountered at some point before the current
behavior?

It could perhaps be argued that performance regressions on this slow path
are acceptable ... but doing so would contradict the goal of this work.

> Also note that the Remote Domain results and the baseline Local Domain
> results only measure cycles in the test program. Because all counter
> reading work was carried out in kernel worker threads or IPI handlers,
> the total system cost of the operation is greater.

This sounds like speculation. This can be measured with a small
change to your command line, for example, by using:
	perf stat -e cycles:u -e cycles:k ...

Reinette
