Message-ID: <20230421141723.2405942-9-peternewman@google.com>
Date: Fri, 21 Apr 2023 16:17:22 +0200
From: Peter Newman <peternewman@...gle.com>
To: Fenghua Yu <fenghua.yu@...el.com>,
Reinette Chatre <reinette.chatre@...el.com>
Cc: Babu Moger <babu.moger@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>,
Stephane Eranian <eranian@...gle.com>,
James Morse <james.morse@....com>,
linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
Peter Newman <peternewman@...gle.com>
Subject: [PATCH v1 8/9] x86/resctrl: Use mbm_update() to push soft RMID counts

__mon_event_count() only reads the current software count and does not
cause the CPUs in the domain to flush their hardware counts into the
software counters. For mbm_update() to be effective in preventing
overflow in the hardware counters with soft RMIDs, it needs to flush
the domain CPUs so that all of the HW RMIDs are read.
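
Concretely, the flush is driven by an IPI to every CPU in the domain,
as in the mbm_handle_overflow() hunk below:

	static void mbm_flush_cpu_handler(void *p)
	{
		/* Runs on the target CPU: fold its HW RMID into the soft count. */
		resctrl_mbm_flush_cpu();
	}

	/* In the overflow handler, when RMIDs are soft: */
	on_each_cpu_mask(&d->cpu_mask, mbm_flush_cpu_handler, NULL, false);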

When RMIDs are soft, mbm_update() is intended to push bandwidth counts
into the software counters rather than pulling them from hardware when
userspace reads event counts, as pushing is far more efficient when the
number of HW RMIDs is fixed.

When RMIDs are soft, the overflow handler therefore only invokes
mbm_flush_cpu_handler() on each CPU in the domain rather than calling
mbm_update() to read every RMID.
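
For contrast, the soft-RMID read path only has to return the count that
has already been pushed. A minimal sketch of that shape (the lookup via
get_mbm_state() and the soft_rmid_bytes field are illustrative
assumptions, not necessarily the names used earlier in this series):

	static int __mon_event_count_soft_rmid(u32 rmid, struct rmid_read *rr)
	{
		struct mbm_state *m = get_mbm_state(rr->d, rmid, rr->evtid);

		if (!m)
			return -EINVAL;

		/* soft_rmid_bytes: assumed field holding the pushed count */
		rr->val += atomic64_read(&m->soft_rmid_bytes);
		return 0;
	}
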
Signed-off-by: Peter Newman <peternewman@...gle.com>
---
 arch/x86/kernel/cpu/resctrl/monitor.c | 28 +++++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
index 3d54a634471a..9575cb79b8ee 100644
--- a/arch/x86/kernel/cpu/resctrl/monitor.c
+++ b/arch/x86/kernel/cpu/resctrl/monitor.c
@@ -487,6 +487,11 @@ void resctrl_mbm_flush_cpu(void)
 		__mbm_flush(QOS_L3_MBM_TOTAL_EVENT_ID, r, d);
 }
 
+static void mbm_flush_cpu_handler(void *p)
+{
+	resctrl_mbm_flush_cpu();
+}
+
 static int __mon_event_count_soft_rmid(u32 rmid, struct rmid_read *rr)
 {
 	struct mbm_state *m;
@@ -806,12 +811,27 @@ void mbm_handle_overflow(struct work_struct *work)
 	r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
 	d = container_of(work, struct rdt_domain, mbm_over.work);
 
+	if (rdt_mon_soft_rmid) {
+		/*
+		 * HW RMIDs are permanently assigned to CPUs, so only a per-CPU
+		 * flush is needed.
+		 */
+		on_each_cpu_mask(&d->cpu_mask, mbm_flush_cpu_handler, NULL,
+				 false);
+	}
+
 	list_for_each_entry(prgrp, &rdt_all_groups, rdtgroup_list) {
-		mbm_update(r, d, prgrp->mon.rmid);
+		/*
+		 * mbm_update() on every RMID would result in excessive IPIs
+		 * when RMIDs are soft.
+		 */
+		if (!rdt_mon_soft_rmid) {
+			mbm_update(r, d, prgrp->mon.rmid);
 
-		head = &prgrp->mon.crdtgrp_list;
-		list_for_each_entry(crgrp, head, mon.crdtgrp_list)
-			mbm_update(r, d, crgrp->mon.rmid);
+			head = &prgrp->mon.crdtgrp_list;
+			list_for_each_entry(crgrp, head, mon.crdtgrp_list)
+				mbm_update(r, d, crgrp->mon.rmid);
+		}
 
 		if (is_mba_sc(NULL))
 			update_mba_bw(prgrp, d);
--
2.40.0.634.g4ca3ef3211-goog