Message-ID: <09da6e20b695086558d6cadefbc4830961e6e60b.1722981659.git.babu.moger@amd.com>
Date: Tue, 6 Aug 2024 17:00:53 -0500
From: Babu Moger <babu.moger@....com>
To: <corbet@....net>, <fenghua.yu@...el.com>, <reinette.chatre@...el.com>,
<tglx@...utronix.de>, <mingo@...hat.com>, <bp@...en8.de>,
<dave.hansen@...ux.intel.com>
CC: <x86@...nel.org>, <hpa@...or.com>, <paulmck@...nel.org>,
<rdunlap@...radead.org>, <tj@...nel.org>, <peterz@...radead.org>,
<yanjiewtw@...il.com>, <babu.moger@....com>, <kim.phillips@....com>,
<lukas.bulwahn@...il.com>, <seanjc@...gle.com>, <jmattson@...gle.com>,
<leitao@...ian.org>, <jpoimboe@...nel.org>, <rick.p.edgecombe@...el.com>,
<kirill.shutemov@...ux.intel.com>, <jithu.joseph@...el.com>,
<kai.huang@...el.com>, <kan.liang@...ux.intel.com>,
<daniel.sneddon@...ux.intel.com>, <pbonzini@...hat.com>,
<sandipan.das@....com>, <ilpo.jarvinen@...ux.intel.com>,
<peternewman@...gle.com>, <maciej.wieczor-retman@...el.com>,
<linux-doc@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<eranian@...gle.com>, <james.morse@....com>
Subject: [PATCH v6 16/22] x86/resctrl: Add the interface to unassign a MBM counter

The ABMC feature provides an option for the user to assign a hardware
counter to an RMID and monitor the bandwidth for as long as the counter
is assigned. The assigned RMID will be tracked by the hardware until the
user unassigns it manually.

Hardware provides only a limited number of counters. If the system runs
out of assignable counters, the kernel will report an error when a new
assignment is requested. Users need to unassign an already assigned
counter to make room for a new assignment.

Provide the interface to unassign a counter id from the group. Free the
counter only if it is not assigned in any of the domains.
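
To make the freeing rule concrete, below is a minimal, self-contained
userspace sketch. It is illustrative only: none of the names or types are
resctrl code, and it only mirrors the idea that a counter id is released
globally once no monitoring domain still has it set in its per-domain
bitmap.

/*
 * Illustrative userspace sketch of the unassign/free rule described
 * above. All names here are hypothetical stand-ins, not resctrl code.
 */
#include <stdbool.h>
#include <stdio.h>

#define NUM_DOMAINS	4
#define NUM_CNTRS	32

/* Per-domain bitmap stand-in: one bit per assignable counter. */
static unsigned long domain_cntr_map[NUM_DOMAINS];
/* Global allocator bitmap stand-in: which counter ids are in use at all. */
static unsigned long global_cntr_map;

static bool cntr_assigned_in_any_domain(unsigned int cntr_id)
{
	for (int d = 0; d < NUM_DOMAINS; d++)
		if (domain_cntr_map[d] & (1UL << cntr_id))
			return true;
	return false;
}

static void unassign_cntr_in_domain(int domain, unsigned int cntr_id)
{
	/* Clear the per-domain bit first... */
	domain_cntr_map[domain] &= ~(1UL << cntr_id);

	/* ...then release the id globally only if no domain still uses it. */
	if (!cntr_assigned_in_any_domain(cntr_id))
		global_cntr_map &= ~(1UL << cntr_id);
}

int main(void)
{
	unsigned int cntr_id = 3;

	/* Assign counter 3 in two domains. */
	global_cntr_map |= 1UL << cntr_id;
	domain_cntr_map[0] |= 1UL << cntr_id;
	domain_cntr_map[1] |= 1UL << cntr_id;

	unassign_cntr_in_domain(0, cntr_id);
	printf("after domain 0 unassign: globally in use = %d\n",
	       !!(global_cntr_map & (1UL << cntr_id)));	/* still 1 */

	unassign_cntr_in_domain(1, cntr_id);
	printf("after domain 1 unassign: globally in use = %d\n",
	       !!(global_cntr_map & (1UL << cntr_id)));	/* now 0 */

	return 0;
}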
The feature details are documented in the APM listed below [1].

[1] AMD64 Architecture Programmer's Manual Volume 2: System Programming,
    Publication # 24593, Revision 3.41, section 19.3.3.3 Assignable
    Bandwidth Monitoring (ABMC).

Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
Signed-off-by: Babu Moger <babu.moger@....com>
---
v6: Removed mbm_cntr_free from this patch.
    Added a counter check across all the domains and free the counter only
    if it is not assigned to any domain.
v5: A few name changes to match cntr_id.
    Changed the function name to rdtgroup_unassign_cntr.
    Added more comments to the commit log.
v4: Added domain-specific unassign support.
    A few name changes.
v3: Removed the static from the prototype of rdtgroup_unassign_abmc.
    The function is no longer called directly by the user. These changes
    are related to the global assignment interface.
v2: No changes.
---
arch/x86/kernel/cpu/resctrl/internal.h | 2 +
arch/x86/kernel/cpu/resctrl/rdtgroup.c | 52 ++++++++++++++++++++++++++
2 files changed, 54 insertions(+)
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 4e8109dee174..cc832955b787 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -689,6 +689,8 @@ int resctrl_arch_assign_cntr(struct rdt_mon_domain *d, enum resctrl_event_id evt
u32 rmid, u32 cntr_id, u32 closid, bool assign);
int rdtgroup_assign_cntr(struct rdtgroup *rdtgrp, enum resctrl_event_id evtid);
int rdtgroup_alloc_cntr(struct rdtgroup *rdtgrp, int index);
+int rdtgroup_unassign_cntr(struct rdtgroup *rdtgrp, enum resctrl_event_id evtid);
+void rdtgroup_free_cntr(struct rdt_resource *r, struct rdtgroup *rdtgrp, int index);
void rdt_staged_configs_clear(void);
bool closid_allocated(unsigned int closid);
int resctrl_find_cleanest_closid(void);
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 1ee91a7293a8..0c2215dbd497 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -1961,6 +1961,58 @@ int rdtgroup_assign_cntr(struct rdtgroup *rdtgrp, enum resctrl_event_id evtid)
return 0;
}
+static int rdtgroup_mbm_cntr_test(struct rdt_resource *r, u32 cntr_id)
+{
+ struct rdt_mon_domain *d;
+
+ list_for_each_entry(d, &r->mon_domains, hdr.list)
+ if (test_bit(cntr_id, d->mbm_cntr_map))
+ return 1;
+
+ return 0;
+}
+
+/* Free the counter id after the event is unassigned */
+void rdtgroup_free_cntr(struct rdt_resource *r, struct rdtgroup *rdtgrp,
+ int index)
+{
+ /* Free the counter if it is no longer assigned in any domain */
+ if (!rdtgroup_mbm_cntr_test(r, rdtgrp->mon.cntr_id[index])) {
+ mbm_cntr_free(rdtgrp->mon.cntr_id[index]);
+ rdtgrp->mon.cntr_id[index] = MON_CNTR_UNSET;
+ }
+}
+
+/*
+ * Unassign a hardware counter from the group and update all the domains
+ * in the group.
+ */
+int rdtgroup_unassign_cntr(struct rdtgroup *rdtgrp, enum resctrl_event_id evtid)
+{
+ struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
+ struct rdt_mon_domain *d;
+ int index;
+
+ index = mon_event_config_index_get(evtid);
+ if (index == INVALID_CONFIG_INDEX)
+ return -EINVAL;
+
+ if (rdtgrp->mon.cntr_id[index] != MON_CNTR_UNSET) {
+ list_for_each_entry(d, &r->mon_domains, hdr.list) {
+ resctrl_arch_assign_cntr(d, evtid, rdtgrp->mon.rmid,
+ rdtgrp->mon.cntr_id[index],
+ rdtgrp->closid, false);
+ clear_bit(rdtgrp->mon.cntr_id[index],
+ d->mbm_cntr_map);
+ }
+
+ /* Free the counter at group level */
+ rdtgroup_free_cntr(r, rdtgrp, index);
+ }
+
+ return 0;
+}
+
/* rdtgroup information files for one cache resource. */
static struct rftype res_common_files[] = {
{
--
2.34.1