Message-ID: <d240b37c-e00e-4b6f-a0c2-267041aaf9de@intel.com>
Date: Thu, 17 Jul 2025 20:51:11 -0700
From: Reinette Chatre <reinette.chatre@...el.com>
To: Babu Moger <babu.moger@....com>, <corbet@....net>, <tony.luck@...el.com>,
<james.morse@....com>, <tglx@...utronix.de>, <mingo@...hat.com>,
<bp@...en8.de>, <dave.hansen@...ux.intel.com>
CC: <Dave.Martin@....com>, <x86@...nel.org>, <hpa@...or.com>,
<akpm@...ux-foundation.org>, <paulmck@...nel.org>, <rostedt@...dmis.org>,
<Neeraj.Upadhyay@....com>, <david@...hat.com>, <arnd@...db.de>,
<fvdl@...gle.com>, <seanjc@...gle.com>, <jpoimboe@...nel.org>,
<pawan.kumar.gupta@...ux.intel.com>, <xin@...or.com>,
<manali.shukla@....com>, <tao1.su@...ux.intel.com>, <sohil.mehta@...el.com>,
<kai.huang@...el.com>, <xiaoyao.li@...el.com>, <peterz@...radead.org>,
<xin3.li@...el.com>, <kan.liang@...ux.intel.com>,
<mario.limonciello@....com>, <thomas.lendacky@....com>, <perry.yuan@....com>,
<gautham.shenoy@....com>, <chang.seok.bae@...el.com>,
<linux-doc@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<peternewman@...gle.com>, <eranian@...gle.com>
Subject: Re: [PATCH v15 21/34] x86/resctrl: Refactor resctrl_arch_rmid_read()

Hi Babu,

On 7/8/25 3:17 PM, Babu Moger wrote:
> resctrl_arch_rmid_read() modifies the value obtained from MSR_IA32_QM_CTR
> to account for overflow. This adjustment is common to both
The portion factored out does not just handle MBM overflow counts but also
handles counter scaling for *all* events, including cache occupancy.
> resctrl_arch_rmid_read() and resctrl_arch_cntr_read().
>
> To prepare for the implementation of resctrl_arch_cntr_read(), refactor
> this logic into a new function called mbm_corrected_val().
The helper thus cannot be named as if it were specific to MBM. A more
accurate name may be get_corrected_val().
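That is, keeping the body exactly as in this patch and only changing the
name, something like:

    static u64 get_corrected_val(struct rdt_resource *r, struct rdt_mon_domain *d,
                                 u32 rmid, enum resctrl_event_id eventid, u64 msr_val)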
>
> Signed-off-by: Babu Moger <babu.moger@....com>
> ---
> v15: New patch to add arch calls resctrl_arch_cntr_read() and resctrl_arch_reset_cntr()
> with mbm_event mode.
> https://lore.kernel.org/lkml/b4b14670-9cb0-4f65-abd5-39db996e8da9@intel.com/
> ---
> arch/x86/kernel/cpu/resctrl/monitor.c | 38 ++++++++++++++++-----------
> 1 file changed, 23 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index 017f3b8e28f9..a230d98e9d73 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -217,15 +217,33 @@ static u64 mbm_overflow_count(u64 prev_msr, u64 cur_msr, unsigned int width)
> return chunks >> shift;
> }
>
> +static u64 mbm_corrected_val(struct rdt_resource *r, struct rdt_mon_domain *d,
> + u32 rmid, enum resctrl_event_id eventid, u64 msr_val)
> +{
> + struct rdt_hw_mon_domain *hw_dom = resctrl_to_arch_mon_dom(d);
> + struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
> + struct arch_mbm_state *am;
> + u64 chunks;
> +
> + am = get_arch_mbm_state(hw_dom, rmid, eventid);
> + if (am) {
These are MBM counter adjustments.
> + am->chunks += mbm_overflow_count(am->prev_msr, msr_val,
> + hw_res->mbm_width);
The continuation line above can be aligned with the opening parenthesis.
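That is, something like:

    am->chunks += mbm_overflow_count(am->prev_msr, msr_val,
                                     hw_res->mbm_width);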
> + chunks = get_corrected_mbm_count(rmid, am->chunks);
> + am->prev_msr = msr_val;
> + } else {
Cache occupancy handled here.
> + chunks = msr_val;
> + }
> +
Both MBM and cache occupancy scaled below:
> + return chunks * hw_res->mon_scale;
> +}
> +
> int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_mon_domain *d,
> u32 unused, u32 rmid, enum resctrl_event_id eventid,
> u64 *val, void *ignored)
> {
> - struct rdt_hw_mon_domain *hw_dom = resctrl_to_arch_mon_dom(d);
> - struct rdt_hw_resource *hw_res = resctrl_to_arch_res(r);
> int cpu = cpumask_any(&d->hdr.cpu_mask);
> - struct arch_mbm_state *am;
> - u64 msr_val, chunks;
> + u64 msr_val;
> u32 prmid;
> int ret;
>
> @@ -236,17 +254,7 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_mon_domain *d,
> if (ret)
> return ret;
>
> - am = get_arch_mbm_state(hw_dom, rmid, eventid);
> - if (am) {
> - am->chunks += mbm_overflow_count(am->prev_msr, msr_val,
> - hw_res->mbm_width);
> - chunks = get_corrected_mbm_count(rmid, am->chunks);
> - am->prev_msr = msr_val;
> - } else {
> - chunks = msr_val;
> - }
> -
> - *val = chunks * hw_res->mon_scale;
> + *val = mbm_corrected_val(r, d, rmid, eventid, msr_val);
>
> return 0;
> }
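Not something this patch needs to change, but to illustrate why the
factoring helps: the upcoming resctrl_arch_cntr_read() could then reuse the
(renamed) helper directly. A rough sketch only, the signature and the
read_assigned_cntr() call below are placeholders to show the idea, not
existing interfaces:

int resctrl_arch_cntr_read(struct rdt_resource *r, struct rdt_mon_domain *d,
                           u32 rmid, int cntr_id, enum resctrl_event_id eventid,
                           u64 *val)
{
        u64 msr_val;
        int ret;

        /* Placeholder for however the assigned hardware counter ends up being read */
        ret = read_assigned_cntr(cntr_id, &msr_val);
        if (ret)
                return ret;

        /* Same overflow accounting, correction and scaling as resctrl_arch_rmid_read() */
        *val = get_corrected_val(r, d, rmid, eventid, msr_val);

        return 0;
}
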
Reinette