Message-ID: <242db225-8ddc-968e-a754-6aaefd1b7da9@intel.com>
Date: Thu, 11 May 2023 14:35:53 -0700
From: Reinette Chatre <reinette.chatre@...el.com>
To: Peter Newman <peternewman@...gle.com>,
Fenghua Yu <fenghua.yu@...el.com>
CC: Babu Moger <babu.moger@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, <x86@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
Stephane Eranian <eranian@...gle.com>,
James Morse <james.morse@....com>,
<linux-kernel@...r.kernel.org>, <linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH v1 2/9] x86/resctrl: Hold a spinlock in __rmid_read() on
AMD
Hi Peter,
On 4/21/2023 7:17 AM, Peter Newman wrote:
> From: Stephane Eranian <eranian@...gle.com>
>
> In AMD PQoS Versions 1.0 and 2.0, IA32_QM_EVTSEL MSR is shared by all
> processors in a QOS domain. So there's a chance it can read a different
> event when two processors are reading the counter concurrently. Add a
> spinlock to prevent this race.
This is unclear to me. As I understand it, the changelog is written as
though there is an existing race being fixed, but I believe that
rdtgroup_mutex currently protects against such races. I thus at first
took this to be a prep patch for the introduction of the new soft RMID
feature, yet the new spinlock is taken independently of the soft RMID
feature. I think the spinlock is unnecessary when the soft RMID feature
is disabled.
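If the lock is indeed only needed by the soft RMID feature, then perhaps
the static key could be enabled only when that feature is in use. Just a
sketch of what I mean ("soft_rmid_enabled" is a made-up name for whatever
condition the later patches in this series introduce):

	/* Only pay for the lock on systems that actually need it. */
	if (soft_rmid_enabled)
		static_branch_enable(&rmid_read_locked);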
> Co-developed-by: Peter Newman <peternewman@...gle.com>
> Signed-off-by: Peter Newman <peternewman@...gle.com>
> Signed-off-by: Stephane Eranian <eranian@...gle.com>
> ---
> arch/x86/kernel/cpu/resctrl/core.c | 41 ++++++++++++++++++++++++++
> arch/x86/kernel/cpu/resctrl/internal.h | 5 ++++
> arch/x86/kernel/cpu/resctrl/monitor.c | 14 +++++++--
> 3 files changed, 57 insertions(+), 3 deletions(-)
>
...
> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
> index 85ceaf9a31ac..02a062558c67 100644
> --- a/arch/x86/kernel/cpu/resctrl/internal.h
> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
> @@ -325,6 +325,7 @@ struct arch_mbm_state {
> * @ctrl_val: array of cache or mem ctrl values (indexed by CLOSID)
> * @arch_mbm_total: arch private state for MBM total bandwidth
> * @arch_mbm_local: arch private state for MBM local bandwidth
> + * @lock: serializes counter reads when QM_EVTSEL MSR is shared per-domain
> *
> * Members of this structure are accessed via helpers that provide abstraction.
> */
> @@ -333,6 +334,7 @@ struct rdt_hw_domain {
> u32 *ctrl_val;
> struct arch_mbm_state *arch_mbm_total;
> struct arch_mbm_state *arch_mbm_local;
> + raw_spinlock_t evtsel_lock;
> };
Please note the mismatch between the member name in the struct
("evtsel_lock") and the name used in its kerneldoc description ("lock").
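In other words, I would expect the kerneldoc line to read:

	 * @evtsel_lock:	serializes counter reads when QM_EVTSEL MSR is shared per-domain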
>
> static inline struct rdt_hw_domain *resctrl_to_arch_dom(struct rdt_domain *r)
> @@ -428,6 +430,9 @@ extern struct rdt_hw_resource rdt_resources_all[];
> extern struct rdtgroup rdtgroup_default;
> DECLARE_STATIC_KEY_FALSE(rdt_alloc_enable_key);
>
> +/* Serialization required in resctrl_arch_rmid_read(). */
> +DECLARE_STATIC_KEY_FALSE(rmid_read_locked);
> +
> extern struct dentry *debugfs_resctrl;
>
> enum resctrl_res_level {
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index 20952419be75..2de8397f91cd 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -146,10 +146,15 @@ static inline struct rmid_entry *__rmid_entry(u32 rmid)
> return entry;
> }
>
> -static int __rmid_read(u32 rmid, enum resctrl_event_id eventid, u64 *val)
> +static int __rmid_read(struct rdt_hw_domain *hw_dom, u32 rmid,
> + enum resctrl_event_id eventid, u64 *val)
> {
> + unsigned long flags;
> u64 msr_val;
>
> + if (static_branch_likely(&rmid_read_locked))
Why static_branch_likely() as opposed to static_branch_unlikely()? The
key is declared with DECLARE_STATIC_KEY_FALSE(), so the disabled case
looks like the expected default.
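If I understand the static key conventions correctly, a key declared
with DECLARE_STATIC_KEY_FALSE() is usually paired with
static_branch_unlikely(), which keeps the disabled case as the
straight-line path, for example:

	if (static_branch_unlikely(&rmid_read_locked))
		raw_spin_lock_irqsave(&hw_dom->evtsel_lock, flags);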
> + raw_spin_lock_irqsave(&hw_dom->evtsel_lock, flags);
> +
> /*
> * As per the SDM, when IA32_QM_EVTSEL.EvtID (bits 7:0) is configured
> * with a valid event code for supported resource type and the bits
> @@ -161,6 +166,9 @@ static int __rmid_read(u32 rmid, enum resctrl_event_id eventid, u64 *val)
> wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
> rdmsrl(MSR_IA32_QM_CTR, msr_val);
>
> + if (static_branch_likely(&rmid_read_locked))
> + raw_spin_unlock_irqrestore(&hw_dom->evtsel_lock, flags);
> +
If the first "if (static_branch_likely(&rmid_read_locked))" branch was
taken then the second one _has_ to be taken as well. It should not be
optional to release a lock once it has been acquired; if the static key
were flipped between the two tests, the lock could be taken but never
released. I think it would be more robust if a single test of the static
key decided whether the spinlock should be used.
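Something like this (untested) sketch of the locking in __rmid_read(),
based only on the hunks quoted above:

	static int __rmid_read(struct rdt_hw_domain *hw_dom, u32 rmid,
			       enum resctrl_event_id eventid, u64 *val)
	{
		/* Test the static key once so lock and unlock cannot disagree. */
		bool lock = static_branch_likely(&rmid_read_locked);
		unsigned long flags;
		u64 msr_val;

		if (lock)
			raw_spin_lock_irqsave(&hw_dom->evtsel_lock, flags);

		wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
		rdmsrl(MSR_IA32_QM_CTR, msr_val);

		if (lock)
			raw_spin_unlock_irqrestore(&hw_dom->evtsel_lock, flags);

		...
	}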
> if (msr_val & RMID_VAL_ERROR)
> return -EIO;
> if (msr_val & RMID_VAL_UNAVAIL)
> @@ -200,7 +208,7 @@ void resctrl_arch_reset_rmid(struct rdt_resource *r, struct rdt_domain *d,
> memset(am, 0, sizeof(*am));
>
> /* Record any initial, non-zero count value. */
> - __rmid_read(rmid, eventid, &am->prev_msr);
> + __rmid_read(hw_dom, rmid, eventid, &am->prev_msr);
> }
> }
>
> @@ -241,7 +249,7 @@ int resctrl_arch_rmid_read(struct rdt_resource *r, struct rdt_domain *d,
> if (!cpumask_test_cpu(smp_processor_id(), &d->cpu_mask))
> return -EINVAL;
>
> - ret = __rmid_read(rmid, eventid, &msr_val);
> + ret = __rmid_read(hw_dom, rmid, eventid, &msr_val);
> if (ret)
> return ret;
>
Reinette