Message-ID: <38b9e6df-cccd-a745-da4a-1d1a0ec86ff3@intel.com>
Date:   Thu, 11 May 2023 14:37:08 -0700
From:   Reinette Chatre <reinette.chatre@...el.com>
To:     Peter Newman <peternewman@...gle.com>,
        Fenghua Yu <fenghua.yu@...el.com>
CC:     Babu Moger <babu.moger@....com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>, <x86@...nel.org>,
        "H. Peter Anvin" <hpa@...or.com>,
        Stephane Eranian <eranian@...gle.com>,
        James Morse <james.morse@....com>,
        <linux-kernel@...r.kernel.org>, <linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH v1 3/9] x86/resctrl: Add resctrl_mbm_flush_cpu() to
 collect CPUs' MBM events

Hi Peter,

On 4/21/2023 7:17 AM, Peter Newman wrote:
> AMD implementations so far are only guaranteed to provide MBM event
> counts for RMIDs which are currently assigned in CPUs' PQR_ASSOC MSRs.
> Hardware can reallocate the counter resources of all RMIDs which are
> not currently assigned to those RMIDs which are assigned, zeroing the
> event counts of the unassigned RMIDs.
> 
> In practice, this makes it impossible to simultaneously calculate the
> memory bandwidth speed of all RMIDs on a busy system where all RMIDs are
> in use. Over a multiple-second measurement window, the RMID would need
> to remain assigned, in each of the L3 cache domains where it is used,
> for the duration of the measurement; otherwise portions of the final
> count will be zero. In general, it is not possible to bound the
> number of RMIDs which will be assigned in an L3 domain over any interval
> of time.
> 
> To provide reliable MBM counts on such systems, introduce "soft" RMIDs:
> when enabled, each CPU is permanently assigned a hardware RMID whose
> event counts are flushed to the current soft RMID on context switches
> that change the soft RMID, as well as whenever userspace requests the
> current event count for a domain.
> 
> Implement resctrl_mbm_flush_cpu(), which collects a domain's current MBM
> event counts into its current software RMID. The delta for each CPU is
> determined by tracking the previous event counts in per-CPU data.  The
> software byte counts reside in the arch-independent mbm_state
> structures.

Could you elaborate on why the arch-independent mbm_state was chosen?
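
For example, if I remember the struct layout correctly, the arch-specific
per-domain state already carries the counter-tracking fields, so the soft
byte count could live next to them. A sketch only, untested:

	struct arch_mbm_state {
		u64		chunks;
		u64		prev_msr;
		/* hypothetical: bytes accumulated for the soft RMID */
		atomic64_t	soft_rmid_bytes;
	};

That would keep the new bookkeeping together with the other
counter-reading state, which is why I am curious about the motivation.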

> 
> Co-developed-by: Stephane Eranian <eranian@...gle.com>
> Signed-off-by: Stephane Eranian <eranian@...gle.com>
> Signed-off-by: Peter Newman <peternewman@...gle.com>
> ---
>  arch/x86/include/asm/resctrl.h         |  2 +
>  arch/x86/kernel/cpu/resctrl/internal.h | 10 ++--
>  arch/x86/kernel/cpu/resctrl/monitor.c  | 78 ++++++++++++++++++++++++++
>  3 files changed, 86 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
> index 255a78d9d906..e7acf118d770 100644
> --- a/arch/x86/include/asm/resctrl.h
> +++ b/arch/x86/include/asm/resctrl.h
> @@ -13,6 +13,7 @@
>   * @cur_closid:	The cached Class Of Service ID
>   * @default_rmid:	The user assigned Resource Monitoring ID
>   * @default_closid:	The user assigned cached Class Of Service ID
> + * @hw_rmid:	The permanently-assigned RMID when soft RMIDs are in use
>   *
>   * The upper 32 bits of MSR_IA32_PQR_ASSOC contain closid and the
>   * lower 10 bits rmid. The update to MSR_IA32_PQR_ASSOC always
> @@ -27,6 +28,7 @@ struct resctrl_pqr_state {
>  	u32			cur_closid;
>  	u32			default_rmid;
>  	u32			default_closid;
> +	u32			hw_rmid;
>  };
>  
>  DECLARE_PER_CPU(struct resctrl_pqr_state, pqr_state);
> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
> index 02a062558c67..256eee05d447 100644
> --- a/arch/x86/kernel/cpu/resctrl/internal.h
> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
> @@ -298,12 +298,14 @@ struct rftype {
>   * @prev_bw:	The most recent bandwidth in MBps
>   * @delta_bw:	Difference between the current and previous bandwidth
>   * @delta_comp:	Indicates whether to compute the delta_bw
> + * @soft_rmid_bytes: Recent bandwidth count in bytes when using soft RMIDs
>   */
>  struct mbm_state {
> -	u64	prev_bw_bytes;
> -	u32	prev_bw;
> -	u32	delta_bw;
> -	bool	delta_comp;
> +	u64		prev_bw_bytes;
> +	u32		prev_bw;
> +	u32		delta_bw;
> +	bool		delta_comp;
> +	atomic64_t	soft_rmid_bytes;
>  };
>  
>  /**
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index 2de8397f91cd..3671100d3cc7 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -404,6 +404,84 @@ static struct mbm_state *get_mbm_state(struct rdt_domain *d, u32 rmid,
>  	}
>  }
>  
> +struct mbm_soft_counter {
> +	u64	prev_bytes;
> +	bool	initialized;
> +};
> +
> +struct mbm_flush_state {
> +	struct mbm_soft_counter local;
> +	struct mbm_soft_counter total;
> +};
> +
> +DEFINE_PER_CPU(struct mbm_flush_state, flush_state);
> +

Why not use the existing MBM state? 
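
Since each CPU has a dedicated hw_rmid, something along these lines could
perhaps track the previous count in the per-domain state instead of in
new per-CPU structures. Sketch only, untested, just to illustrate the
question:

	/* track the hw_rmid's previous count in its own mbm_state */
	struct mbm_state *hw_m = get_mbm_state(d, hw_rmid, evtid);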

> +/*
> + * flushes the value of the cpu_rmid to the current soft rmid
> + */
> +static void __mbm_flush(int evtid, struct rdt_resource *r, struct rdt_domain *d)
> +{
> +	struct mbm_flush_state *state = this_cpu_ptr(&flush_state);
> +	u32 soft_rmid = this_cpu_ptr(&pqr_state)->cur_rmid;
> +	u32 hw_rmid = this_cpu_ptr(&pqr_state)->hw_rmid;
> +	struct mbm_soft_counter *counter;
> +	struct mbm_state *m;
> +	u64 val;
> +
> +	/* cache occupancy events are disabled in this mode */
> +	WARN_ON(!is_mbm_event(evtid));

If this is ever hit it would trigger a lot; perhaps WARN_ON_ONCE()?
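
Something like this would also avoid continuing with an unexpected event
id (sketch):

	/* cache occupancy events are disabled in this mode */
	if (WARN_ON_ONCE(!is_mbm_event(evtid)))
		return;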

> +
> +	if (evtid == QOS_L3_MBM_LOCAL_EVENT_ID) {
> +		counter = &state->local;
> +	} else {
> +		WARN_ON(evtid != QOS_L3_MBM_TOTAL_EVENT_ID);
> +		counter = &state->total;
> +	}
> +
> +	/*
> +	 * Propagate the value read from the hw_rmid assigned to the current CPU
> +	 * into the "soft" rmid associated with the current task or CPU.
> +	 */
> +	m = get_mbm_state(d, soft_rmid, evtid);
> +	if (!m)
> +		return;
> +
> +	if (resctrl_arch_rmid_read(r, d, hw_rmid, evtid, &val))
> +		return;
> +

This all seems unsafe to run without protection. The code relies on
the rdt_domain, but a CPU hotplug event could result in the domain
disappearing underneath this code. The accesses to the data structures
also appear unsafe to me. Note that resctrl_arch_rmid_read() updates
the architectural MBM state, and that same state can be updated
concurrently in other code paths without appropriate locking.

> +	/* Count bandwidth after the first successful counter read. */
> +	if (counter->initialized) {
> +		/* Assume that mbm_update() will prevent double-overflows. */
> +		if (val != counter->prev_bytes)
> +			atomic64_add(val - counter->prev_bytes,
> +				     &m->soft_rmid_bytes);
> +	} else {
> +		counter->initialized = true;
> +	}
> +
> +	counter->prev_bytes = val;

I notice a lot of similarities between the above and the software
controller; see mbm_bw_count().
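
If I remember its shape correctly, the delta tracking there is
essentially (simplified from memory, so treat as a sketch):

	cur_bytes = rr->val;
	bytes = cur_bytes - m->prev_bw_bytes;
	m->prev_bw_bytes = cur_bytes;

so it may be worth unifying the two rather than growing a second copy of
the pattern.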

> +}
> +
> +/*
> + * Called from context switch code __resctrl_sched_in() when the current soft
> + * RMID is changing or before reporting event counts to user space.
> + */
> +void resctrl_mbm_flush_cpu(void)
> +{
> +	struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
> +	int cpu = smp_processor_id();
> +	struct rdt_domain *d;
> +
> +	d = get_domain_from_cpu(cpu, r);
> +	if (!d)
> +		return;
> +
> +	if (is_mbm_local_enabled())
> +		__mbm_flush(QOS_L3_MBM_LOCAL_EVENT_ID, r, d);
> +	if (is_mbm_total_enabled())
> +		__mbm_flush(QOS_L3_MBM_TOTAL_EVENT_ID, r, d);
> +}

This (potentially) adds two MSR writes and two MSR reads to the context
switch path, and these MSRs could be quite slow since they were not
designed to be used there. Do you perhaps have data on how long these MSR
reads/writes take on these systems, to get an idea of the impact on
context switch latency? I think this data should feature prominently in
the changelog.
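
For a quick measurement, something along these lines in a test patch may
be enough. A sketch only, untested (eventid/rmid stand in for whatever
the surrounding code has at hand):

	u64 t0, t1, val;

	t0 = rdtsc_ordered();
	wrmsr(MSR_IA32_QM_EVTSEL, eventid, rmid);
	rdmsrl(MSR_IA32_QM_CTR, val);
	t1 = rdtsc_ordered();
	pr_info("QM_EVTSEL write + QM_CTR read: %llu cycles\n", t1 - t0);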

> +
>  static int __mon_event_count(u32 rmid, struct rmid_read *rr)
>  {
>  	struct mbm_state *m;


Reinette
