Message-ID: <aPFi5eQt1CmYXg_X@agluck-desk3>
Date: Thu, 16 Oct 2025 14:25:57 -0700
From: "Luck, Tony" <tony.luck@...el.com>
To: Jonathan Perry <yonch@...ch.com>
CC: Reinette Chatre <reinette.chatre@...el.com>,
<linux-kernel@...r.kernel.org>, <linux-kselftest@...r.kernel.org>,
<linux-doc@...r.kernel.org>, Jonathan Corbet <corbet@....net>, James Morse
<james.morse@....com>, Roman Storozhenko <romeusmeister@...il.com>
Subject: Re: [PATCH 0/8] resctrl: Add perf PMU for resctrl monitoring
On Thu, Oct 16, 2025 at 09:46:48AM -0500, Jonathan Perry wrote:
> Motivation: perf support enables measuring cache occupancy and memory
> bandwidth metrics on hrtimer (high resolution timer) interrupts via eBPF.
> Compared with polling from userspace, hrtimer-based reads remove
> scheduling jitter and context switch overhead. Further, PMU reads can be
> parallel, since the PMU read path need not lock resctrl's rdtgroup_mutex.
> Parallelization and reduced jitter enable more accurate snapshots of
> cache occupancy and memory bandwidth. [1] has more details on the
> motivation and design.
This parallel read without rdtgroup_mutex looks worrying.
The h/w counters have limited width (24 bits on older Intel CPUs,
32 bits on AMD and Intel >= Icelake). So resctrl takes the raw
value and, in get_corrected_val(), computes the increment since the
previous read of the MSR to determine how much to add to the
running per-RMID count of "chunks".
That's all inherently full of races. If perf does this at the
same time that resctrl does, then things will be corrupted
sooner or later.
You might fix it with a per-RMID spinlock in "struct arch_mbm_state"?
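Something like this, perhaps (a sketch only, with pthread primitives
standing in for kernel locking and illustrative field names rather
than the real struct arch_mbm_state layout):

```c
#include <pthread.h>
#include <stdint.h>

#define MBM_CNTR_MASK ((1ULL << 24) - 1)

/* Illustrative per-RMID state with a lock guarding the update. */
struct mbm_state_sketch {
	pthread_mutex_t lock;  /* stand-in for a kernel per-RMID spinlock */
	uint64_t prev_msr;
	uint64_t chunks;
};

/*
 * If both the resctrl file-read path and the PMU read path funnel
 * through a helper like this, the prev_msr/chunks pair is updated
 * atomically with respect to concurrent readers.
 */
static uint64_t mbm_accumulate_locked(struct mbm_state_sketch *s, uint64_t raw)
{
	uint64_t chunks;

	pthread_mutex_lock(&s->lock);
	s->chunks += (raw - s->prev_msr) & MBM_CNTR_MASK;
	s->prev_msr = raw;
	chunks = s->chunks;
	pthread_mutex_unlock(&s->lock);
	return chunks;
}
```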
-Tony