Message-ID: <e4a77e0c-a31c-429b-9de9-3cadd704ca34@intel.com>
Date: Tue, 5 Dec 2023 13:57:35 -0800
From: Reinette Chatre <reinette.chatre@...el.com>
To: Peter Newman <peternewman@...gle.com>
CC: Fenghua Yu <fenghua.yu@...el.com>, Babu Moger <babu.moger@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, <x86@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
Stephane Eranian <eranian@...gle.com>,
James Morse <james.morse@....com>,
<linux-kernel@...r.kernel.org>, <linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH v1 3/9] x86/resctrl: Add resctrl_mbm_flush_cpu() to
collect CPUs' MBM events
Hi Peter,
On 12/1/2023 12:56 PM, Peter Newman wrote:
> Hi Reinette,
>
> On Tue, May 16, 2023 at 5:06 PM Reinette Chatre
> <reinette.chatre@...el.com> wrote:
>> On 5/15/2023 7:42 AM, Peter Newman wrote:
>>>
>>> I used a simple parent-child pipe loop benchmark, with the parent in
>>> one monitoring group and the child in another, to trigger 2M
>>> context switches on the same CPU, and compared the sample-based
>>> profiles on AMD and Intel implementations. I used perf diff to
>>> compare the samples between hard and soft RMID switches.
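
For concreteness, a minimal sketch of what such a pipe ping-pong
benchmark could look like (illustrative only -- this is not the actual
benchmark; the iteration count and structure are assumed from the
description above):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
	int p2c[2], c2p[2];	/* parent->child and child->parent pipes */
	char buf = 0;
	long i, iters = 1000000;	/* each round trip is two switches */

	if (pipe(p2c) || pipe(c2p)) {
		perror("pipe");
		return 1;
	}

	if (fork() == 0) {
		/* Child: placed in its own monitoring group, echoes bytes. */
		for (i = 0; i < iters; i++) {
			if (read(p2c[0], &buf, 1) != 1)
				break;
			write(c2p[1], &buf, 1);
		}
		_exit(0);
	}

	/*
	 * Parent: in a different monitoring group, pinned to the same
	 * CPU as the child so every write/read pair forces a context
	 * switch between the two groups.
	 */
	for (i = 0; i < iters; i++) {
		write(p2c[1], &buf, 1);
		read(c2p[0], &buf, 1);
	}
	wait(NULL);
	return 0;
}

With both processes pinned to the same CPU (e.g. via taskset) and their
PIDs written to the "tasks" files of two different monitoring groups,
every round trip forces two context switches between the groups.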
>>>
>>> Intel(R) Xeon(R) Platinum 8173M CPU @ 2.00GHz:
>>>
>>> +44.80% [kernel.kallsyms] [k] __rmid_read
>>> 10.43% -9.52% [kernel.kallsyms] [k] __switch_to
>>>
>>> AMD EPYC 7B12 64-Core Processor:
>>>
>>> +28.27% [kernel.kallsyms] [k] __rmid_read
>>> 13.45% -13.44% [kernel.kallsyms] [k] __switch_to
>>>
>>> Note that a soft RMID switch that doesn't change the CLOSID skips the
>>> PQR_ASSOC write completely, so from this data I can roughly say that
>>> __rmid_read() takes a little over 2x as long as a PQR_ASSOC write that
>>> changes the current RMID on the AMD implementation, and about 4.5x as
>>> long on Intel.
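
For reference, a rough sketch of what such a soft RMID switch could
look like; this is simplified and uses hypothetical helper names
rather than the actual ones from this series:

/*
 * Simplified sketch of a soft RMID switch. The CPU keeps its
 * permanently-assigned hard RMID; the outgoing group's counts are
 * flushed into its soft RMID, and MSR_IA32_PQR_ASSOC is only
 * written when the CLOSID actually changes.
 */
static void soft_rmid_sched_in(struct task_struct *next)
{
	u32 next_closid = task_closid(next);	/* hypothetical helper */

	/* Accumulate hardware MBM counts into the outgoing soft RMID. */
	resctrl_mbm_flush_cpu();

	if (next_closid != this_cpu_read(pqr_state.cur_closid)) {
		wrmsr(MSR_IA32_PQR_ASSOC, this_cpu_hard_rmid(), next_closid);
		this_cpu_write(pqr_state.cur_closid, next_closid);
	}
}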
>>>
>>> Let me know if this clarifies the cost enough or if you'd like to also
>>> see instrumented measurements on the individual WRMSR/RDMSR
>>> instructions.
>>
>> I can see from the data the portion of total time spent in __rmid_read(),
>> but it is not clear to me what the impact on a context switch is. Is it
>> possible to say from this data that this solution makes a context switch
>> x% slower?
>>
>> I think it may be optimistic to view this as a replacement of a PQR write.
>> As you point out, that requires that a CPU switches between tasks with the
>> same CLOSID. You demonstrate that resctrl already contributes a significant
>> delay to __switch_to. This work will increase that delay much more, so the
>> impact needs to be made clear and motivated as acceptable.
>
> We were operating under the assumption that if the overhead wasn't
> acceptable, we would have heard complaints about it by now. We ultimately
> learned, however, that this feature wasn't deployed on AMD hardware as
> much as we had originally thought, and that the overhead does need to
> be addressed.
>
> I am interested in your opinion on two options I'm exploring to
> mitigate the overhead, both of which depend on an API like the one
> Babu recently proposed for the AMD ABMC feature [1], where a new file
> interface will allow the user to indicate which mon_groups are
> actively being measured. I will refer to this as "assigned" for now,
> as that's the current proposal.
>
> The first is likely the simpler approach: only read MBM event counters
> that have been marked as "assigned" in the filesystem, to avoid paying
> the context switch cost for tasks in groups which are not actively
> being measured. In our use case, we calculate memory bandwidth for
> every group every few minutes by reading the counters twice, 5 seconds
> apart. We would only need the counters to be read during this 5-second
> window.
I assume that tasks within a monitoring group can be scheduled on any
CPU, and from the cover letter of this work I understand that only an
RMID assigned to a processor can be guaranteed to be tracked by hardware.
Are you proposing for this option to keep the "soft RMID" approach,
with CPUs permanently assigned a "hard RMID", but only update the counts
for a "soft RMID" that is "assigned"? I think that means that the context
switch cost for the monitored group would increase even more than with the
implementation in this series, since the counters need to be read on
context switch in as well as on context switch out.
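
To make that concrete, option one might reduce to a check like the
following in the context switch path (a rough sketch; the "assigned"
flag and helper names are assumptions based on the proposed interface):

/*
 * Sketch of option one: skip the expensive counter read for groups
 * that are not "assigned" via the proposed filesystem interface.
 * All names here are illustrative.
 */
static void resctrl_mbm_flush_cpu_assigned(void)
{
	struct rdtgroup *rdtgrp = current_mon_group();	/* hypothetical */

	if (!READ_ONCE(rdtgrp->mon.assigned))
		return;	/* not measured: skip __rmid_read() entirely */

	/* Flush hardware counts into this group's soft RMID. */
	resctrl_mbm_flush_cpu();
}

And if the counts must be accurate at both ends of the 5-second window,
this check would have to run on both switch-in and switch-out, which is
the added cost described above.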
If I understand correctly, then only one monitoring group can be measured
at a time. If such a measurement takes 5 seconds, then theoretically 12
groups can be measured in one minute. It may be possible to create many
more monitoring groups than this. Would it be possible to meet the
monitoring goals in your environment?
>
> The second involves avoiding the situation where a hardware counter
> could be deallocated: determine the number of simultaneous RMIDs
> supported and reduce the effective number of RMIDs available to that
> number. Use the default RMID (0) for all "unassigned" monitoring
hmmm ... so on the one hand there is "only the RMID within the PQR
register can be guaranteed to be tracked by hardware" and on the
other hand there is "A given implementation may have insufficient
hardware to simultaneously track the bandwidth for all RMID values
that the hardware supports."
From the above there seems to be something in the middle, where
some subset of the RMID values supported by hardware can be used
to simultaneously track bandwidth. How can it be determined
what this number of RMID values is?
> groups and report "Unavailable" on all counter reads (and address the
> default monitoring group's counts being unreliable). When assigned,
> attempt to allocate one of the remaining, usable RMIDs to that group.
> It would only be possible to assign all event counters (local, total,
> occupancy) at the same time. Using this approach, we would no longer
> be able to measure all groups at the same time, but this is something
> we would already be accepting when using the AMD ABMC feature.
It may be possible to turn this into a "fake"/"software" ABMC feature,
which I expect needs to be renamed to move it away from a hardware
specific feature to something that better reflects how the user
interacts with the system and how the system responds.
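
As a rough sketch of what such a "software ABMC" assignment could look
like at the filesystem layer (every name here is illustrative, and it
assumes the number of simultaneously trackable RMIDs can be discovered
somehow):

static unsigned int nr_trackable_rmids;	/* discovered per platform */
static unsigned long *rmid_in_use;	/* bitmap; bit 0 reserved */

static int assign_mon_group(struct rdtgroup *rdtgrp)
{
	unsigned int rmid;

	/* RMID 0 stays reserved for all "unassigned" groups. */
	rmid = find_next_zero_bit(rmid_in_use, nr_trackable_rmids, 1);
	if (rmid >= nr_trackable_rmids)
		return -ENOSPC;	/* every trackable RMID is taken */

	__set_bit(rmid, rmid_in_use);
	rdtgrp->mon.rmid = rmid;	/* counters now tracked by hardware */
	return 0;
}

static void unassign_mon_group(struct rdtgroup *rdtgrp)
{
	__clear_bit(rdtgrp->mon.rmid, rmid_in_use);
	rdtgrp->mon.rmid = 0;	/* reads report "Unavailable" again */
}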
>
> While the second feature is a lot more disruptive at the filesystem
> layer, it does eliminate the added context switch overhead. Also, it
Which changes to the filesystem layer are you anticipating?
> may be helpful in the long run for the filesystem code to start taking
> a more abstract view of hardware monitoring resources, given that few
> implementations can afford to assign hardware to all monitoring IDs
> all the time. In both cases, the meaning of "assigned" could vary
> greatly, even among AMD implementations.
>
> Thanks!
> -Peter
>
> [1] https://lore.kernel.org/lkml/20231201005720.235639-1-babu.moger@amd.com/
Reinette