lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <31993ea8-97e5-b8d5-b344-48db212bc9cf@intel.com>
Date:   Fri, 12 May 2023 08:26:44 -0700
From:   Reinette Chatre <reinette.chatre@...el.com>
To:     Peter Newman <peternewman@...gle.com>
CC:     Fenghua Yu <fenghua.yu@...el.com>, Babu Moger <babu.moger@....com>,
        "Thomas Gleixner" <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "Borislav Petkov" <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>, <x86@...nel.org>,
        "H. Peter Anvin" <hpa@...or.com>,
        Stephane Eranian <eranian@...gle.com>,
        James Morse <james.morse@....com>,
        <linux-kernel@...r.kernel.org>, <linux-kselftest@...r.kernel.org>
Subject: Re: [PATCH v1 3/9] x86/resctrl: Add resctrl_mbm_flush_cpu() to
 collect CPUs' MBM events

Hi Peter,

On 5/12/2023 6:25 AM, Peter Newman wrote:
> Hi Reinette,
> 
> On Thu, May 11, 2023 at 11:37 PM Reinette Chatre
> <reinette.chatre@...el.com> wrote:
>> On 4/21/2023 7:17 AM, Peter Newman wrote:
>>> Implement resctrl_mbm_flush_cpu(), which collects a domain's current MBM
>>> event counts into its current software RMID. The delta for each CPU is
>>> determined by tracking the previous event counts in per-CPU data.  The
>>> software byte counts reside in the arch-independent mbm_state
>>> structures.
>>
>> Could you elaborate why the arch-independent mbm_state was chosen?
> 
> It largely had to do with how many soft RMIDs to implement. For our
> own needs, we were mainly concerned with getting back to the number of
> monitoring groups the hardware claimed to support, so there wasn't
> much internal motivation to support an unbounded number of soft RMIDs.

Apologies for not being explicit, I was actually curious why the
arch-independent mbm_state, as opposed to the arch-dependent state, was
chosen.

I think the lines are getting a bit blurry here with the software RMID
feature added as a resctrl filesystem feature (and thus non architectural),
but it is specific to AMD architecture. 

> However, breaking this artificial connection between supported HW and
> SW RMIDs to support arbitrarily-many monitoring groups could make the
> implementation conceptually cleaner. If you agree,  I would be happy
> to give it a try in the next series.

I have not actually considered this. At first glance I think this would
add more tentacles into the core where currently the number of RMIDs
supported are queried from the device and supporting an arbitrary number
would impact that. At this time the RMID state is also pre-allocated
and thus not possible to support an "arbitrarily-many".

>>> +/*
>>> + * Called from context switch code __resctrl_sched_in() when the current soft
>>> + * RMID is changing or before reporting event counts to user space.
>>> + */
>>> +void resctrl_mbm_flush_cpu(void)
>>> +{
>>> +     struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl;
>>> +     int cpu = smp_processor_id();
>>> +     struct rdt_domain *d;
>>> +
>>> +     d = get_domain_from_cpu(cpu, r);
>>> +     if (!d)
>>> +             return;
>>> +
>>> +     if (is_mbm_local_enabled())
>>> +             __mbm_flush(QOS_L3_MBM_LOCAL_EVENT_ID, r, d);
>>> +     if (is_mbm_total_enabled())
>>> +             __mbm_flush(QOS_L3_MBM_TOTAL_EVENT_ID, r, d);
>>> +}
>>
>> This (potentially) adds two MSR writes and two MSR reads to what could possibly
>> be quite slow MSRs if it was not designed to be used in context switch. Do you
>> perhaps have data on how long these MSR reads/writes take on these systems to get
>> an idea about the impact on context switch? I think this data should feature
>> prominently in the changelog.
> 
> I can probably use ftrace to determine the cost of an __rmid_read()
> call on a few implementations.

On a lower level I think it may be interesting to measure more closely
just how long a wrmsr and rdmsr take on these registers. It may be interesting
if you, for example, use rdtsc_ordered() before and after these calls, and then
compare it to how long it takes to write the PQR register that has been
designed to be used in context switch.

> To understand the overall impact to context switch, I can put together
> a scenario where I can control whether the context switches being
> measured result in change of soft RMID to prevent the data from being
> diluted by non-flushing switches.

This sounds great. Thank you very much.

Reinette

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ