Message-ID: <ae3cbde7-1137-1b33-dbb8-ae8892abddc7@intel.com>
Date:   Tue, 9 May 2023 17:07:05 -0700
From:   Reinette Chatre <reinette.chatre@...el.com>
To:     "Luck, Tony" <tony.luck@...el.com>,
        "Yu, Fenghua" <fenghua.yu@...el.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        "H. Peter Anvin" <hpa@...or.com>,
        Shaopeng Tan <tan.shaopeng@...itsu.com>,
        Jamie Iles <quic_jiles@...cinc.com>,
        James Morse <james.morse@....com>,
        "Babu Moger" <babu.moger@....com>
CC:     "x86@...nel.org" <x86@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 0/7] Add driver registration i/f to resctrl

Hi Tony,

On 5/9/2023 4:35 PM, Luck, Tony wrote:
>>> Intel has always had some model specific "RDT" features that we have
>>> not attempted to push to upstream. Rationale is that architectural features
>>> have lasting value across CPU generations. Once the code is upstream it
>>> "just works" (TM) for all time. Cluttering up core code with features that are
>>> only applicable to one or two CPU generations seemed like a recipe for
>>> disaster from a long-term maintenance perspective.
>>
>> Could you please elaborate on how this seems like a "recipe for disaster"? I
>> can certainly see how removing a driver is easy when it is decided that
>> something is "end of life". I rarely see "end of life" in practice, though,
>> and it is not clear to me why removing obsolete code from a driver should be
>> viewed as a "disaster". Re-factoring code occurs frequently.
> 
> I'm thinking of the amount of code under arch/x86/kernel/cpu/resctrl. In
> v6.4-rc1 it looks like:
> 
> $ wc -l $resctrl/*.[ch]
>    996 arch/x86/kernel/cpu/resctrl/core.c
>    581 arch/x86/kernel/cpu/resctrl/ctrlmondata.c
>    560 arch/x86/kernel/cpu/resctrl/internal.h
>    845 arch/x86/kernel/cpu/resctrl/monitor.c
>   1600 arch/x86/kernel/cpu/resctrl/pseudo_lock.c
>     43 arch/x86/kernel/cpu/resctrl/pseudo_lock_event.h
>   3733 arch/x86/kernel/cpu/resctrl/rdtgroup.c
>   8358 total
> 
> Fenghua did a built-in implementation for one of the features that I'd
> like to implement as a driver and the bottom line of "git diff --stat" for
> his series of patches was:
> 
> 9 files changed, 1300 insertions(+), 10 deletions(-)
> 
> Projecting forward a few CPU generations there may be 2-3 different
> versions of that code. Plus all the other model specific features that
> we'd like to support. The core resctrl architectural code is going to
> disappear in the maze of "do this for CPU models X & Y, but do that
> for CPU model Z". 

It is hard to tell from just a diffstat how this implementation impacts
the core. A similar diffstat for the driver implementation may
help. "1300 insertions(+), 10 deletions(-)" does not seem like a lot of 
core refactoring.

>>> Q2) What about exclusive groups?
>>> A2) I didn't try to handle this in this RFC. Additional bits will be needed.
>>
>> It seems like every time a driver needs "additional bits" it would impact
>> all the other drivers.
> 
> It depends. If the new hook is just some additional callback function, then
> existing drivers would have an implied ".newfunc = NULL," in the registration
> structure, so they wouldn't need any changes.
> 
> The hooks I implemented in my RFC series are the union of the requirements
> of each driver. But each driver just sets up the hooks that it needs. E.g. my
> silly example driver only used the "add files to the ctrlmon directories" hook.

My point is that the hooks themselves appear to be made safe by providing only
limited information (no pointers back to structures maintained by resctrl), and
thus when a new driver has different requirements it would have a broad impact.
The example driver that you provided, if I understood correctly, already points
this out: there may be a missing parameter for the group type
(control vs monitor).
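
To make the hook discussion concrete, below is a minimal sketch of what such a
registration structure could look like. All of the names (struct
resctrl_mod_driver, resctrl_driver_register(), the hook names) are invented for
illustration and are not taken from the RFC series; the point is only that a
driver fills in the hooks it needs and leaves the rest NULL:

#include <linux/kernfs.h>
#include <linux/list.h>
#include <linux/types.h>

/* Hypothetical sketch only -- none of these names come from the RFC series. */
struct resctrl_mod_driver {
	struct list_head	list;
	const char		*name;
	/* add extra files to each ctrl_mon group directory */
	int (*add_ctrlmon_files)(struct kernfs_node *parent_kn);
	/* parse/apply a driver specific schemata line; NULL when unused */
	int (*parse_schemata)(char *line, u32 closid);
};

/* Stub hook for the example below. */
static int example_add_files(struct kernfs_node *parent_kn)
{
	return 0;
}

/*
 * A driver sets only the hooks it needs; every member it does not mention
 * is an implied NULL, so adding a new hook later does not force existing
 * drivers to change.
 */
static struct resctrl_mod_driver example_driver = {
	.name			= "example",
	.add_ctrlmon_files	= example_add_files,
};

/* Hypothetical registration entry point. */
int resctrl_driver_register(struct resctrl_mod_driver *drv);
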

>> The second part of my original question was "Where would it be decided whether
>> the overriding driver should be loaded and why can that logic not be in
>> enumeration within resctrl?" Is it the user that needs to determine that there are
>> conflicting resources?
> 
> The horror of model specific features is the appalling, or non-existent, enumeration.
> In the dim and distant past of resctrl there was once a point where it did
> string compares of model strings against a list of specific SKUs that supported
> early RDT features.

My question was trying to understand where this logic is moved to (re. "Where would
it be decided whether the overriding driver should be loaded"). The feature will
remain model specific whether it is implemented in the core or a driver, so these
checks will need to be done somewhere, no? 
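
For reference, such a model specific check today would more likely use a
family/model match table than string compares. A minimal sketch, where the
feature name and table contents are made up purely for illustration (only
x86_match_cpu() and the match macro are real kernel interfaces):

#include <asm/cpu_device_id.h>
#include <asm/intel-family.h>

/* Hypothetical model list for a model specific feature; whether this table
 * lives in core resctrl or in a driver is exactly the question above. */
static const struct x86_cpu_id hypothetical_feature_ids[] = {
	X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, NULL),
	{}
};

static bool hypothetical_feature_supported(void)
{
	return x86_match_cpu(hypothetical_feature_ids) != NULL;
}
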

> 
>>> Q6) Domain awareness of drivers.
>>> A6) This is a challenge. Especially as the domain for a driver may not match up
>>> with any existing resource scope (e.g. driver may be socket scoped, which may
>>> not be the same as "L3 cache" scoped). After I posted this series I added
>>> an entry in the resource table with socket scope to handle this. Dynamically adding
>>> a new resource with a custom scope has challenges (because the domain lists
>>> attached to that resource are maintained by the resctrl cpu hot plug callbacks as
>>> CPUs come online and go offline).
>>
>> My comment was not about a need to make drivers "domain aware". My assumption was that
>> drivers are not domain aware, since I did not see any related information shared
>> with the drivers, and since the drivers override the schemata entries I assumed
>> that the schemata entries use some driver specific scope.
>> The challenge to add a resource with a custom scope seems like the biggest problem
>> raised thus far. Is this perhaps what started the venture down this driver interface?
> 
> New domain scopes weren't a driving motivation, just something that was found along
> the implementation journey. After playing with some ways to have each driver keep
> track of scope I found that I'd replicated some of the core domain tracking cpuhp
> code and decided that just making the core keep track of a socket scoped resource
> with callbacks to the driver(s) for socket add/delete was the cleanest way to go.
> 
> That might mean asking the core to track other scopes (like "tile") in future if some
> control/measure feature has that scope. Having created a "node" scope in my
> patch series for SNC[1], it then is quite trivial to add additional resources
> with any scope needed.

I see. I had thought that tracking scope was the hardest problem that needed
solving in the driver.
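
As an illustration of the arrangement described above (the core tracks a socket
scoped resource via its existing cpuhp callbacks and only notifies drivers as
sockets come and go), a rough sketch follows. The list, helper and hook names
are invented for this sketch and are not taken from the posted series:

#include <linux/list.h>
#include <linux/topology.h>

/* Hypothetical driver hooks for socket-scoped domains. */
struct resctrl_mod_driver {
	struct list_head list;
	int  (*domain_add)(int socket_id);	/* first CPU of socket online */
	void (*domain_remove)(int socket_id);	/* last CPU of socket offline */
};

static LIST_HEAD(resctrl_driver_list);		/* registered drivers (sketch) */

/*
 * Called from resctrl's existing cpuhp online path (sketch): the core does
 * the domain bookkeeping, drivers are only notified.
 */
static void socket_domain_online(unsigned int cpu)
{
	int id = topology_physical_package_id(cpu);
	struct resctrl_mod_driver *drv;

	if (!first_cpu_of_socket(id))		/* hypothetical helper */
		return;

	list_for_each_entry(drv, &resctrl_driver_list, list)
		if (drv->domain_add)
			drv->domain_add(id);
}
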

Reinette
