Message-ID: <SJ1PR11MB6083CFFA8CBDD296E966EC85FC769@SJ1PR11MB6083.namprd11.prod.outlook.com>
Date: Tue, 9 May 2023 23:35:17 +0000
From: "Luck, Tony" <tony.luck@...el.com>
To: "Chatre, Reinette" <reinette.chatre@...el.com>,
"Yu, Fenghua" <fenghua.yu@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>,
Shaopeng Tan <tan.shaopeng@...itsu.com>,
Jamie Iles <quic_jiles@...cinc.com>,
James Morse <james.morse@....com>,
Babu Moger <babu.moger@....com>
CC: "x86@...nel.org" <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [RFC PATCH 0/7] Add driver registration i/f to resctrl
>> Intel has always had some model specific "RDT" features that we have
>> not attempted to push to upstream. Rationale is that architectural features
>> have lasting value across CPU generations. Once the code is upstream it
>> "just works" (TM) for all time. Cluttering up core code with features that are
>> only applicable to one or two CPU generations seemed like a recipe for
>> disaster from a long-term maintenance perspective.
>
> Could you please elaborate how this seems like a "recipe for disaster"? I
> can certainly see how removing a driver is easy when it is decided that
> something is "end of life". I rarely see "end of life" in practice
> though and viewing removal of obsolete code from a driver as a "disaster"
> is not clear to me. Re-factoring code occurs frequently.
I'm thinking of the amount of code under arch/x86/kernel/cpu/resctrl. In
v6.4-rc1 it looks like:
$ wc -l $resctrl/*.[ch]
996 arch/x86/kernel/cpu/resctrl/core.c
581 arch/x86/kernel/cpu/resctrl/ctrlmondata.c
560 arch/x86/kernel/cpu/resctrl/internal.h
845 arch/x86/kernel/cpu/resctrl/monitor.c
1600 arch/x86/kernel/cpu/resctrl/pseudo_lock.c
43 arch/x86/kernel/cpu/resctrl/pseudo_lock_event.h
3733 arch/x86/kernel/cpu/resctrl/rdtgroup.c
8358 total
Fenghua did a built-in implementation for one of the features that I'd
like to implement as a driver and the bottom line of "git diff --stat" for
his series of patches was:
9 files changed, 1300 insertions(+), 10 deletions(-)
Projecting forward a few CPU generations there may be 2-3 different
versions of that code. Plus all the other model specific features that
we'd like to support. The core resctrl architectural code is going to
disappear in the maze of "do this for CPU models X & Y, but do that
for CPU model Z".
>> I'm also guessing that with other architectures (AMD, ARM, RISC-V)
>> all building on the resctrl foundation, some of them may also have some
>> features that don't fit neatly into the core of resctrl.
>
> They absolutely have features that don't fit neatly into the core of
> resctrl. I am not able to tell whether this style of interface would
> solve this.
I'm hoping James and others copied on this thread can chime in to
answer that.
> The second motivation from your original email was that these new features
> have "input parameters that do not fit neatly into the existing schemata
> model". It is not obvious to me where you address that, but maybe it is
> related to "Q6".
That's really tied into the specific features that aren't up for public discussion
at this point.
> I agree that we should not export the mutex. You may be interested in the work
> that James is working on separating the locks.
> https://lore.kernel.org/lkml/20230320172620.18254-1-james.morse@arm.com/
>
> It would be great if you could join these discussions to see if there are
> some common requirements that can be solved together.
Thanks for the link. I'll go off and read that thread.
>> Q2) What about exclusive groups?
>> A2) I didn’t try to handle in this RFC. Additional bits will be needed.
>
> It seems like every time a driver needs "additional bits" it would impact
> all the other drivers.
It depends. If the new hook is just some additional callback function, then
existing drivers would have an implied ".newfunc = NULL," in the registration
structure, so they wouldn't need any changes.
The hooks I implemented in my RFC series are the union of the requirements
of each driver. But each driver just sets up the hooks that it needs. E.g. my
silly example driver only used the "add files to the ctrlmon directories" hook.
>> Q3) How to make visible to the driver other resctrl assumptions (e.g. default
>> group is CLOSID=0, RMID=0).
>
> Actually I think resctrl assumptions should be _invisible_ to drivers.
That's a worthy goal. I'm not sure whether it can always be met. E.g. for
the default CLOSID/RMID case the driver will want to initialize to some
useful default before receiving callbacks from the core resctrl code as
the user writes to files to set up a specific configuration.
>> Q4) Suppressing schemata resources from a driver surprised you.
>> A4) This is, as you guessed, about conflicting resources. There are other ways it
>> could be handled. E.g. use existing mount options to suppress the resource from
>> the schemata. To be safe that might also need some way to fail the load of a driver
>> that needs other access to a resource unless the correct mount options are in
>> force.
>
>From above it sounds like there may be scenarios where a driver layer would still
>be accompanied by core changes (like mount options added to the core that will
>allow/deny certain drivers). If there was no driver layer it could just be handled
>in a single spot.
I threw out mount options as an alternative to suppressing features. Having
thought about it, I'm not fond of it at all.
> The second part of my original question was "Where would it be decided whether
> the overriding driver should be loaded and why can that logic not be in
> enumeration within resctrl?" It is the user that needs to determine that there are
> conflicting resources?
The horror of model specific features is their appalling, or non-existent, enumeration.
In the dim and distant past of resctrl there was once a point where it did
string compares of model strings against a list of specific SKUs that supported
early RDT features.
>> Q6) Domain awareness of drivers.
>> A6) This is a challenge. Especially as the domain for a driver may not match up
>> with any existing resource scope (e.g. driver may be socket scoped, which may
>> not be the same as "L3 cache" scoped). After I posted this series I added
>> an entry in the resource table with socket scope to handle this. Dynamically adding
>> a new resource with a custom scope has challenges (because the domain lists
>> attached to that resource are maintained by the resctrl cpu hotplug callbacks as
>> CPUs come online and go offline).
>
>My comment was not about a need to make drivers "domain aware". My assumption was that
>drivers are not domain aware since I did not see any related information shared
>with the drivers and since the drivers override the schemata entries I thus assumed
>that the schemata entries use some driver specific scope.
>The challenge to add a resource with a custom scope seems like the biggest problem
>raised thus far. Is this perhaps what started the venture down this driver interface?
New domain scopes weren't a driving motivation, just something found along
the implementation journey. After playing with some ways to have each driver keep
track of scope, I found that I'd replicated some of the core domain tracking cpuhp
code and decided that just making the core keep track of a socket scoped resource,
with callbacks to the driver(s) for socket add/delete, was the cleanest way to go.
That might mean asking the core to track other scopes (like "tile") in the future if some
control/measure feature has that scope. Having created a "node" scope in my
patch series for SNC[1], it is then quite trivial to add additional resources
with any scope needed.
-Tony
[1] https://lore.kernel.org/all/20230126184157.27626-1-tony.luck@intel.com/ ... that
series is not forgotten. Just needs a slightly different approach which needs an internal
approval before I can post version 2 of the series.