Message-ID: <94d075e0-6703-449f-9c0c-8e5973349dff@intel.com>
Date: Mon, 21 Jul 2025 16:35:15 -0700
From: Reinette Chatre <reinette.chatre@...el.com>
To: Babu Moger <babu.moger@....com>, <corbet@....net>, <tony.luck@...el.com>,
<Dave.Martin@....com>, <james.morse@....com>, <tglx@...utronix.de>,
<mingo@...hat.com>, <bp@...en8.de>, <dave.hansen@...ux.intel.com>
CC: <x86@...nel.org>, <hpa@...or.com>, <akpm@...ux-foundation.org>,
<paulmck@...nel.org>, <rostedt@...dmis.org>, <Neeraj.Upadhyay@....com>,
<david@...hat.com>, <arnd@...db.de>, <fvdl@...gle.com>, <seanjc@...gle.com>,
<thomas.lendacky@....com>, <pawan.kumar.gupta@...ux.intel.com>,
<yosry.ahmed@...ux.dev>, <sohil.mehta@...el.com>, <xin@...or.com>,
<kai.huang@...el.com>, <xiaoyao.li@...el.com>, <peterz@...radead.org>,
<me@...aill.net>, <mario.limonciello@....com>, <xin3.li@...el.com>,
<ebiggers@...gle.com>, <ak@...ux.intel.com>, <chang.seok.bae@...el.com>,
<andrew.cooper3@...rix.com>, <perry.yuan@....com>,
<linux-doc@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v7 05/10] fs/resctrl: Update bit_usage to reflect io_alloc
Hi Babu,
On 7/10/25 10:16 AM, Babu Moger wrote:
> When the io_alloc feature is enabled, a portion of the cache can be
> configured for shared use between hardware and software.
>
> Update the bit_usage representation to reflect the io_alloc configuration.
This patch is early in the series but also relies on capabilities added
later in the series. This cryptic changelog leaves a lot for the user
to decipher. For example:
- How is the "io_alloc CLOSID" chosen? This is mentioned in the cover letter but
not here. Doing so here may help explain the hardcoding of CDP_CODE done
in this patch (which seems unnecessary, more later, but a lot depends on
changes that follow this patch).
- No mention that when io_alloc is enabled then an associated CLOSID will
be allocated. This is done later in the series but assumed to be known here,
where rdt_bit_usage_show() implicitly assumes that when io_alloc is enabled
the "io_alloc CLOSID" is supported AND allocated (otherwise array overrun?)
>
> Signed-off-by: Babu Moger <babu.moger@....com>
> ---
> v7: New patch split from earlier patch #5.
> Added resctrl_io_alloc_closid() to return max COSID.
> ---
> Documentation/filesystems/resctrl.rst | 20 ++++++++++-----
> fs/resctrl/rdtgroup.c | 37 +++++++++++++++++++++++++--
> 2 files changed, 49 insertions(+), 8 deletions(-)
>
> diff --git a/Documentation/filesystems/resctrl.rst b/Documentation/filesystems/resctrl.rst
> index c7949dd44f2f..c3c412733632 100644
> --- a/Documentation/filesystems/resctrl.rst
> +++ b/Documentation/filesystems/resctrl.rst
> @@ -89,12 +89,20 @@ related to allocation:
> must be set when writing a mask.
>
> "shareable_bits":
> - Bitmask of shareable resource with other executing
> - entities (e.g. I/O). User can use this when
> - setting up exclusive cache partitions. Note that
> - some platforms support devices that have their
> - own settings for cache use which can over-ride
> - these bits.
> + Bitmask of shareable resource with other executing entities
> + (e.g. I/O). Applies to all instances of this resource. User
> + can use this when setting up exclusive cache partitions.
> + Note that some platforms support devices that have their
> + own settings for cache use which can over-ride these bits.
> +
> + When "io_alloc" feature is enabled, a portion of the cache
> + can be configured for shared use between hardware and software.
To help distinguish "io_alloc" from "shareable_bits" this can be:
When "io_alloc" is enabled, a portion of each cache instance can
be configured for shared use between hardware and software.
Please merge these two paragraphs.
> + "bit_usage" should be used to see which portions of each cache
> + instance is configured for hardware use via "io_alloc" feature
> + because every cache instance can have its "io_alloc" bitmask
> + configured independently.
append " via "io_alloc_cbm"."?
(but io_alloc_cbm does not exist at this point ... another motivation for this
patch to move later)
> +
> "bit_usage":
> Annotated capacity bitmasks showing how all
> instances of the resource are used. The legend is:
Please note that this "bit_usage" section contains several references to "shareable_bits"
that should be updated to also refer to "io_alloc_cbm".
With all these new terms introduced as common knowledge, it is starting to look
like this patch would be easier to consume later in the series.
> diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c
> index 77d08229d855..a2eea85aecc8 100644
> --- a/fs/resctrl/rdtgroup.c
> +++ b/fs/resctrl/rdtgroup.c
> @@ -1030,6 +1030,20 @@ static int rdt_shareable_bits_show(struct kernfs_open_file *of,
> return 0;
> }
>
> +/*
> + * resctrl_io_alloc_closid() - io_alloc feature routes I/O traffic using
> + * the highest available CLOSID. Retrieve the maximum CLOSID supported by the
> + * resource. Note that if Code Data Prioritization (CDP) is enabled, the number
> + * of available CLOSIDs is reduced by half.
> + */
> +static u32 resctrl_io_alloc_closid(struct rdt_resource *r)
Please move to ctrlmondata.c.
> +{
> + if (resctrl_arch_get_cdp_enabled(r->rid))
> + return resctrl_arch_get_num_closid(r) / 2 - 1;
> + else
> + return resctrl_arch_get_num_closid(r) - 1;
> +}
> +
> /*
> * rdt_bit_usage_show - Display current usage of resources
> *
> @@ -1063,15 +1077,17 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
>
> cpus_read_lock();
> mutex_lock(&rdtgroup_mutex);
> - hw_shareable = r->cache.shareable_bits;
> list_for_each_entry(dom, &r->ctrl_domains, hdr.list) {
> if (sep)
> seq_putc(seq, ';');
> + hw_shareable = r->cache.shareable_bits;
> sw_shareable = 0;
> exclusive = 0;
> seq_printf(seq, "%d=", dom->hdr.id);
> for (i = 0; i < closids_supported(); i++) {
> - if (!closid_allocated(i))
> + if (!closid_allocated(i) ||
> + (resctrl_arch_get_io_alloc_enabled(r) &&
> + i == resctrl_io_alloc_closid(r)))
> continue;
> ctrl_val = resctrl_arch_get_config(r, dom, i,
> s->conf_type);
> @@ -1099,6 +1115,23 @@ static int rdt_bit_usage_show(struct kernfs_open_file *of,
> break;
> }
> }
> +
> + /*
> + * When the "io_alloc" feature is enabled, a portion of the
> + * cache is configured for shared use between hardware and software.
> + */
> + if (resctrl_arch_get_io_alloc_enabled(r)) {
Here is an undocumented implicit assumption that when io_alloc is enabled the
"io_alloc CLOSID" is allocated. This is also outside the closids_supported()
loop, which adds the other implicit assumption that if io_alloc is enabled then
its CLOSID is supported by resctrl fs. None of these concepts have been introduced
so far, nor are they mentioned in the changelog.
It is not obvious here that an io_alloc CLOSID must be supported (this is just
something enforced by later patches) and also not obvious that an io_alloc CLOSID
must be allocated from the same "pool" as other CLOSIDs. Without a good changelog and
the context of later patches this is hard to understand.
These are motivations to move this patch later in the series, where the
changelog can just refer to these assumptions as fact, making it all easier to follow.
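Spelled out, the implicit assumptions amount to something like this (sketch
only, untested, the io_alloc_closid local is just a name for illustration):

	if (resctrl_arch_get_io_alloc_enabled(r)) {
		u32 io_alloc_closid = resctrl_io_alloc_closid(r);

		/* Assumed by this patch, only enforced by later patches: */
		WARN_ON_ONCE(io_alloc_closid >= closids_supported());
		WARN_ON_ONCE(!closid_allocated(io_alloc_closid));
		...
	}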
> + if (resctrl_arch_get_cdp_enabled(r->rid))
> + ctrl_val = resctrl_arch_get_config(r, dom,
> + resctrl_io_alloc_closid(r),
> + CDP_CODE);
> + else
> + ctrl_val = resctrl_arch_get_config(r, dom,
> + resctrl_io_alloc_closid(r),
> + CDP_NONE);
This does not look necessary to me. Why not just:
	if (resctrl_arch_get_io_alloc_enabled(r)) {
		ctrl_val = resctrl_arch_get_config(r, dom,
						   resctrl_io_alloc_closid(r),
						   s->conf_type);
		hw_shareable |= ctrl_val;
	}
Since the later patches keep the CDP_CODE and CDP_DATA CBMs in sync it does not matter
whether the io_alloc CBM is obtained from CDP_CODE or CDP_DATA, and just providing the
schema's type to resctrl_arch_get_config() will have it retrieve the right CBM, no?
This may also be easier to understand/claim if this patch is later in the series.
> + hw_shareable |= ctrl_val;
> + }
> +
> for (i = r->cache.cbm_len - 1; i >= 0; i--) {
> pseudo_locked = dom->plr ? dom->plr->cbm : 0;
> hwb = test_bit(i, &hw_shareable);
Reinette