Message-ID: <e2cf2e42-8a0f-47a4-8c05-8876272275fd@amd.com>
Date: Thu, 31 Jul 2025 13:51:52 -0500
From: "Moger, Babu" <babu.moger@....com>
To: Reinette Chatre <reinette.chatre@...el.com>, corbet@....net,
tony.luck@...el.com, Dave.Martin@....com, james.morse@....com,
tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com
Cc: x86@...nel.org, hpa@...or.com, akpm@...ux-foundation.org,
paulmck@...nel.org, rostedt@...dmis.org, Neeraj.Upadhyay@....com,
david@...hat.com, arnd@...db.de, fvdl@...gle.com, seanjc@...gle.com,
thomas.lendacky@....com, pawan.kumar.gupta@...ux.intel.com,
yosry.ahmed@...ux.dev, sohil.mehta@...el.com, xin@...or.com,
kai.huang@...el.com, xiaoyao.li@...el.com, peterz@...radead.org,
me@...aill.net, mario.limonciello@....com, xin3.li@...el.com,
ebiggers@...gle.com, ak@...ux.intel.com, chang.seok.bae@...el.com,
andrew.cooper3@...rix.com, perry.yuan@....com, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v7 06/10] fs/resctrl: Introduce interface to display
"io_alloc" support

Hi Reinette,

On 7/21/25 18:36, Reinette Chatre wrote:
> Hi Babu,
>
> On 7/10/25 10:16 AM, Babu Moger wrote:
>> "io_alloc" feature in resctrl allows direct insertion of data from I/O
>> devices into the cache.
>>
>> Introduce the 'io_alloc' resctrl file to indicate the support for the
>> feature.
>>
>> Signed-off-by: Babu Moger <babu.moger@....com>
>> ---
>
> ...
>
>> ---
>>  Documentation/filesystems/resctrl.rst | 25 +++++++++++++++++
>>  fs/resctrl/rdtgroup.c                 | 39 +++++++++++++++++++++++++++
>>  2 files changed, 64 insertions(+)
>>
>> diff --git a/Documentation/filesystems/resctrl.rst b/Documentation/filesystems/resctrl.rst
>> index c3c412733632..354e6a00fa45 100644
>> --- a/Documentation/filesystems/resctrl.rst
>> +++ b/Documentation/filesystems/resctrl.rst
>> @@ -143,6 +143,31 @@ related to allocation:
>> "1":
>> Non-contiguous 1s value in CBM is supported.
>>
>> +"io_alloc":
>> + "io_alloc" enables system software to configure the portion of
>> + the cache allocated for I/O traffic. File may only exist if the
>> + system supports this feature on some of its cache resources.
>> +
>> + "disabled":
>> + Portions of cache used for allocation of I/O traffic
>> + cannot be configured.
>> + "enabled":
>> + Portions of cache used for allocation of I/O traffic
>> + can be configured using "io_alloc_cbm".
>> + "not supported":
>> + Support not available on the system.
>
> "Support not available on the system." -> "Support not available for this resource."?
Sure.
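
For illustration, once the filesystem is mounted reading the file would
look something like the following (hypothetical output, assuming resctrl
is mounted at /sys/fs/resctrl and only L3 supports io_alloc):

  # cat /sys/fs/resctrl/info/L3/io_alloc
  disabled
  # cat /sys/fs/resctrl/info/L2/io_alloc
  not supported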
>
>> +
>> +	The underlying implementation may reduce resources available to
>> +	general (CPU) cache allocation. See architecture specific notes
>> +	below. Depending on usage requirements the feature can be enabled
>> +	or disabled:
>
> "disabled:" -> "disabled."?
Sure.
>
>> +
>> +	On AMD systems, the io_alloc feature is supported by the L3 Smart
>> +	Data Cache Injection Allocation Enforcement (SDCIAE). The CLOSID for
>> +	io_alloc is determined by the highest CLOSID supported by the resource.
>
> "is determined by the" -> "is the"?
>
Sure.
> To make clear connection with previous paragraph you can append something like:
> When io_alloc is enabled on an AMD system the highest CLOSID is dedicated to
> io_alloc and no longer available for general (CPU) cache allocation.
Sure.
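
(As a concrete example: on a resource with, say, 16 CLOSIDs, enabling
io_alloc would dedicate CLOSID 15 to I/O traffic, leaving CLOSIDs 0-14
for general (CPU) cache allocation. The count of 16 is hypothetical.)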
>
>> +	When CDP is enabled, io_alloc routes I/O traffic using the highest
>> +	CLOSID allocated for the instruction cache (L3CODE).
>
> To clear up what happens with L3DATA, what do you think of appending something like:
> , making this CLOSID no longer available for general (CPU) cache
> allocation for both the L3CODE and L3DATA resources.
>
Sure.
>> +
>>  Memory bandwidth(MB) subdirectory contains the following files
>>  with respect to allocation:
>>
>> diff --git a/fs/resctrl/rdtgroup.c b/fs/resctrl/rdtgroup.c
>> index a2eea85aecc8..d7c4417b4516 100644
>> --- a/fs/resctrl/rdtgroup.c
>> +++ b/fs/resctrl/rdtgroup.c
>> @@ -1836,6 +1836,28 @@ static ssize_t mbm_local_bytes_config_write(struct kernfs_open_file *of,
>>  	return ret ?: nbytes;
>>  }
>>
>> +static int resctrl_io_alloc_show(struct kernfs_open_file *of,
>
> Please move to ctrlmondata.c
Yes.
>
>
>> +				 struct seq_file *seq, void *v)
>> +{
>> +	struct resctrl_schema *s = rdt_kn_parent_priv(of->kn);
>> +	struct rdt_resource *r = s->res;
>> +
>> +	mutex_lock(&rdtgroup_mutex);
>> +
>> +	if (r->cache.io_alloc_capable) {
>> +		if (resctrl_arch_get_io_alloc_enabled(r))
>> +			seq_puts(seq, "enabled\n");
>> +		else
>> +			seq_puts(seq, "disabled\n");
>> +	} else {
>> +		seq_puts(seq, "not supported\n");
>> +	}
>> +
>> +	mutex_unlock(&rdtgroup_mutex);
>> +
>> +	return 0;
>> +}
>> +
>>  /* rdtgroup information files for one cache resource. */
>>  static struct rftype res_common_files[] = {
>>  	{
>> @@ -1926,6 +1948,12 @@ static struct rftype res_common_files[] = {
>>  		.kf_ops = &rdtgroup_kf_single_ops,
>>  		.seq_show = rdt_thread_throttle_mode_show,
>>  	},
>> +	{
>> +		.name = "io_alloc",
>> +		.mode = 0444,
>> +		.kf_ops = &rdtgroup_kf_single_ops,
>> +		.seq_show = resctrl_io_alloc_show,
>> +	},
>>  	{
>>  		.name = "max_threshold_occupancy",
>>  		.mode = 0644,
>> @@ -2095,6 +2123,15 @@ static void thread_throttle_mode_init(void)
>>  				 RFTYPE_CTRL_INFO | RFTYPE_RES_MB);
>>  }
>>
>> +static void io_alloc_init(void)
>
> This function's comment can benefit from a snippet that highlights that
> even though this operates on hardcoded L3 resource it results in this file
> being visible for *all* cache resources (eg. L2 cache also), whether they
> support io_alloc or not.
Added the comment.
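Roughly along these lines, exact wording still open to tweaks:

	/*
	 * io_alloc is only supported on the L3 resource, but initializing
	 * the fflags here makes the "io_alloc" file visible in the info
	 * directory of *all* cache resources (e.g. L2 also), whether they
	 * support io_alloc or not. Resources without io_alloc support
	 * report "not supported".
	 */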
>
>> +{
>> +	struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
>> +
>> +	if (r->cache.io_alloc_capable)
>> +		resctrl_file_fflags_init("io_alloc", RFTYPE_CTRL_INFO |
>> +					 RFTYPE_RES_CACHE);
>> +}
>> +
>> void resctrl_file_fflags_init(const char *config, unsigned long fflags)
>> {
>>  	struct rftype *rft;
>> @@ -4282,6 +4319,8 @@ int resctrl_init(void)
>>
>>  	thread_throttle_mode_init();
>>
>> +	io_alloc_init();
>> +
>>  	ret = resctrl_mon_resource_init();
>>  	if (ret)
>>  		return ret;
>
> Reinette
>
--
Thanks
Babu Moger