Message-ID: <c6a929dc-0362-499b-bdf8-7f0cb43e8402@intel.com>
Date: Wed, 22 Oct 2025 21:37:22 -0700
From: Reinette Chatre <reinette.chatre@...el.com>
To: Tony Luck <tony.luck@...el.com>, Fenghua Yu <fenghuay@...dia.com>, "Maciej
Wieczor-Retman" <maciej.wieczor-retman@...el.com>, Peter Newman
<peternewman@...gle.com>, James Morse <james.morse@....com>, Babu Moger
<babu.moger@....com>, Drew Fustini <dfustini@...libre.com>, Dave Martin
<Dave.Martin@....com>, Chen Yu <yu.c.chen@...el.com>
CC: <x86@...nel.org>, <linux-kernel@...r.kernel.org>,
<patches@...ts.linux.dev>
Subject: Re: [PATCH v12 18/31] fs/resctrl: Split L3 dependent parts out of
__mon_event_count()

Hi Tony,

On 10/13/25 3:33 PM, Tony Luck wrote:
> Almost all of the code in __mon_event_count() is specific to the RDT_RESOURCE_L3
> resource.
>
> Split it out into __l3_mon_event_count().
Missing a "why". We could perhaps word it similarly to an earlier commit message:
Carve out the L3 resource specific event reading code into a separate
helper to support reading event data from a new monitoring resource.
>
> Suggested-by: Reinette Chatre <reinette.chatre@...el.com>
> Signed-off-by: Tony Luck <tony.luck@...el.com>
> ---
> @@ -529,6 +499,44 @@ static int __mon_event_count(struct rdtgroup *rdtgrp, struct rmid_read *rr)
> return ret;
> }
>
> +/*
> + * Called from preemptible context via a direct call of mon_event_count() for
> + * events that can be read on any CPU.
> + * Called from preemptible but non-migratable process context (mon_event_count()
> + * via smp_call_on_cpu()) OR non-preemptible context (mon_event_count() via
> + * smp_call_function_any()) for events that need to be read on a specific CPU.
> + */
> +static bool cpu_on_correct_domain(struct rmid_read *rr)
> +{
> + int cpu;
> +
> + /* Any CPU is OK for this event */
> + if (rr->evt->any_cpu)
> + return true;
> +
> + cpu = smp_processor_id();
> +
> + /* Single domain. Must be on a CPU in that domain. */
> + if (rr->hdr)
> + return cpumask_test_cpu(cpu, &rr->hdr->cpu_mask);
> +
> + /* Summing domains that share a cache, must be on a CPU for that cache. */
> + return cpumask_test_cpu(cpu, &rr->ci->shared_cpu_map);
> +}
> +
> +static int __mon_event_count(struct rdtgroup *rdtgrp, struct rmid_read *rr)
> +{
> + if (!cpu_on_correct_domain(rr))
> + return -EINVAL;
It is a bit subtle that cpu_on_correct_domain() contains L3-specific code.
This may be OK if one instead thinks of it as a sanity check of struct rmid_read.
Reinette