Message-ID: <5e6a2e0a-6f31-c9b0-5eec-346fd072d286@intel.com>
Date: Fri, 31 Mar 2023 16:25:40 -0700
From: Reinette Chatre <reinette.chatre@...el.com>
To: James Morse <james.morse@....com>, <x86@...nel.org>,
<linux-kernel@...r.kernel.org>
CC: Fenghua Yu <fenghua.yu@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
H Peter Anvin <hpa@...or.com>,
Babu Moger <Babu.Moger@....com>,
<shameerali.kolothum.thodi@...wei.com>,
D Scott Phillips OS <scott@...amperecomputing.com>,
<carl@...amperecomputing.com>, <lcherian@...vell.com>,
<bobo.shaobowang@...wei.com>, <tan.shaopeng@...itsu.com>,
<xingxin.hx@...nanolis.org>, <baolin.wang@...ux.alibaba.com>,
Jamie Iles <quic_jiles@...cinc.com>,
Xin Hao <xhao@...ux.alibaba.com>, <peternewman@...gle.com>
Subject: Re: [PATCH v3 09/19] x86/resctrl: Queue mon_event_read() instead of
sending an IPI
Hi James,
On 3/20/2023 10:26 AM, James Morse wrote:
> x86 is blessed with an abundance of monitors, one per RMID, that can be
> read from any CPU in the domain. MPAM's monitors reside in the MMIO MSC;
> the number implemented is up to the manufacturer. This means that when
> there are fewer monitors than needed, they need to be allocated and freed.
>
> Worse, the domain may be broken up into slices, and the MMIO accesses
> for each slice may need to be performed from different CPUs.
>
> These two details mean MPAM's monitor code needs to be able to sleep, and
> to IPI another CPU in the domain to read from a resource that has been
> sliced.
>
> mon_event_read() already invokes mon_event_count() via IPI, which means
> sleeping isn't possible. On systems using nohz_full, some CPUs need to be
> interrupted to run kernel work as they otherwise stay in user-space
> running realtime workloads. Interrupting these CPUs should be avoided,
> and scheduling work on them may never complete.
>
> Change mon_event_read() to pick a housekeeping CPU (one that is not
> using nohz_full), schedule mon_event_count() there, and wait. If all the
> CPUs in a domain are using nohz_full, then an IPI is used as the fallback.
It is not clear to me where in this solution an IPI is used as a
fallback ... (see below)
> + int cpu;
> +
> + /* When picking a CPU from cpu_mask, ensure it can't race with cpuhp */
> + lockdep_assert_held(&rdtgroup_mutex);
> +
> /*
> - * setup the parameters to send to the IPI to read the data.
> + * setup the parameters to pass to mon_event_count() to read the data.
> */
> rr->rgrp = rdtgrp;
> rr->evtid = evtid;
> @@ -537,7 +543,16 @@ void mon_event_read(struct rmid_read *rr, struct rdt_resource *r,
> rr->val = 0;
> rr->first = first;
>
> - smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
> + cpu = get_cpu();
> + if (cpumask_test_cpu(cpu, &d->cpu_mask)) {
> + mon_event_count(rr);
> + put_cpu();
> + } else {
> + put_cpu();
> +
> + cpu = cpumask_any_housekeeping(&d->cpu_mask);
> + smp_call_on_cpu(cpu, mon_event_count, rr, false);
> + }
> }
>
... from what I can tell there is no IPI fallback here. As per the
previous patch, I understand that cpumask_any_housekeeping() could still
return a nohz_full CPU, and calling smp_call_on_cpu() on it would not
send an IPI but instead queue the work to it. What did I miss?
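
For reference, my reading of the helper from the previous patch is
roughly the sketch below (reconstructed from memory for illustration,
so the details may not match the actual patch; it assumes
<linux/cpumask.h> and <linux/tick.h>):

	/*
	 * Sketch: prefer a CPU in @mask that is not nohz_full, but if
	 * every CPU in @mask is nohz_full, return one of those anyway.
	 */
	static inline unsigned int cpumask_any_housekeeping(const struct cpumask *mask)
	{
		unsigned int cpu, hk_cpu;

		cpu = cpumask_any(mask);
		if (!tick_nohz_full_cpu(cpu))
			return cpu;

		/* Look for a CPU in @mask outside tick_nohz_full_mask. */
		hk_cpu = cpumask_nth_andnot(0, mask, tick_nohz_full_mask);
		if (hk_cpu < nr_cpu_ids)
			cpu = hk_cpu;

		return cpu;
	}

In the all-nohz_full case the returned CPU is still nohz_full, and
smp_call_on_cpu() queues the work to that CPU's kworker rather than
sending an IPI.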
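
If an explicit IPI fallback is intended, I would have expected the call
site to look something like the sketch below (control flow only:
smp_call_function_any() takes a void-returning smp_call_func_t while
smp_call_on_cpu() takes an int-returning function, so the two callbacks
cannot literally be the same mon_event_count as written here):

		cpu = cpumask_any_housekeeping(&d->cpu_mask);
		if (tick_nohz_full_cpu(cpu)) {
			/* Every CPU in the domain is nohz_full: fall back to an IPI. */
			smp_call_function_any(&d->cpu_mask, mon_event_count, rr, 1);
		} else {
			smp_call_on_cpu(cpu, mon_event_count, rr, false);
		}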
Reinette