Message-ID: <cc656e8e-e652-baf9-7724-4507a9f7786d@intel.com>
Date: Tue, 3 Oct 2023 14:15:59 -0700
From: Reinette Chatre <reinette.chatre@...el.com>
To: James Morse <james.morse@....com>, <x86@...nel.org>,
<linux-kernel@...r.kernel.org>
CC: Fenghua Yu <fenghua.yu@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
H Peter Anvin <hpa@...or.com>,
Babu Moger <Babu.Moger@....com>,
<shameerali.kolothum.thodi@...wei.com>,
D Scott Phillips OS <scott@...amperecomputing.com>,
<carl@...amperecomputing.com>, <lcherian@...vell.com>,
<bobo.shaobowang@...wei.com>, <tan.shaopeng@...itsu.com>,
<xingxin.hx@...nanolis.org>, <baolin.wang@...ux.alibaba.com>,
Jamie Iles <quic_jiles@...cinc.com>,
Xin Hao <xhao@...ux.alibaba.com>, <peternewman@...gle.com>,
<dfustini@...libre.com>, <amitsinght@...vell.com>
Subject: Re: [PATCH v6 12/24] x86/resctrl: Add cpumask_any_housekeeping() for
limbo/overflow

Hi James,

On 9/14/2023 10:21 AM, James Morse wrote:
> The limbo and overflow code picks a CPU to use from the domain's list
> of online CPUs. Work is then scheduled on these CPUs to maintain
> the limbo list and any counters that may overflow.
>
> cpumask_any() may pick a CPU that is marked nohz_full, which will
> either penalise the work that CPU was dedicated to, or delay the
> processing of limbo list or counters that may overflow. Perhaps
> indefinitely. Delaying the overflow handling will skew the bandwidth
> values calculated by mba_sc, which expects to be called once a second.
>
> Add cpumask_any_housekeeping() as a replacement for cpumask_any()
> that prefers housekeeping CPUs. This helper will still return
> a nohz_full CPU if that is the only option. The CPU to use is
> re-evaluated each time the limbo/overflow work runs. This ensures
> the work will move off a nohz_full CPU once a housekeeping CPU is
> available.
>
> Reviewed-by: Shaopeng Tan <tan.shaopeng@...itsu.com>
> Tested-by: Shaopeng Tan <tan.shaopeng@...itsu.com>
> Tested-By: Peter Newman <peternewman@...gle.com>
> Signed-off-by: James Morse <james.morse@....com>
> ---
> Changes since v3:
> * typos fixed
>
> Changes since v4:
> * Made temporary variables unsigned
>
> Changes since v5:
> * Restructured cpumask_any_housekeeping() to avoid later churn.
> ---
> arch/x86/kernel/cpu/resctrl/internal.h | 24 ++++++++++++++++++++++++
> arch/x86/kernel/cpu/resctrl/monitor.c | 17 ++++++++++++-----
> 2 files changed, 36 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
> index f06d3d3e0808..37bb3de37a4a 100644
> --- a/arch/x86/kernel/cpu/resctrl/internal.h
> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
> @@ -7,6 +7,7 @@
> #include <linux/kernfs.h>
> #include <linux/fs_context.h>
> #include <linux/jump_label.h>
> +#include <linux/tick.h>
> #include <asm/resctrl.h>
>

Please maintain the empty line between groups of headers.
...
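
To spell out the expected behaviour: the helper should prefer a
housekeeping CPU and only return a nohz_full CPU if the mask contains
nothing else. A minimal sketch of that, not necessarily the exact code
in this patch and ignoring the !CONFIG_NO_HZ_FULL case for brevity,
could look like:

        static inline unsigned int
        cpumask_any_housekeeping(const struct cpumask *mask)
        {
                unsigned int cpu, hk_cpu;

                /* Any CPU will do if it is not nohz_full. */
                cpu = cpumask_any(mask);
                if (!tick_nohz_full_cpu(cpu))
                        return cpu;

                /* Otherwise prefer a housekeeping CPU from the mask. */
                hk_cpu = cpumask_nth_andnot(0, mask, tick_nohz_full_mask);
                if (hk_cpu < nr_cpu_ids)
                        cpu = hk_cpu;

                return cpu;
        }
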
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index 0bbed8c62d42..993837e46db1 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -782,9 +782,9 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d,
> void cqm_handle_limbo(struct work_struct *work)
> {
> unsigned long delay = msecs_to_jiffies(CQM_LIMBOCHECK_INTERVAL);
> - int cpu = smp_processor_id();
> struct rdt_resource *r;
> struct rdt_domain *d;
> + int cpu;
>
> mutex_lock(&rdtgroup_mutex);
>
> @@ -793,8 +793,10 @@ void cqm_handle_limbo(struct work_struct *work)
>
> __check_limbo(d, false);
>
> - if (has_busy_rmid(d))
> + if (has_busy_rmid(d)) {
> + cpu = cpumask_any_housekeeping(&d->cpu_mask);
> schedule_delayed_work_on(cpu, &d->cqm_limbo, delay);
> + }
>

ok - but if you do change the CPU the worker is running on, then I also
expect d->cqm_work_cpu to be updated. Otherwise the offline code will
not be able to determine whether the worker needs to move.
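
That is, something along these lines (an untested sketch):

        if (has_busy_rmid(d)) {
                cpu = cpumask_any_housekeeping(&d->cpu_mask);
                /* Record the new CPU so the offline path can find the worker. */
                d->cqm_work_cpu = cpu;
                schedule_delayed_work_on(cpu, &d->cqm_limbo, delay);
        }
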
> mutex_unlock(&rdtgroup_mutex);
> }
> @@ -804,7 +806,7 @@ void cqm_setup_limbo_handler(struct rdt_domain *dom, unsigned long delay_ms)
> unsigned long delay = msecs_to_jiffies(delay_ms);
> int cpu;
>
> - cpu = cpumask_any(&dom->cpu_mask);
> + cpu = cpumask_any_housekeeping(&dom->cpu_mask);
> dom->cqm_work_cpu = cpu;
>
> schedule_delayed_work_on(cpu, &dom->cqm_limbo, delay);
> @@ -814,10 +816,10 @@ void mbm_handle_overflow(struct work_struct *work)
> {
> unsigned long delay = msecs_to_jiffies(MBM_OVERFLOW_INTERVAL);
> struct rdtgroup *prgrp, *crgrp;
> - int cpu = smp_processor_id();
> struct list_head *head;
> struct rdt_resource *r;
> struct rdt_domain *d;
> + int cpu;
>
> mutex_lock(&rdtgroup_mutex);
>
> @@ -838,6 +840,11 @@ void mbm_handle_overflow(struct work_struct *work)
> update_mba_bw(prgrp, d);
> }
>
> + /*
> + * Re-check for housekeeping CPUs. This allows the overflow handler to
> + * move off a nohz_full CPU quickly.
> + */
> + cpu = cpumask_any_housekeeping(&d->cpu_mask);
> schedule_delayed_work_on(cpu, &d->mbm_over, delay);
>

Similar to above, I expect a change like this to be accompanied by an
update to d->mbm_work_cpu.
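
For example (untested sketch), before re-arming the work:

        cpu = cpumask_any_housekeeping(&d->cpu_mask);
        /* Record the new CPU so the offline path can find the worker. */
        d->mbm_work_cpu = cpu;
        schedule_delayed_work_on(cpu, &d->mbm_over, delay);
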
> out_unlock:
> @@ -851,7 +858,7 @@ void mbm_setup_overflow_handler(struct rdt_domain *dom, unsigned long delay_ms)
>
> if (!static_branch_likely(&rdt_mon_enable_key))
> return;
> - cpu = cpumask_any(&dom->cpu_mask);
> + cpu = cpumask_any_housekeeping(&dom->cpu_mask);
> dom->mbm_work_cpu = cpu;
> schedule_delayed_work_on(cpu, &dom->mbm_over, delay);
> }

Reinette