Message-ID: <ZbQozqY9qOa4Q8KR@slm.duckdns.org>
Date: Fri, 26 Jan 2024 11:49:02 -1000
From: Tejun Heo <tj@...nel.org>
To: Leonardo Bras <leobras@...hat.com>
Cc: Lai Jiangshan <jiangshanlai@...il.com>,
Marcelo Tosatti <mtosatti@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 1/1] wq: Avoid using isolated cpus' timers on
unbounded queue_delayed_work
Hello,
On Thu, Jan 25, 2024 at 10:03:20PM -0300, Leonardo Bras wrote:
..
> AS an optimization, if the current cpu is not isolated, use it's timer
  ^                                                           ^
  As                                                          its
> instead of looking for another candidate.
The sentence reads weird tho. It's always the same timer. We're deciding
which CPU to queue the timer on.
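(For reference, and trimmed down rather than taken from the patch: on the
caller side WORK_CPU_UNBOUND just means "no CPU was requested", e.g. plain
queue_delayed_work() is a thin wrapper around queue_delayed_work_on():)

	/*
	 * Sketch of the existing caller-side helper, simplified: the plain
	 * queue_delayed_work() passes WORK_CPU_UNBOUND, so "cpu" only selects
	 * which CPU the (single) timer fires on, never a different timer.
	 */
	static inline bool queue_delayed_work(struct workqueue_struct *wq,
					      struct delayed_work *dwork,
					      unsigned long delay)
	{
		return queue_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
	}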
> @@ -1958,10 +1958,24 @@ static void __queue_delayed_work(int cpu, struct workqueue_struct *wq,
>  	dwork->cpu = cpu;
>  	timer->expires = jiffies + delay;
>  
> -	if (unlikely(cpu != WORK_CPU_UNBOUND))
> -		add_timer_on(timer, cpu);
> -	else
> -		add_timer(timer);
> +	if (likely(cpu == WORK_CPU_UNBOUND)) {
> +		if (!housekeeping_enabled(HK_TYPE_TIMER)) {
> +			/* Reuse the same timer */
This comment is confusing because it's always the same timer.
> +			add_timer(timer);
> +			return;
> +		}
> +
> +		/*
> +		 * If the work is cpu-unbound, and cpu isolation is in place,
> +		 * only use timers from housekeeping cpus.
> +		 * If the current cpu is a housekeeping cpu, use it instead.
> +		 */
> +		cpu = smp_processor_id();
> +		if (!housekeeping_test_cpu(cpu, HK_TYPE_TIMER))
> +			cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
> +	}
> +
> +	add_timer_on(timer, cpu);
>  }
I find the control flow a bit difficult to follow. It's not the end of the
world to have two add_timer_on() calls. Would something like the following
be easier to read?
	if (housekeeping_enabled(HK_TYPE_TIMER)) {
		cpu = smp_processor_id();
		if (!housekeeping_test_cpu(cpu, HK_TYPE_TIMER))
			cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
		add_timer_on(timer, cpu);
	} else {
		if (likely(cpu == WORK_CPU_UNBOUND))
			add_timer(timer);
		else
			add_timer_on(timer, cpu);
	}
Thanks.
--
tejun