Message-ID: <CAJhGHyDWJAZj6AYKHpJvTPdRCZa7Yvi00x1n+AtRm1qa_2PmOw@mail.gmail.com>
Date: Sun, 23 Apr 2023 11:23:28 +0800
From: Lai Jiangshan <jiangshanlai@...il.com>
To: Tejun Heo <tj@...nel.org>
Cc: torvalds@...ux-foundation.org, peterz@...radead.org,
linux-kernel@...r.kernel.org, kernel-team@...a.com
Subject: Re: [PATCH 4/5] workqueue: Automatically mark CPU-hogging work items CPU_INTENSIVE
On Wed, Apr 19, 2023 at 4:52 AM Tejun Heo <tj@...nel.org> wrote:
>
> If a per-cpu work item hogs the CPU, concurrency management can prevent
> other work items from starting. A per-cpu workqueue which intends to
> host such CPU-hogging work items can choose not to participate in
> concurrency management by setting %WQ_CPU_INTENSIVE; however, this can be
> error-prone and difficult to debug when missed.
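
For reference, such a workqueue opts out explicitly at allocation time.
A minimal sketch, with made-up names:

	/*
	 * Hypothetical driver fragment: "poll_wq", poll_fn() and
	 * poll_work are invented names, not from this patch.
	 */
	static struct workqueue_struct *poll_wq;
	static DECLARE_WORK(poll_work, poll_fn);

	poll_wq = alloc_workqueue("poll_wq", WQ_CPU_INTENSIVE, 0);
	if (!poll_wq)
		return -ENOMEM;
	queue_work(poll_wq, &poll_work);

Forgetting the flag on a workqueue like this is the error-prone part
being addressed here.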
>
> This patch adds automatic CPU-usage-based detection. If a
> concurrency-managed work item consumes more CPU time than the threshold
> (5ms by default), it's marked CPU_INTENSIVE automatically on
> schedule-out.
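
Conceptually, the schedule-out check amounts to something like the
sketch below. worker->current_at, meant here as the task's CPU time
when the current work item started, is an assumed name, not
necessarily the patch's:

	/* sketch only: has the current work item run past the threshold? */
	u64 ran = task->se.sum_exec_runtime - worker->current_at;

	if (ran >= wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
		worker_set_flags(worker, WORKER_CPU_INTENSIVE);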
>
> The mechanism isn't foolproof in that the 5ms detection delays can add up if
> many CPU-hogging work items are queued at the same time. However, in such
> situations, the bigger problem likely is the CPU being saturated with
> per-cpu work items and the solution would be making them UNBOUND.
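
(Making them unbound is a one-flag change at allocation time, e.g.
alloc_workqueue("heavy_wq", WQ_UNBOUND, 0); the name is made up.)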
>
> For occasional CPU hogging, the new automatic mechanism should provide
> reasonable protection with minimal increase in code complexity.
>
> Signed-off-by: Tejun Heo <tj@...nel.org>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Lai Jiangshan <jiangshanlai@...il.com>
> ---
>  kernel/workqueue.c          | 77 ++++++++++++++++++++++++++-----------
>  kernel/workqueue_internal.h |  1 +
>  2 files changed, 56 insertions(+), 22 deletions(-)
>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index b9e8dc54272d..d24b887ddd86 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -306,6 +306,14 @@ static struct kmem_cache *pwq_cache;
> static cpumask_var_t *wq_numa_possible_cpumask;
>                                         /* possible CPUs of each node */
>
> +/*
> + * Per-cpu work items which run for longer than the following threshold are
> + * automatically considered CPU intensive and excluded from concurrency
> + * management to prevent them from noticeably delaying other per-cpu work items.
> + */
> +static unsigned long wq_cpu_intensive_thresh_us = 5000;
> +module_param_named(cpu_intensive_thresh_us, wq_cpu_intensive_thresh_us, ulong, 0644);
> +
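
Since the parameter is 0644, the threshold should also be adjustable at
runtime through /sys/module/workqueue/parameters/cpu_intensive_thresh_us,
or set at boot via workqueue.cpu_intensive_thresh_us=.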
> static bool wq_disable_numa;
> module_param_named(disable_numa, wq_disable_numa, bool, 0444);
>
> @@ -951,9 +959,6 @@ void wq_worker_stopping(struct task_struct *task)
>          struct worker *worker = kthread_data(task);
>          struct worker_pool *pool;
>
> -        if (task_is_running(task))
> -                return;
> -
>          /*
>           * Rescuers, which may not have all the fields set up like normal
>           * workers, also reach here, let's not access anything before
> @@ -964,24 +969,49 @@ void wq_worker_stopping(struct task_struct *task)
>
>          pool = worker->pool;
>
> -        /* Return if preempted before wq_worker_running() was reached */
> -        if (worker->sleeping)
> -                return;
> +        if (task_is_running(task)) {
> +                /*
> +                 * Concurrency-managed @worker is still RUNNING. See if the
> +                 * current work item is hogging the CPU and stalling other
> +                 * per-cpu work items. If so, mark @worker CPU_INTENSIVE to
> +                 * exclude it from concurrency management. @worker->current_*
> +                 * are stable as they can only be modified by @task, which is
> +                 * %current.
Hello,

wq_worker_stopping() and sched_submit_work() are only called from
schedule(); they are not called for the various other kinds of
scheduling, such as schedule_rtlock(), preempt_schedule_*() and
__cond_resched(). A work item hogging the CPU may never call the
bare schedule(), so to make the new wq_worker_stopping() work, it
has to be added to the other kinds of scheduling too, IMO.
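
For illustration, only the bare schedule() path goes through
sched_submit_work() (abridged from kernel/sched/core.c; details vary
by kernel version):

	asmlinkage __visible void __sched schedule(void)
	{
		struct task_struct *tsk = current;

		sched_submit_work(tsk);	/* -> wq_worker_stopping() */
		do {
			preempt_disable();
			__schedule(SM_NONE);
			sched_preempt_enable_no_resched();
		} while (need_resched());
		sched_update_worker(tsk);
	}

preempt_schedule_*(), schedule_rtlock() and __cond_resched() call
__schedule() (or preempt_schedule_common()) directly and never reach
sched_submit_work().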
Thanks
Lai