Message-ID: <20230511212358.GH2296992@hirez.programming.kicks-ass.net>
Date:   Thu, 11 May 2023 23:23:58 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Tejun Heo <tj@...nel.org>
Cc:     jiangshanlai@...il.com, torvalds@...ux-foundation.org,
        linux-kernel@...r.kernel.org, kernel-team@...a.com
Subject: Re: [PATCH 5/7] workqueue: Automatically mark CPU-hogging work items
 CPU_INTENSIVE

On Thu, May 11, 2023 at 08:19:29AM -1000, Tejun Heo wrote:
> If a per-cpu work item hogs the CPU, it can prevent other work items from
> starting through concurrency management. A per-cpu workqueue which intends
> to host such CPU-hogging work items can choose to not participate in
> concurrency management by setting %WQ_CPU_INTENSIVE; however, this can be
> error-prone and difficult to debug when missed.
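
For reference, opting such a workqueue out up front looks something like
this (a minimal sketch, not from this patch; the names and the init
function are made up):

#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *hog_wq;

static void hog_work_fn(struct work_struct *work)
{
	/* long-running, CPU-bound payload goes here */
}
static DECLARE_WORK(hog_work, hog_work_fn);

static int __init hog_init(void)
{
	/* opt out of concurrency management at allocation time */
	hog_wq = alloc_workqueue("hog_wq", WQ_CPU_INTENSIVE, 0);
	if (!hog_wq)
		return -ENOMEM;

	queue_work(hog_wq, &hog_work);
	return 0;
}
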
> 
> This patch adds automatic CPU-usage-based detection. If a
> concurrency-managed work item continuously consumes more CPU time than
> the threshold (10ms by default) without intervening sleeps,
> wq_worker_tick(), which is called from scheduler_tick(), will detect the
> condition and automatically mark it CPU_INTENSIVE.
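
Roughly, the accounting works like this (a simplified stand-alone C
model, not the actual kernel code; all names here are illustrative):

#include <stdbool.h>
#include <stdint.h>

#define CPU_INTENSIVE_THRESH_NS	(10ULL * 1000 * 1000)	/* 10ms default */

struct worker_acct {
	uint64_t current_at_ns;	/* worker CPU time when the item started */
	bool	 cpu_intensive;	/* already taken out of concurrency mgmt? */
};

/*
 * Called from the tick while a work item is running.  cputime_ns only
 * accumulates while the worker actually runs, so sleeps don't count
 * toward the threshold.
 */
static void worker_tick(struct worker_acct *w, uint64_t cputime_ns)
{
	if (w->cpu_intensive)
		return;
	if (cputime_ns - w->current_at_ns < CPU_INTENSIVE_THRESH_NS)
		return;
	/* hogged the CPU past the threshold: mark it and stop counting it */
	w->cpu_intensive = true;
}
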
> 
> The mechanism isn't foolproof:
> 
> * Detection depends on the tick hitting the work item. Getting preempted
>   with the right timing may allow a violating work item to evade
>   detection, at least temporarily.

Right, if you have active tick avoidance in your work items, you've got
bigger problems :-)

> * nohz_full CPUs may not be running ticks and thus can fail detection.

We do have sched_tick_remote() for the NOHZ_FULL case; it's all a big
can of tricky, but it might just do -- if you really care, ofc.

> * Even when detection is working, the 10ms detection delays can add up if
>   many CPU-hogging work items are queued at the same time.

HZ=100 assumption there :-) My HZs are bigger 'n yours etc.
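
(Concretely: with, say, a hundred such hogs queued back-to-back on one
CPU, each gets to burn the full 10ms before being reclassified, so the
serialized detection delay adds up to roughly 100 * 10ms = 1s.)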

> However, in the vast majority of cases, this should detect violations
> reliably and provide reasonable protection with a small increase in
> code complexity.
> 
> If some work items trigger this condition repeatedly, the bigger
> problem likely is that the CPU is saturated with such per-cpu work
> items, and the solution would be to make them UNBOUND. The next patch
> will add a debug mechanism to help spot such cases.
> 
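
For the unbound case that's just a different flag at allocation time
(same sketch caveats as above):

/* workers may run anywhere; no per-cpu concurrency management */
hog_wq = alloc_workqueue("hog_wq", WQ_UNBOUND, 0);
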
> v3: Switch to use wq_worker_tick() instead of hooking into preemptions as
>     suggested by Peter.
> 
> v2: Lai pointed out that wq_worker_stopping() also needs to be called from
>     preemption and rtlock paths and an earlier patch was updated
>     accordingly. This patch adds a comment describing the risk of infinite
>     recursions and how they're avoided.

I tend to prefer these changelog-changelogs to go below the --- so that
they go away on applying; they're not really relevant when reading the
patch in a year's time, trying to figure out wtf it did.

> Signed-off-by: Tejun Heo <tj@...nel.org>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
> Cc: Lai Jiangshan <jiangshanlai@...il.com>
> ---

Anyway, this seems entirely reasonable.

Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>

