Message-ID: <aJotnxPj_OXkrc42@slm.duckdns.org>
Date: Mon, 11 Aug 2025 07:51:27 -1000
From: Tejun Heo <tj@...nel.org>
To: Christoph Hellwig <hch@....de>
Cc: Andrey Albershteyn <aalbersh@...hat.com>, fsverity@...ts.linux.dev,
linux-fsdevel@...r.kernel.org, linux-xfs@...r.kernel.org,
david@...morbit.com, djwong@...nel.org, ebiggers@...nel.org,
Lai Jiangshan <jiangshanlai@...il.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC 04/29] fsverity: add per-sb workqueue for post read
processing
Hello,
On Mon, Aug 11, 2025 at 01:45:19PM +0200, Christoph Hellwig wrote:
> On Mon, Jul 28, 2025 at 10:30:08PM +0200, Andrey Albershteyn wrote:
> > From: Andrey Albershteyn <aalbersh@...hat.com>
> >
> > For XFS, fsverity's global workqueue is not really suitable due to:
> >
> > 1. High priority workqueues are used within XFS to ensure that data
> > IO completion cannot stall processing of journal IO completions.
> > Hence using a WQ_HIGHPRI workqueue directly in the user data IO
> > path is a potential filesystem livelock/deadlock vector.
>
> Do they? I thought the whole point of WQ_HIGHPRI was that they'd
> have separate rescue workers to avoid any global pool effects.
HIGHPRI and MEM_RECLAIM are orthogonal. HIGHPRI makes the workqueue use
worker pools with high priority, so all of its work items execute at MIN_NICE
(-20). Hmm... actually, the rescuer doesn't set its priority according to the
workqueue's, which seems buggy.
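To illustrate the point, here is a minimal sketch (the workqueue name and
max_active value are illustrative, not from the patch) showing that the two
flags are independent arguments to alloc_workqueue() and can be combined or
used separately:

```c
/*
 * Illustrative sketch only, not the actual patch code.
 *
 * WQ_HIGHPRI selects the high-priority per-CPU worker pools, so queued
 * work items run at MIN_NICE (nice -20).  WQ_MEM_RECLAIM guarantees a
 * dedicated rescuer thread so the workqueue can make forward progress
 * under memory pressure.  Neither flag implies the other.
 */
struct workqueue_struct *wq;

wq = alloc_workqueue("fsverity_read_wq",
		     WQ_HIGHPRI | WQ_MEM_RECLAIM, 0);
if (!wq)
	return -ENOMEM;
```

Note that, per the observation above, the rescuer spawned for WQ_MEM_RECLAIM
currently runs at default priority even when WQ_HIGHPRI is set.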
Thanks.
--
tejun