Message-ID: <ZtpcI-Qv_Q6g0Q6Z@slm.duckdns.org>
Date: Thu, 5 Sep 2024 15:34:27 -1000
From: Tejun Heo <tj@...nel.org>
To: Mike Snitzer <snitzer@...nel.org>
Cc: Eric Biggers <ebiggers@...nel.org>,
	Mikulas Patocka <mpatocka@...hat.com>, dm-devel@...ts.linux.dev,
	Alasdair Kergon <agk@...hat.com>,
	Lai Jiangshan <jiangshanlai@...il.com>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Sami Tolvanen <samitolvanen@...gle.com>, linux-nfs@...r.kernel.org
Subject: Re: sharing rescuer threads when WQ_MEM_RECLAIM needed? [was: Re: dm
 verity: don't use WQ_MEM_RECLAIM]

Hello,

On Thu, Sep 05, 2024 at 07:35:41PM -0400, Mike Snitzer wrote:
...
> > I wonder if there's any way to safely share the rescuer threads.
> 
> Oh, I like that idea, yes please! (would be surprised if it exists,
> but I love being surprised!).  Like Mikulas pointed out, we have had
> to deal with fundamental deadlocks due to resource sharing in DM.
> Hence the need for guaranteed forward progress that only
> WQ_MEM_RECLAIM can provide.

The most straightforward way to do this would be to simply share the
workqueue across the entities that want to be in the same forward-progress
guarantee domain. It shouldn't be that difficult to make workqueues share a
rescuer either, but that may be a bit of an overkill.
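For illustration, a minimal sketch of the shared-workqueue option; the
module and queue names here are hypothetical, not anything in the tree:

	#include <linux/module.h>
	#include <linux/workqueue.h>

	/*
	 * One WQ_MEM_RECLAIM workqueue -- and therefore one rescuer
	 * thread -- shared by every entity that wants to be in the
	 * same forward-progress guarantee domain.
	 */
	static struct workqueue_struct *dm_shared_wq;

	static int __init dm_shared_wq_init(void)
	{
		dm_shared_wq = alloc_workqueue("dm_shared",
					       WQ_MEM_RECLAIM, 0);
		if (!dm_shared_wq)
			return -ENOMEM;
		return 0;
	}

	static void __exit dm_shared_wq_exit(void)
	{
		destroy_workqueue(dm_shared_wq);
	}

	module_init(dm_shared_wq_init);
	module_exit(dm_shared_wq_exit);
	MODULE_LICENSE("GPL");

Each participating target would then queue_work(dm_shared_wq, ...) instead
of allocating its own WQ_MEM_RECLAIM workqueue, so they all draw on the
single rescuer.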

Taking a step back, though: how would you determine which ones can share a
rescuer? Things which stack on top of each other can't share one, because a
higher layer occupying the rescuer would stall the lower layers and thus
deadlock. Rescuers could be shared across independent stacks of dm devices,
but that sounds like it will probably involve some graph walking. Also, is
this a real problem?
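To make the stacking concern concrete, a sketch of the deadlock; both work
functions are hypothetical, with a completion standing in for lower-layer
I/O, and both items are assumed queued on the one shared-rescuer workqueue:

	#include <linux/completion.h>
	#include <linux/workqueue.h>

	static DECLARE_COMPLETION(lower_done);

	/* Lower layer: completes the I/O the upper layer waits on. */
	static void lower_work_fn(struct work_struct *work)
	{
		complete(&lower_done);
	}

	/* Upper layer: stacked on top of the lower layer. */
	static void upper_work_fn(struct work_struct *work)
	{
		/*
		 * Under memory pressure no new kworkers can be created,
		 * so both items fall back to the single shared rescuer.
		 * If the rescuer picks this item first, it blocks here
		 * while lower_work_fn() has no thread left to run on
		 * => deadlock.
		 */
		wait_for_completion(&lower_done);
	}

Two independent stacks have no such ordering dependency between their work
items, which is why sharing across them could be safe; the cost is
verifying that independence.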

Thanks.

-- 
tejun
