Message-ID: <9d380772-6287-b75d-2d4d-e1c9a69ea981@redhat.com>
Date: Fri, 6 Sep 2024 13:23:57 +0200 (CEST)
From: Mikulas Patocka <mpatocka@...hat.com>
To: Tejun Heo <tj@...nel.org>
cc: Mike Snitzer <snitzer@...nel.org>, Eric Biggers <ebiggers@...nel.org>, 
    dm-devel@...ts.linux.dev, Alasdair Kergon <agk@...hat.com>, 
    Lai Jiangshan <jiangshanlai@...il.com>, linux-kernel@...r.kernel.org, 
    linux-mm@...ck.org, Sami Tolvanen <samitolvanen@...gle.com>, 
    linux-nfs@...r.kernel.org
Subject: Re: sharing rescuer threads when WQ_MEM_RECLAIM needed? [was: Re:
 dm verity: don't use WQ_MEM_RECLAIM]



On Thu, 5 Sep 2024, Tejun Heo wrote:

> Hello,
> 
> On Thu, Sep 05, 2024 at 07:35:41PM -0400, Mike Snitzer wrote:
> ...
> > > I wonder if there's any way to safely share the rescuer threads.
> > 
> > Oh, I like that idea, yes please! (would be surprised if it exists,
> > but I love being surprised!).  Like Mikulas pointed out, we have had
> > to deal with fundamental deadlocks due to resource sharing in DM.
> > Hence the need for guaranteed forward progress that only
> > WQ_MEM_RECLAIM can provide.
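
For context: WQ_MEM_RECLAIM is what gives a workqueue a dedicated rescuer 
thread, created up front, so at least one work item can always run even 
when no new worker thread can be spawned under memory pressure. A minimal 
sketch of allocating such a workqueue (the queue name is illustrative):

        struct workqueue_struct *wq;

        /*
         * WQ_MEM_RECLAIM pre-allocates a rescuer thread, which
         * guarantees forward progress for this queue during reclaim.
         */
        wq = alloc_workqueue("dm-example", WQ_MEM_RECLAIM, 1);
        if (!wq)
                return -ENOMEM;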

I remember that one of the first things I did when I started at Red 
Hat was to remove shared resources from device mapper :) There were shared 
mempools and shared kernel threads.

You can still see this piece of code in mm/mempool.c, which was added as a 
workaround for shared-mempool bugs:
        /*
         * FIXME: this should be io_schedule().  The timeout is there as a
         * workaround for some DM problems in 2.6.18.
         */
        io_schedule_timeout(5*HZ);
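
For context, that FIXME sits in the allocation retry path of 
mempool_alloc(); the surrounding logic looks roughly like this (abridged 
from memory, not a verbatim quote of the kernel source):

        /* mempool_alloc(): no element free, so wait for one. */
        prepare_to_wait(&pool->wait, &wait, TASK_UNINTERRUPTIBLE);
        spin_unlock_irqrestore(&pool->lock, flags);

        /*
         * With a mempool shared by stacked devices, the wakeup may never
         * come: the element being waited for can only be freed by I/O
         * that itself needs the same pool. The timeout papers over such
         * a deadlock instead of sleeping forever.
         */
        io_schedule_timeout(5*HZ);

        finish_wait(&pool->wait, &wait);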

> The most straightforward way to do this would be simply sharing the
> workqueue across the entities that want to be in the same forward-progress
> guarantee domain. It shouldn't be that difficult to make workqueues share a
> rescuer either, but that may be a bit of overkill.
> 
> Taking a step back, though: how would you determine which ones can share a
> rescuer? Things which stack on top of each other can't share a rescuer,
> because a higher layer occupying it would stall the lower layers and thus
> deadlock. Rescuers can be shared across independent stacks of dm devices,
> but that will probably involve some graph walking. Also, is this a real
> problem?
> 
> Thanks.
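
To make the first option above concrete: independent users at the same 
level of the stack could queue their work on one shared WQ_MEM_RECLAIM 
workqueue and thereby share its rescuer. A minimal sketch (the queue and 
work item names are hypothetical):

        /* Hypothetical queue shared by same-level, independent users. */
        static struct workqueue_struct *dm_shared_wq;

        static int __init dm_shared_init(void)
        {
                /* One rescuer then serves every user of this queue. */
                dm_shared_wq = alloc_workqueue("dm-shared",
                                               WQ_MEM_RECLAIM, 0);
                return dm_shared_wq ? 0 : -ENOMEM;
        }

        /* Each user queues work here instead of on a private queue. */
        queue_work(dm_shared_wq, &my_target->work);

As noted above, this is only safe between entities that do not stack on 
top of each other.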

It would be nice if we could know the dependencies of every Linux driver, 
but we are not quite there. We know the dependencies inside device mapper, 
but when you use some non-dm device (like md or loop), we don't have a 
dependency graph for it.

Mikulas

