Message-ID: <ZbVNeVkGItt1KTan@slm.duckdns.org>
Date: Sat, 27 Jan 2024 08:37:45 -1000
From: Tejun Heo <tj@...nel.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mikulas Patocka <mpatocka@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>, linux-kernel@...r.kernel.org,
dm-devel@...ts.linux.dev, Mike Snitzer <msnitzer@...hat.com>,
Ignat Korchagin <ignat@...udflare.com>,
Damien Le Moal <damien.lemoal@....com>,
Bob Liu <bob.liu@...cle.com>, Hou Tao <houtao1@...wei.com>,
Nathan Huckleberry <nhuck@...gle.com>,
Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH] softirq: fix memory corruption when freeing tasklet_struct

On Fri, Jan 26, 2024 at 01:43:25PM -1000, Tejun Heo wrote:
> Hello,
>
> The following is a draft patch which implements atomic workqueues and
> converts dm-crypt to use them instead of tasklets. It's an early draft and
> very lightly tested, but it seems to work more or less. It's on top of
> wq/for-6.9 + a pending patchset. The following git branch can be used for
> testing.
>
> git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git wq-atomic-draft
>
> I'll go over it to make sure all the pieces work. While it adds some
> complications, it doesn't seem too bad, and conversion from tasklet should
> be straightforward too (see the sketch after the notes below).
>
> - It hooks into tasklet[_hi] for now, but if we get to updating all the
> tasklet users, we can just repurpose the tasklet softirq slots directly.
>
> - I thought about allowing busy-waits for flushes and cancels, but it didn't
> seem necessary. Keeping them blocking has the benefit of avoiding possible
> nasty deadlocks. We can revisit if there's a need.
>
> - Compared to tasklets, each work item goes through a bit more management
> code because I wanted to keep the code as unified as possible with regular
> threaded workqueues. That said, it's not a huge amount, and my bet is that
> the difference is unlikely to be noticeable.
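
For reference, a rough sketch of what the tasklet-to-workqueue conversion
described above could look like on the user side. The identifiers here
(WQ_ATOMIC, example_atomic_wq, the crypt_* names) are placeholders for
illustration only, not the draft's actual API; the point is just that a
tasklet_struct plus tasklet_schedule() maps onto a work_struct queued on an
atomic workqueue.

  #include <linux/workqueue.h>

  /*
   * Placeholder names: WQ_ATOMIC and example_atomic_wq are illustrative
   * and do not claim to match the draft's actual identifiers.
   */
  static struct workqueue_struct *example_atomic_wq;

  struct crypt_ctx {
          struct work_struct work;        /* was: struct tasklet_struct */
          /* ... driver-specific state ... */
  };

  static void crypt_work_fn(struct work_struct *work)
  {
          struct crypt_ctx *ctx = container_of(work, struct crypt_ctx, work);

          /* Runs in atomic (softirq-like) context, so no sleeping here. */
          /* ... complete the request using ctx ... */
  }

  static int crypt_example_init(struct crypt_ctx *ctx)
  {
          /* hypothetical flag name for an atomic workqueue */
          example_atomic_wq = alloc_workqueue("crypt_atomic", WQ_ATOMIC, 0);
          if (!example_atomic_wq)
                  return -ENOMEM;

          INIT_WORK(&ctx->work, crypt_work_fn);      /* was: tasklet_setup() */
          queue_work(example_atomic_wq, &ctx->work); /* was: tasklet_schedule() */
          return 0;
  }

One practical upside of this shape, in line with the flush/cancel point
above, is that teardown can use the regular blocking flush_work() /
cancel_work_sync() instead of tasklet_kill()-style busy-waiting.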
I should have known something was off when it worked too well on the first
try: I missed a part of the init path, so the work items were actually just
running on regular per-cpu workqueues. I'll post a properly working version
later.
Thanks.
--
tejun