Date: Mon, 10 Jun 2024 10:29:06 -1000
From: Tejun Heo <tj@...nel.org>
To: Tim Van Patten <timvp@...omium.org>
Cc: LKML <linux-kernel@...r.kernel.org>, druth@...omium.org,
	Tim Van Patten <timvp@...gle.com>,
	Lai Jiangshan <jiangshanlai@...il.com>
Subject: Re: [PATCH] workqueue: Prevent delayed work UAF kernel panic

Hello,

On Fri, Jun 07, 2024 at 11:33:43AM -0600, Tim Van Patten wrote:
> > Nothing guarantees that they'd stay NULL after wq destruction, right?
> 
> It doesn't appear it's possible to re-use a wq once it's been
> destroyed, so I don't

Hmm... how not? It can be freed and then reallocated for whatever, right?

> think they can be re-initialized once they're NULL (nor the __WQ_DESTROYING
> flag cleared). I could certainly be wrong here though.
> 
> > > Discarding all work once __WQ_DESTROYING has been set (including from
> > > the same workqueue) causes breakage, so we must check the pointers
> > > directly.
> >
> > There's only so much protection we can offer for buggy code and I'd much
> > prefer an approach where the overhead is in the destruction path rather than
> > the queueing path.
> 
> That's a good point about avoiding the overhead I hadn't given enough
> consideration.
> 
> A fix in the destruction path would presumably require draining the work
> in drain_workqueue() or discarding it in destroy_workqueue(). My naive
> interpretation of things would be to discard it, so the work isn't
> executed prematurely, but I don't know what the expectations are for
> delayed work regarding which is better: do it early or don't do it at all.
> As you've pointed out, since this is buggy code to begin with, I don't
> think there's any contract that needs to be adhered to super-closely,
> which is why I'm leaning towards the discard path.

Probably the safest thing to do would be letting destroy_workqueue() skip
destroying if it detects that the workqueue can't be drained correctly.
Obviously, there's no way to protect against queueing after destruction is
complete, though.

> Regardless, it doesn't appear we have a list of those delayed work items
> available today. Adding one should be possible, but that would include
> removing the work in delayed_work_timer_fn() in the normal path, which
> would also add overhead (and likely more than the pointer checks, due to
> searching the list, etc.).
> 
> I think a better approach here would be to update delayed_work_timer_fn()
> to check __WQ_DESTROYING and discard all attempts to queue to a destroyed
> wq. I haven't given this as much testing (I just kicked off a round of
> testing), but it should help reduce the overhead impact.

I don't know. Isn't that really partial? The workqueue struct may have been
freed and reallocated for something else long ago. What does checking a flag
on the struct mean?

If you really want to add protection against delayed queueing, one possible
way is adding per-CPU counters that track the number of work items being
delayed, checking them from the destroy path, and erroring out there.
However, ultimately, there's only so much we can do about use-after-free
bugs. It's C after all.

Thanks.

-- 
tejun
