Message-ID: <20140624032521.GA12164@htj.dyndns.org>
Date:	Mon, 23 Jun 2014 23:25:21 -0400
From:	Tejun Heo <tj@...nel.org>
To:	Dave Chinner <david@...morbit.com>
Cc:	Austin Schuh <austin@...oton-tech.com>, xfs <xfs@....sgi.com>,
	linux-kernel@...r.kernel.org
Subject: Re: On-stack work item completion race? (was Re: XFS crash?)

Hello,

On Tue, Jun 24, 2014 at 01:02:40PM +1000, Dave Chinner wrote:
> start_flush_work() is effectively a special queue_work()
> implementation, so if it's not safe to call complete() from the
> workqueue as the above patch implies, then this code has the same
> problem.
> 
> Tejun - is this "do it yourself completion" a known issue w.r.t.
> workqueues? I can't find any documentation that says "don't do
> that" so...?

It's more complex than using flush_work(), but there's nothing
fundamentally wrong with it.  A work item is completely unlinked
before its execution starts, so it's safe to free the work item once
its work function has started, whether via kfree() or by returning
(for an on-stack item).
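To make the pattern concrete, the on-stack manual-completion variant being discussed looks roughly like this (a minimal sketch; my_ctx, my_work_fn and my_wq are illustrative names, not the actual XFS code):

```c
struct my_ctx {
	struct work_struct work;
	struct completion done;
};

static void my_work_fn(struct work_struct *work)
{
	struct my_ctx *ctx = container_of(work, struct my_ctx, work);

	/* ... the actual work ... */

	/* The item was unlinked from the workqueue before my_work_fn()
	 * started, so the caller's stack frame holding ctx may be
	 * unwound as soon as complete() wakes the waiter. */
	complete(&ctx->done);
}

static void run_and_wait(struct workqueue_struct *my_wq)
{
	struct my_ctx ctx;

	INIT_WORK_ONSTACK(&ctx.work, my_work_fn);
	init_completion(&ctx.done);
	queue_work(my_wq, &ctx.work);
	wait_for_completion(&ctx.done);	/* "do it yourself" flush */
	destroy_work_on_stack(&ctx.work);
}
```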

One difference between flush_work() and manual completion is that if
the work item gets requeued, flush_work() also waits for the requeued
instance to finish; but since the work item here is one-shot, that
makes no difference.

I can see no reason why manual completion would behave differently
from flush_work() in this case.
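For comparison, the flush_work() form of the same wait needs no completion at all (again a sketch with illustrative names; some_work_fn stands in for any work function):

```c
static void run_and_flush(struct workqueue_struct *my_wq)
{
	struct work_struct work;

	INIT_WORK_ONSTACK(&work, some_work_fn);	/* no completion needed */
	queue_work(my_wq, &work);
	flush_work(&work);	/* also waits for a requeued instance */
	destroy_work_on_stack(&work);
}
```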

> As I understand it, what then happens is that the workqueue code
> grabs another kworker thread and runs the next work item in its
> queue. IOWs, work items can block, but doing that does not prevent
> execution of other work items queued on other work queues or even on
> the same work queue. Tejun, did I get that correct?

Yes, as long as the workqueue is under its @max_active limit and has
access to an existing kworker or can create a new one, it'll start
executing the next work item immediately; however, the guaranteed
level of concurrency is 1 even for WQ_RECLAIM workqueues.  IOW, the
work items queued on such a workqueue must be able to make forward
progress with a single work item executing at a time if they are
being depended upon for memory reclaim.
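That rescuer-backed guarantee is requested when the workqueue is created; in the kernel source the flag is spelled WQ_MEM_RECLAIM (sketch, illustrative names):

```c
static struct workqueue_struct *my_wq;

static int __init my_init(void)
{
	/* WQ_MEM_RECLAIM attaches a rescuer thread, so at least one
	 * work item can always run even when no new kworker can be
	 * created under memory pressure.  max_active = 0 picks the
	 * default limit. */
	my_wq = alloc_workqueue("my-reclaim-wq", WQ_MEM_RECLAIM, 0);
	if (!my_wq)
		return -ENOMEM;
	return 0;
}
```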

> Hence the work on the xfs-data queue will block until another
> kworker processes the item on the xfs-alloc-wq which means progress
> is made and the inode gets unlocked. Then the kworker for the work
> on the xfs-data queue will get the lock, complete its work and
> everything has resolved itself.

As long as a WQ_RECLAIM workqueue doesn't depend upon itself,
forward progress is guaranteed.
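The self-dependency to avoid looks like this (illustrative names): with the guaranteed concurrency of 1, if the rescuer is the only available execution context, the nested item can never start and the flush waits forever:

```c
static void self_dependent_fn(struct work_struct *work)
{
	/* Anti-pattern: queue another item on the *same* reclaim
	 * workqueue and then wait for it.  If this work item is
	 * running on the rescuer (the single guaranteed execution
	 * context), other_work never runs and this deadlocks. */
	queue_work(my_reclaim_wq, &other_work);
	flush_work(&other_work);
}
```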

Thanks.

-- 
tejun
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
