lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <20140319202839.GA3656@mtj.dyndns.org>
Date:	Wed, 19 Mar 2014 16:28:39 -0400
From:	Tejun Heo <tj@...nel.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Steven Rostedt <rostedt@...dmis.org>,
	LKML <linux-kernel@...r.kernel.org>, linux-cifs@...r.kernel.org,
	Steve French <sfrench@...ba.org>,
	Clark Williams <williams@...hat.com>,
	"Luis Claudio R. Goncalves" <lclaudio@...g.org>,
	Thomas Gleixner <tglx@...utronix.de>, uobergfe@...hat.com
Subject: Re: [RFC PATCH] cifs: Fix possible deadlock with cifs and work queues

Hello, Steven, Peter.

On Wed, Mar 19, 2014 at 08:34:07PM +0100, Peter Zijlstra wrote:
> The way I understand workqueues is that we cannot guarantee concurrency
> like this. It tries, but there's no guarantee.

So, the guarantee is that if a workqueue has WQ_MEM_RECLAIM, it'll
always have at least one worker thread working on it, so workqueues
which may be depended upon during memory reclaim should have the flag
set and must not require more than a single level of concurrency to
make forward progress.  Workqueues w/o MEM_RECLAIM set depend on the
fact that memory will eventually be reclaimed and enough workers to
make forward progress will become available.
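For context, the flag in question is passed at workqueue creation time.  A
minimal kernel-side sketch (illustrative name, not the actual CIFS code; this
is a fragment, not standalone runnable code):

```c
/* A workqueue that memory reclaim may depend on: WQ_MEM_RECLAIM
 * guarantees a dedicated rescuer thread, i.e. at least one worker,
 * even under memory pressure.  Work queued here must be able to make
 * forward progress with only that single level of concurrency. */
struct workqueue_struct *wq = alloc_workqueue("reclaim_wq",
					      WQ_MEM_RECLAIM, 0);
```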

> WQ_MAX_ACTIVE seems to be a hard upper limit of concurrent workers. So
> given 511 other blocked works, the described problem will always happen.

That is actually a per-workqueue limit, and the workqueue core will
try to create as many workers as necessary to satisfy the demanded
concurrency.  i.e. having two workqueues with the same max_active
means that the total number of workers may reach 2 * max_active;
however, this is not a guarantee.  If the system is under memory
pressure and the workqueues don't have MEM_RECLAIM set, they may not
get any concurrency until more memory is made available.
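Sketching those per-workqueue semantics (hypothetical queue names; again a
kernel fragment, not runnable on its own) — max_active caps in-flight work
items per queue, not workers system-wide:

```c
/* Each queue may have up to max_active items in flight, so together
 * these two may drive up to 2 * WQ_MAX_ACTIVE concurrent workers --
 * but without WQ_MEM_RECLAIM none of that concurrency is guaranteed
 * under memory pressure. */
struct workqueue_struct *wq_a = alloc_workqueue("wq_a", 0, WQ_MAX_ACTIVE);
struct workqueue_struct *wq_b = alloc_workqueue("wq_b", 0, WQ_MAX_ACTIVE);
```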

> Creating another workqueue doesn't actually create more threads.

It looks like the issue Steven is describing is caused by having a
dependency chain longer than 1 through an rwsem in a MEM_RECLAIM
workqueue.  Moving the write work items to a separate workqueue breaks
the r-w-r chain and ensures that forward progress can be made with a
single level of concurrency on both workqueues, so, yeah, it looks
like the correct fix to me.  It is scarily subtle, though, and quite
likely to be present in other code paths too. :(

Thanks.

-- 
tejun
