Date:	Thu, 3 Jan 2013 23:11:33 +0000
From:	"Myklebust, Trond" <Trond.Myklebust@...app.com>
To:	Tejun Heo <tj@...nel.org>
CC:	"J. Bruce Fields" <bfields@...ldses.org>,
	"Adamson, Dros" <Weston.Adamson@...app.com>,
	Dave Jones <davej@...hat.com>,
	Linux Kernel <linux-kernel@...r.kernel.org>,
	"linux-nfs@...r.kernel.org" <linux-nfs@...r.kernel.org>
Subject: Re: nfsd oops on Linus' current tree.

On Thu, 2013-01-03 at 17:26 -0500, Tejun Heo wrote:
> Hello, Trond.
> 
> On Thu, Jan 03, 2013 at 10:12:32PM +0000, Myklebust, Trond wrote:
> > > The analysis is likely completely wrong, so please don't go off doing
> > > something unnecessary.  Please take a look at what's causing the
> > > deadlocks again.
> > 
> > The analysis is a no-brainer:
> > We see a deadlock due to one work item waiting for completion of another
> > work item that is queued on the same CPU. There is no other dependency
> > between the two work items.
> 
> What do you mean "waiting for completion of"?  Is one flushing the
> other?  Or is one waiting for the other to take a certain action?  How
> are the two work items related?  Are they queued on the same
> workqueue?  Can you create a minimal repro case of the observed
> deadlock?

The two work items are running on different workqueues. One is running
on the nfsiod workqueue, and is waiting for the other to complete on
the rpciod workqueue. Basically, the nfsiod work item is trying to shut
down an RPC session, and it is waiting for each outstanding RPC call to
finish running a clean-up routine.

We can't call flush_work(), since we have no way to pin the
work_struct for any length of time, so we queue up all the RPC calls,
then sleep on a global wait queue for 1 second or until the last RPC
call wakes us up (see rpc_shutdown_client()).

In the deadlock scenario, it looks as if one (or more) of the RPC calls
is getting queued on the same CPU (but on the rpciod workqueue) as the
shutdown work item (running on nfsiod).


> Ooh, BTW, there was a bug where workqueue code created a false
> dependency between two work items.  Workqueue currently considers two
> work items to be the same if they're at the same address and won't
> execute them concurrently - i.e. it makes a work item which is queued
> again while still being executed wait for the previous execution to
> complete.
> 
> If a work function frees the work item, and then waits for an event
> which should be performed by another work item and *that* work item
> recycles the freed work item, it can create a false dependency loop.
> There really is no reliable way to detect this short of verifying
> every memory free.  A patch is queued to make such occurrences less
> likely (work functions will also have to match for two work items to
> be considered the same), but if you're seeing this, the best thing to
> do is to free the work item at the end of the work function.

That's interesting... I wonder if we may have been hitting that issue.

From what I can see, we do actually free the write RPC task (and hence
the work_struct) before we call the asynchronous unlink completion...
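
To spell out the sequence I'm worried about (the names below are
invented for illustration; this is not our actual code path):

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

static void start_async_unlink(void);	/* hypothetical stand-in for the unlink completion */

struct write_task {			/* hypothetical stand-in for the write RPC task */
	struct work_struct work;
	/* ... */
};

/* Runs on rpciod as the write RPC task's work function. */
static void write_rpc_release(struct work_struct *work)
{
	struct write_task *task = container_of(work, struct write_task, work);

	kfree(task);		/* the work_struct is freed here ... */

	/*
	 * ... but as far as the workqueue is concerned, this work item is
	 * still executing.  If the async unlink path now allocates a new
	 * task that happens to land at the same address and queues it,
	 * the workqueue treats it as the same item and won't run it until
	 * this function returns -- Tejun's false dependency.  If anything
	 * we're waiting on (e.g. the shutdown path above) needs that new
	 * work item to run first, we deadlock.
	 */
	start_async_unlink();
}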

Dros, can you see if reverting commits
324d003b0cd82151adbaecefef57b73f7959a469 and
168e4b39d1afb79a7e3ea6c3bb246b4c82c6bdb9, and then applying the attached
patch, also fixes the hang on a pristine 3.7.x kernel?

-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@...app.com
www.netapp.com

View attachment "gnurr.dif" of type "text/x-patch" (1283 bytes)
