Message-ID: <20140320165703.522c7c5c@gandalf.local.home>
Date: Thu, 20 Mar 2014 16:57:03 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Jeffrey Layton <jlayton@...hat.com>
Cc: LKML <linux-kernel@...r.kernel.org>, linux-cifs@...r.kernel.org,
Steve French <sfrench@...ba.org>,
Peter Zijlstra <peterz@...radead.org>,
Clark Williams <williams@...hat.com>,
"Luis Claudio R. Goncalves" <lclaudio@...g.org>,
Thomas Gleixner <tglx@...utronix.de>,
Tejun Heo <tj@...nel.org>, uobergfe@...hat.com,
Pavel Shilovsky <piastryyy@...il.com>
Subject: Re: [RFC PATCH] cifs: Fix possible deadlock with cifs and work queues

On Thu, 20 Mar 2014 15:28:33 -0400
Jeffrey Layton <jlayton@...hat.com> wrote:
> Nice analysis! I think eventually we'll need to overhaul this code not

Note, Ulrich Obergfell helped a bit in the initial analysis. He found
from a customer core dump that the kworker thread was blocked on the
cinode->lock_sem, and the reader was blocked as well. That was enough
for me to find where the problem lay.

> to use rw semaphores, but that's going to take some redesign. (Wonder
> if we could change it to use seqlocks or something?)
>
> Out of curiosity, does this eventually time out and unwedge itself?
> Usually when the server doesn't get a response to an oplock break in
> around a minute or so, it gives up and allows the thing that caused the
> oplock break to proceed anyway. Not great for performance, but it ought
> to eventually make progress due to that.

No, I believe it's hard locked. Nothing is going to wake up the oplock
break if it is blocked on a down_read(); only the release of the rwsem
will do that. It's a consequence of the subtle way the kworker threads
are managed.

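A userspace analogy of that wedge (this is not the cifs code itself; the
names, the single-worker pool, and the timeout are all illustrative): the
job that would release the semaphore is queued behind the job that is
blocked waiting on it, so the wakeup can never arrive.

```python
# Illustrative userspace analogy, not the real kernel code: one
# single-threaded work queue, where the releasing job sits in the
# queue behind the job blocked on the lock.
import threading
from concurrent.futures import ThreadPoolExecutor

lock_sem = threading.Lock()
lock_sem.acquire()                      # held, as if by a blocked reader

wq = ThreadPoolExecutor(max_workers=1)  # the lone worker thread

def oplock_break():
    # In the real bug this waits forever; the timeout here just lets
    # the demo terminate and report the failure.
    return lock_sem.acquire(timeout=0.5)

def read_complete():
    lock_sem.release()                  # the wakeup that never arrives

blocked = wq.submit(oplock_break)       # runs first and blocks
wq.submit(read_complete)                # stuck behind it in the queue
print(blocked.result())                 # False: it was never woken
wq.shutdown(wait=True)
```

The real workqueue machinery is more subtle than a single thread, but the
ordering problem is the same: the release depends on a job that cannot run.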
>
> In any case, this looks like a reasonable fix for now, but I suspect you
> can hit similar problems in the write codepath too. What may be best is
> turn this around and queue the oplock break to the new workqueue
> instead of the read completion job.

Or perhaps give both the read and write completions their own
workqueues? We need to look at all the workqueue handlers, identify any
that take the lock_sem, and separate them out.

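A sketch of that per-queue split in the same userspace analogy (again,
the names and the two single-worker pools are illustrative, not the
actual cifs workqueues):

```python
# Same analogy with separate queues: the completion job gets its own
# worker, so it can release the semaphore even while the oplock break
# is blocked on the other queue. All names are illustrative.
import threading
from concurrent.futures import ThreadPoolExecutor

lock_sem = threading.Lock()
lock_sem.acquire()                            # held, as before

oplock_wq = ThreadPoolExecutor(max_workers=1)
read_wq = ThreadPoolExecutor(max_workers=1)   # hypothetical separate queue

def oplock_break():
    lock_sem.acquire()                        # blocks, but no longer fatally
    lock_sem.release()
    return True

def read_complete():
    lock_sem.release()                        # can now run concurrently

fut = oplock_wq.submit(oplock_break)
read_wq.submit(read_complete)
print(fut.result(timeout=5))                  # True: forward progress
for wq in (oplock_wq, read_wq):
    wq.shutdown(wait=True)
```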
-- Steve