Message-ID: <20140319194729.GB11257@laptop.programming.kicks-ass.net>
Date: Wed, 19 Mar 2014 20:47:29 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: LKML <linux-kernel@...r.kernel.org>, linux-cifs@...r.kernel.org,
Steve French <sfrench@...ba.org>,
Clark Williams <williams@...hat.com>,
"Luis Claudio R. Goncalves" <lclaudio@...g.org>,
Thomas Gleixner <tglx@...utronix.de>,
Tejun Heo <tj@...nel.org>, uobergfe@...hat.com
Subject: Re: [RFC PATCH] cifs: Fix possible deadlock with cifs and work queues
On Wed, Mar 19, 2014 at 03:43:39PM -0400, Steven Rostedt wrote:
> On Wed, 19 Mar 2014 20:34:07 +0100
> Peter Zijlstra <peterz@...radead.org> wrote:
>
> > On Wed, Mar 19, 2014 at 03:12:52PM -0400, Steven Rostedt wrote:
> > > My question to Tejun is, if we create another workqueue, to add the
> > > rdata->work to, would that prevent the above problem? Or what other
> > > fixes can we do?
> >
> > The way I understand workqueues is that we cannot guarantee concurrency
> > like this. It tries, but there's no guarantee.
> >
> > WQ_MAX_ACTIVE seems to be a hard upper limit of concurrent workers. So
> > given 511 other blocked works, the described problem will always happen.
> >
> > Creating another workqueue doesn't actually create more threads.
>
> But I noticed this:
>
> Before patch:
>
> # ps aux |grep cifs
> root 3119 0.0 0.0 0 0 ? S< 14:17 0:00 [cifsiod]
>
> After patch:
>
> # ps aux |grep cifs
> root 1109 0.0 0.0 0 0 ? S< 15:11 0:00 [cifsiod]
> root 1111 0.0 0.0 0 0 ? S< 15:11 0:00 [cifsiord]
>
> It looks like it does create new threads.
Ah, I think that's because of WQ_MEM_RECLAIM: each such workqueue gets its
own rescuer thread, and I'm not sure whether the rescuer normally
participates in running works. It's been a long time since I looked at any
of that.
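
For reference, roughly what I'd expect the patch to look like (just a sketch
against the workqueue API, not the actual cifs change; the flags and error
handling are guesses, the names are taken from the ps output above):

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *cifsiod_wq;
static struct workqueue_struct *cifsiord_wq;

static int __init cifs_wq_sketch_init(void)
{
        /*
         * WQ_MEM_RECLAIM gives each workqueue its own rescuer kthread,
         * named after the queue -- which is what shows up in ps as
         * [cifsiod] and [cifsiord].  The works themselves still run
         * from the shared kworker pools; max_active == 0 picks the
         * default concurrency limit (capped by WQ_MAX_ACTIVE).
         */
        cifsiod_wq = alloc_workqueue("cifsiod", WQ_MEM_RECLAIM, 0);
        if (!cifsiod_wq)
                return -ENOMEM;

        cifsiord_wq = alloc_workqueue("cifsiord", WQ_MEM_RECLAIM, 0);
        if (!cifsiord_wq) {
                destroy_workqueue(cifsiod_wq);
                return -ENOMEM;
        }
        return 0;
}

If that's what's going on, the extra [cifsiord] thread is only the rescuer
for the second queue; it only processes works when forward progress stalls
under memory pressure, so by itself it doesn't buy more concurrency in the
normal case.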
Let's wait for TJ to wake up.