Message-ID: <CANGgnMb_eqipXLhJfvDjWywG2xxFSiEPySiuYz=mvU94h0P6sw@mail.gmail.com>
Date: Wed, 25 Jun 2014 10:04:13 -0700
From: Austin Schuh <austin@...oton-tech.com>
To: Tejun Heo <tj@...nel.org>
Cc: Dave Chinner <david@...morbit.com>, xfs <xfs@....sgi.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org
Subject: Re: On-stack work item completion race? (was Re: XFS crash?)
On Wed, Jun 25, 2014 at 7:00 AM, Tejun Heo <tj@...nel.org> wrote:
>
> Hello,
>
> On Tue, Jun 24, 2014 at 08:05:07PM -0700, Austin Schuh wrote:
> > > I can see no reason why manual completion would behave differently
> > > from flush_work() in this case.
> >
> > I went looking for a short trace in my original log to show the problem,
> > and instead found evidence of the second problem. I still like the shorter
> > flush_work call, but that's not my call.
>
> So, are you saying that the original issue you reported isn't actually
> a problem? But didn't you imply that changing the waiting mechanism
> fixed a deadlock or was that a false positive?
Correct, that was a false positive. Sorry for the noise.
> > I spent some more time debugging, and I am seeing that tsk_is_pi_blocked is
> > returning 1 in sched_submit_work (kernel/sched/core.c). It looks
> > like sched_submit_work is not detecting that the worker task is blocked on
> > a mutex.
>
> The function unplugs the block layer and doesn't have much to do with
> workqueue although it has "_work" in its name.
Thomas moved
+ if (tsk->flags & PF_WQ_WORKER)
+ wq_worker_sleeping(tsk);
into sched_submit_work as part of the RT patchset.
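For reference, with that hunk applied sched_submit_work ends up looking roughly
like this (a sketch from my reading of the rt tree, not a verbatim copy; the
ordering of the checks may differ between RT versions):

static inline void sched_submit_work(struct task_struct *tsk)
{
	if (!tsk->state || tsk_is_pi_blocked(tsk))
		return;		/* bails out here once pi_blocked_on is set */

	/*
	 * RT patchset: tell the workqueue code this worker is going to
	 * sleep so it can wake another worker to keep the pool running.
	 */
	if (tsk->flags & PF_WQ_WORKER)
		wq_worker_sleeping(tsk);

	/*
	 * If we are going to sleep and we have plugged IO queued,
	 * make sure to submit it to avoid deadlocks.
	 */
	if (blk_needs_flush_plug(tsk))
		blk_schedule_flush_plug(tsk);
}

Note that once the early return fires, wq_worker_sleeping() is never reached.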
> > This looks very RT related right now. I see 2 problems from my reading
> > (and experimentation). The first is that the second worker isn't getting
> > started because tsk_is_pi_blocked is reporting that the task isn't blocked
> > on a mutex. The second is that even if another worker needs to be
> > scheduled because the original worker is blocked on a mutex, we need the
> > pool lock to schedule another worker. The pool lock can be acquired by any
> > CPU, and is a spin_lock. If we end up on the slow path for the pool lock,
> > we hit BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on))
> > in task_blocks_on_rt_mutex in rtmutex.c. I'm not sure how to deal with
> > either problem.
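To make that second problem concrete: even if wq_worker_sleeping() did get
called in that situation, the chain would look roughly like this (hand-drawn
sketch, not an actual trace):

  work item blocks on a mutex (an rt_mutex on RT); pi_blocked_on gets set
    -> schedule()
      -> sched_submit_work()
        -> wq_worker_sleeping()
          -> needs pool->lock (a spin_lock, i.e. another rt_mutex on RT)
            -> slow path: task_blocks_on_rt_mutex()
              -> BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on))
                 fires, since pi_blocked_on still points at the first mutex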
> >
> > Hopefully I've got all my facts right... Debugging kernel code is a whole
> > new world from userspace code.
>
> I don't have much idea how RT kernel works either. Can you reproduce
> the issues that you see on mainline?
>
> Thanks.
>
> --
> tejun
I'll see what I can do.
Thanks!
Austin