Message-ID: <20160104155915.GI6344@twins.programming.kicks-ass.net>
Date: Mon, 4 Jan 2016 16:59:15 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Andy Lutomirski <luto@...capital.net>
Cc: Dominique Martinet <dominique.martinet@....fr>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Al Viro <viro@...iv.linux.org.uk>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
V9FS Developers <v9fs-developer@...ts.sourceforge.net>,
Linux FS Devel <linux-fsdevel@...r.kernel.org>
Subject: Re: [V9fs-developer] Hang triggered by udev coldplug, looks like a race
On Tue, Dec 29, 2015 at 10:43:26PM -0800, Andy Lutomirski wrote:
> [add cc's]
>
> Hi scheduler people:
>
> This is relatively easy for me to reproduce. Any hints for debugging
> it? Could we really have a bug in which processes that are
> schedulable as a result of mutex unlock aren't always reliably
> scheduled?
I would expect that to cause widespread failure; then again, virt is known
to tickle timing issues that are improbable on actual hardware, so
anything is possible.
Does it reproduce with DEBUG_MUTEXES set? (I'm not seeing a .config
here.)
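For reference, a minimal sketch of enabling the mutex debugging option in a kernel tree before rebuilding, using the stock scripts/config helper (paths assume a standard kernel source checkout):

```shell
# From the top of the kernel source tree: turn on CONFIG_DEBUG_MUTEXES,
# which adds ownership and state checks to the mutex slow path.
./scripts/config --enable CONFIG_DEBUG_MUTEXES

# Resolve any dependent options with their defaults, then rebuild.
make olddefconfig
make -j"$(nproc)"
```

Note that DEBUG_MUTEXES disables the optimistic spin-on-owner fast path, so it also serves to rule that code out as a suspect.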
If it's really easy to reproduce, you could start by tracing
events/sched/sched_switch and events/sched/sched_wakeup; those are the
actual scheduling events.
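A sketch of capturing those two tracepoints with ftrace via tracefs (the mount point may be /sys/kernel/tracing or /sys/kernel/debug/tracing depending on the kernel; the output path is an arbitrary choice):

```shell
# Enable just the two scheduler tracepoints of interest.
cd /sys/kernel/debug/tracing
echo 0 > tracing_on
echo > trace                                   # clear the ring buffer
echo 1 > events/sched/sched_wakeup/enable
echo 1 > events/sched/sched_switch/enable
echo 1 > tracing_on

# ... reproduce the hang, then snapshot the buffer:
echo 0 > tracing_on
cat trace > /tmp/sched-trace.log
```

A wakeup with no subsequent sched_switch to the woken task in the log would point at the scheduler; a missing wakeup altogether would point back at the mutex code.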
Without DEBUG_MUTEXES there's the MUTEX_SPIN_ON_OWNER code that could
still confuse things, but that's mutex internal and not scheduler
related.
If it ends up being the SPIN_ON_OWNER bits, we'll have to cook up some
extra debug patches.