Message-ID: <20150407205945.GA28212@canonical.com>
Date: Tue, 7 Apr 2015 15:59:46 -0500
From: Chris J Arges <chris.j.arges@...onical.com>
To: Ingo Molnar <mingo@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Rafael David Tinoco <inaddy@...ntu.com>,
Peter Anvin <hpa@...or.com>,
Jiang Liu <jiang.liu@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Jens Axboe <axboe@...nel.dk>,
Frederic Weisbecker <fweisbec@...il.com>,
Gema Gomez <gema.gomez-solano@...onical.com>,
the arch/x86 maintainers <x86@...nel.org>
Subject: Re: [PATCH] smp/call: Detect stuck CSD locks
On Tue, Apr 07, 2015 at 11:21:21AM +0200, Ingo Molnar wrote:
>
> * Linus Torvalds <torvalds@...ux-foundation.org> wrote:
>
> > On Mon, Apr 6, 2015 at 9:58 AM, Chris J Arges
> > <chris.j.arges@...onical.com> wrote:
> > >
> > > I noticed that with this patch it never reached 'csd: Detected
> > > non-responsive CSD lock...' because it seems that ts_delta never reached
> > > CSD_LOCK_TIMEOUT. I tried adjusting the TIMEOUT value and still got a
> > > hang without reaching this statement. I made the ts0,ts1 values global
> > > and put a counter into the while loop and found that the loop iterated
> > > about 670 million times before the softlockup was detected. In addition
> > > ts0 and ts1 both had the same values upon soft lockup, and thus would
> > > never trip the CSD_LOCK_TIMEOUT check.
> >
> > Sounds like jiffies stops updating. Which doesn't sound unreasonable
> > for when there is some IPI problem.
>
> Yeah - although it weakens the 'IPI lost spuriously' hypothesis: we
> ought to have irqs enabled here which normally doesn't stop jiffies
> from updating, and the timer interrupt stopping suggests a much deeper
> problem than just some lost IPI ...
>
> >
> > How about just changing the debug patch to count iterations, and
> > print out a warning when it reaches ten million or so.
>
> Yeah, or replace jiffies_to_ms() with:
>
> sched_clock()/1000000
>
> sched_clock() should be safe to call in these codepaths.
>
> Like the attached patch. (Totally untested.)
>
Ingo,
Looks like sched_clock() works in this case.
Adding the dump_stack() call caused various issues, such as the VM oopsing on
boot or the softlockup never being detected properly (and thus never crashing).
So the output below is from a run with your patch applied and the dump_stack()
call commented out.
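
For reference, the shape of the timeout/re-send logic being exercised here is
roughly the following. This is only a sketch of the idea (conceptually in
kernel/smp.c), not Ingo's actual patch; the extra parameters to csd_lock_wait()
and the exact CSD_LOCK_TIMEOUT value are illustrative:

        /*
         * Sketch only -- not the actual debug patch.  The idea: time the
         * csd_lock_wait() spin with sched_clock() (which keeps ticking even
         * if jiffies stops updating), warn once the wait exceeds
         * CSD_LOCK_TIMEOUT, and re-send the IPI to the apparently stuck CPU.
         */
        #define CSD_LOCK_TIMEOUT        5000ULL         /* ms; adjusted to 5s for this run */

        static void csd_lock_wait(struct call_single_data *csd,
                                  int this_cpu, int target_cpu)
        {
                u64 ts0, ts1;
                int bug_id = 0;

                ts0 = sched_clock() / 1000000;          /* ns -> ms, independent of jiffies */

                while (csd->flags & CSD_FLAG_LOCK) {
                        ts1 = sched_clock() / 1000000;

                        if (ts1 - ts0 > CSD_LOCK_TIMEOUT) {
                                bug_id++;
                                pr_err("csd: Detected non-responsive CSD lock (#%d) on CPU#%02d, waiting %llu.%03llu secs for CPU#%02d\n",
                                       bug_id, this_cpu,
                                       CSD_LOCK_TIMEOUT / 1000, CSD_LOCK_TIMEOUT % 1000,
                                       target_cpu);
                                pr_err("csd: Re-sending CSD lock (#%d) IPI from CPU#%02d to CPU#%02d\n",
                                       bug_id, this_cpu, target_cpu);
                                arch_send_call_function_single_ipi(target_cpu);
                                ts0 = ts1;      /* restart the timeout window */
                        }
                        cpu_relax();
                }
        }
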
Here is the log leading up to the soft lockup (I adjusted CSD_LOCK_TIMEOUT to 5s):
[ 22.669630] kvm [1523]: vcpu0 disabled perfctr wrmsr: 0xc1 data 0xffff
[ 38.712710] csd: Detected non-responsive CSD lock (#1) on CPU#00, waiting 5.000 secs for CPU#01
[ 38.712715] csd: Re-sending CSD lock (#1) IPI from CPU#00 to CPU#01
[ 43.712709] csd: Detected non-responsive CSD lock (#2) on CPU#00, waiting 5.000 secs for CPU#01
[ 43.712713] csd: Re-sending CSD lock (#2) IPI from CPU#00 to CPU#01
[ 48.712708] csd: Detected non-responsive CSD lock (#3) on CPU#00, waiting 5.000 secs for CPU#01
[ 48.712732] csd: Re-sending CSD lock (#3) IPI from CPU#00 to CPU#01
[ 53.712708] csd: Detected non-responsive CSD lock (#4) on CPU#00, waiting 5.000 secs for CPU#01
[ 53.712712] csd: Re-sending CSD lock (#4) IPI from CPU#00 to CPU#01
[ 58.712707] csd: Detected non-responsive CSD lock (#5) on CPU#00, waiting 5.000 secs for CPU#01
[ 58.712712] csd: Re-sending CSD lock (#5) IPI from CPU#00 to CPU#01
[ 60.080005] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [ksmd:26]
Still, we never seem to release the lock, even when re-sending the IPI.

Looking at call_single_queue, I see the following (I triggered the crash dump
during the soft lockup):
crash> p call_single_queue
PER-CPU DATA TYPE:
  struct llist_head call_single_queue;
PER-CPU ADDRESSES:
  [0]: ffff88013fc16580
  [1]: ffff88013fd16580

crash> list -s call_single_data ffff88013fc16580
ffff88013fc16580
struct call_single_data {
  llist = {
    next = 0x0
  },
  func = 0x0,
  info = 0x0,
  flags = 0
}

crash> list -s call_single_data ffff88013fd16580
ffff88013fd16580
struct call_single_data {
  llist = {
    next = 0xffff88013a517c08
  },
  func = 0x0,
  info = 0x0,
  flags = 0
}
ffff88013a517c08
struct call_single_data {
  llist = {
    next = 0x0
  },
  func = 0xffffffff81067f30 <flush_tlb_func>,
  info = 0xffff88013a517d00,
  flags = 3
}
This seems consistent with previous crash dumps.
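
For what it's worth, flags = 3 on that pending entry decodes as
CSD_FLAG_LOCK | CSD_FLAG_WAIT, assuming the flag values kernel/smp.c used
around this time (worth double-checking against the exact tree):

        /* Assumed values from kernel/smp.c of this era -- verify against the tree: */
        enum {
                CSD_FLAG_LOCK   = 0x01,         /* csd still in flight, owner may not reuse it */
                CSD_FLAG_WAIT   = 0x02,         /* caller is spinning in csd_lock_wait() */
        };

If so, the flush_tlb_func entry still sitting on CPU#01's call_single_queue
with flags = 3 matches the log above: a synchronous call was queued to CPU#01,
its IPI handler never ran, and CPU#00 is left spinning in csd_lock_wait().
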
As I mentioned here: https://lkml.org/lkml/2015/4/6/186
I'm able to reproduce this easily on certain hardware with
b6b8a1451fc40412c57d10c94b62e22acab28f94 applied and
9242b5b60df8b13b469bc6b7be08ff6ebb551ad3 not applied on the L0 kernel. I think
it makes sense to get as clear a picture as possible with this more trivial
reproducer, then re-run this on an L0 with v4.0-rcX. Most likely the latter
case will take many days to reproduce.
Thanks,
--chris