Message-ID: <20150403054512.GA11041@gmail.com>
Date: Fri, 3 Apr 2015 07:45:13 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Chris J Arges <chris.j.arges@...onical.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Rafael David Tinoco <inaddy@...ntu.com>,
Peter Anvin <hpa@...or.com>,
Jiang Liu <jiang.liu@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Jens Axboe <axboe@...nel.dk>,
Frederic Weisbecker <fweisbec@...il.com>,
Gema Gomez <gema.gomez-solano@...onical.com>,
the arch/x86 maintainers <x86@...nel.org>
Subject: Re: smp_call_function_single lockups
* Chris J Arges <chris.j.arges@...onical.com> wrote:
>
>
> On 04/02/2015 02:07 PM, Ingo Molnar wrote:
> >
> > * Chris J Arges <chris.j.arges@...onical.com> wrote:
> >
> >> Whenever we look through the crashdump we see csd_lock_wait waiting
> >> for the CSD_FLAG_LOCK bit to be cleared (a sketch of that wait loop
> >> follows the quoted exchange below). Usually the signature leading
> >> up to that looks like the following (in the OpenStack Tempest on
> >> OpenStack and nested VM stress cases):
> >>
> >> (qemu-system-x86 task)
> >> kvm_sched_in
> >> -> kvm_arch_vcpu_load
> >> -> vmx_vcpu_load
> >> -> loaded_vmcs_clear
> >> -> smp_call_function_single
> >>
> >> (ksmd task)
> >> pmdp_clear_flush
> >> -> flush_tlb_mm_range
> >> -> native_flush_tlb_others
> >> -> smp_call_function_many
> >
> > So are these two separate smp_call_function() instances crossing
> > each other, with neither making any progress, indefinitely - as if
> > the two IPIs got lost?
> >
>
> These are two different crash signatures. Sorry for the confusion.
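
For reference, a minimal, self-contained sketch of the csd_lock_wait()
/ csd_unlock() handshake described above. This is an illustrative
user-space stand-in, not the real kernel/smp.c code (which uses
cpu_relax() and per-CPU call_single_data), but the CSD_FLAG_LOCK
semantics are the same: the caller spins until the target CPU's IPI
handler has run the function and cleared the flag.

/*
 * Illustrative sketch only -- not kernel/smp.c itself.  The caller of
 * smp_call_function_single() ends up in csd_lock_wait(), spinning
 * until the target CPU's IPI handler runs the function and clears
 * CSD_FLAG_LOCK via csd_unlock().
 */
#include <sched.h>
#include <stdatomic.h>

#define CSD_FLAG_LOCK 0x01U

struct call_single_data {
	_Atomic unsigned int flags;
	void (*func)(void *info);
	void *info;
};

/* Caller side: wait for the remote CPU to consume the request. */
static void csd_lock_wait(struct call_single_data *csd)
{
	while (atomic_load_explicit(&csd->flags, memory_order_acquire) &
	       CSD_FLAG_LOCK)
		sched_yield();		/* stand-in for cpu_relax() */
}

/* IPI-handler side: release the waiter after running csd->func. */
static void csd_unlock(struct call_single_data *csd)
{
	atomic_fetch_and_explicit(&csd->flags, ~CSD_FLAG_LOCK,
				  memory_order_release);
}

int main(void)
{
	struct call_single_data csd = { .flags = CSD_FLAG_LOCK };

	csd_unlock(&csd);	/* single-threaded demo: unlock first ... */
	csd_lock_wait(&csd);	/* ... so the wait returns immediately */
	return 0;
}

If the IPI is lost, nothing ever clears CSD_FLAG_LOCK and the caller
spins forever - which is exactly the csd_lock_wait signature seen in
the crashdumps.
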
So just in case, both crash signatures ought to be detected by the
patch I just sent.
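
The patch itself isn't quoted here; purely as an illustration of the
general idea - and explicitly not the actual patch - a timeout-style
detector could bound the wait and report the stuck CSD instead of
spinning silently:

/*
 * Hypothetical illustration only -- NOT the patch referenced above.
 * One way to surface a lost-IPI hang: bound the csd wait and report
 * the stuck request instead of spinning forever.
 */
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

#define CSD_FLAG_LOCK		0x01U
#define CSD_WAIT_WARN_LOOPS	(1UL << 28)	/* arbitrary demo bound */

struct call_single_data {
	_Atomic unsigned int flags;
	void (*func)(void *info);
	void *info;
};

static void csd_lock_wait_timeout(struct call_single_data *csd)
{
	unsigned long loops = 0;

	while (atomic_load_explicit(&csd->flags, memory_order_acquire) &
	       CSD_FLAG_LOCK) {
		if (++loops == CSD_WAIT_WARN_LOOPS)
			fprintf(stderr,
				"csd %p: still waiting for CSD_FLAG_LOCK\n",
				(void *)csd);
		sched_yield();		/* stand-in for cpu_relax() */
	}
}

int main(void)
{
	struct call_single_data csd = { .flags = 0 };	/* already unlocked */

	csd_lock_wait_timeout(&csd);	/* returns immediately in this demo */
	return 0;
}
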
Thanks,
Ingo