Message-ID: <CA+1xoqcg3of9QSJRcvtr7TvzuS+7_4oFQNd0WS1MYYgc-Fu2Gg@mail.gmail.com>
Date: Thu, 5 Apr 2012 10:25:39 +0200
From: Sasha Levin <levinsasha928@...il.com>
To: Milton Miller <miltonm@....com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, "H. Peter Anvin" <hpa@...or.com>,
Avi Kivity <avi@...hat.com>, Dave Jones <davej@...hat.com>,
kvm@...r.kernel.org,
"linux-kernel@...r.kernel.org List" <linux-kernel@...r.kernel.org>
Subject: Re: CPU softlockup due to smp_call_function()
On Thu, Apr 5, 2012 at 6:06 AM, Milton Miller <miltonm@....com> wrote:
>
> On Wed, 4 Apr 2012 about 22:12:36 +0200, Sasha Levin wrote:
>> I've started seeing soft lockups resulting from smp_call_function()
>> calls. I've attached two different backtraces of this happening with
>> different code paths.
>>
>> This is running inside a KVM guest with the trinity fuzzer, using
>> today's linux-next kernel.
>
> Hi Sasha.
>
> You have two different call sites (arch/x86/mm/pageattr.c
> cpa_flush_range and net/core/dev.c netdev_run_todo), and both
> call on_each_cpu with wait=1.
>
> I tried a few options but can't get close enough to your compiled
> length of 2a0 to know if the code is spinning on the first
> csd_lock_wait in csd_lock or in the second csd_lock_wait after the
> call to arch_send_call_function_ipi_mask (aka smp_ops + 0x44 in my
> x86_64 compile). Please check your disassembly and report.
Hi Milton,
# addr2line -i -e /usr/src/linux/vmlinux ffffffff8111f30e
/usr/src/linux/kernel/smp.c:102
/usr/src/linux/kernel/smp.c:554
So it's the second csd_lock_wait.
I'll work on adding the debug code mentioned in your mail.