Message-ID: <551C6A48.9060805@canonical.com>
Date: Wed, 01 Apr 2015 16:59:36 -0500
From: Chris J Arges <chris.j.arges@...onical.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
CC: Ingo Molnar <mingo@...nel.org>,
Rafael David Tinoco <inaddy@...ntu.com>,
Peter Anvin <hpa@...or.com>,
Jiang Liu <jiang.liu@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
LKML <linux-kernel@...r.kernel.org>,
Jens Axboe <axboe@...nel.dk>,
Frederic Weisbecker <fweisbec@...il.com>,
Gema Gomez <gema.gomez-solano@...onical.com>,
the arch/x86 maintainers <x86@...nel.org>
Subject: Re: smp_call_function_single lockups
On 04/01/2015 11:14 AM, Linus Torvalds wrote:
> On Wed, Apr 1, 2015 at 9:10 AM, Chris J Arges
> <chris.j.arges@...onical.com> wrote:
>>
>> Even with irqbalance removed from the L0/L1 machines the hang still occurs.
>>
>> This results in no 'apic: vector*' or 'ack_APIC*' messages being displayed.
>
> Ok. So the ack_APIC debug patch found *something*, but it seems to be
> unrelated to the hang.
>
> Dang. Oh well. Back to square one.
>
> Linus
>
For my L0 testing I've normally used a 3.13-series kernel, since it
tends to reproduce the hang very quickly with the testcase. Note that we
have reproduced an identical hang with newer kernels (3.19 plus patches)
using the OpenStack Tempest-on-OpenStack reproducer, but there the time
to reproduce varies between hours and days.
Installing a v4.0-rc6 kernel plus patches on L0 makes the problem much
slower to reproduce, so I am running those tests now; they may take days.
Is it worthwhile to do a bisect to see where, on average, the hang takes
longer to reproduce? Perhaps it would point to a relevant change, or it
may be completely useless.
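
If we go that route, one option is to script it. The sketch below is
untested, and the reproducer command, timeout, and exit-code convention
are placeholders for our actual testcase; the idea is that 'git bisect
run' marks a commit bad if the lockup triggers within the window and
good otherwise:

    #!/usr/bin/env python3
    # Hypothetical helper for `git bisect run`: run the lockup reproducer
    # under a timeout and translate the outcome into bisect exit codes
    # (0 = good, 1 = bad). REPRO_CMD and TIMEOUT are placeholders.
    import subprocess
    import sys

    REPRO_CMD = ["./run-testcase.sh"]   # hypothetical reproducer script
    TIMEOUT = 4 * 60 * 60               # seconds to wait before calling it "good"

    def main():
        try:
            # Assume the reproducer exits non-zero once it detects the hang.
            result = subprocess.run(REPRO_CMD, timeout=TIMEOUT)
            sys.exit(1 if result.returncode != 0 else 0)
        except subprocess.TimeoutExpired:
            # No hang within the window: treat this commit as "good".
            sys.exit(0)

    if __name__ == "__main__":
        main()

Of course, each bisect step here also needs a kernel build, install, and
reboot on L0, so in practice we'd wrap this in something that handles
that rather than letting 'git bisect run' call it directly. And given
how variable the reproduction time is, a single timeout window will
inevitably misclassify some commits.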
--chris