lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20110620121939.GI2082@n2100.arm.linux.org.uk>
Date:	Mon, 20 Jun 2011 13:19:39 +0100
From:	Russell King - ARM Linux <linux@....linux.org.uk>
To:	Santosh Shilimkar <santosh.shilimkar@...com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	linux-omap@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org
Subject: Re: [RFC PATCH] ARM: smp: Fix the CPU hotplug race with scheduler.

On Mon, Jun 20, 2011 at 05:21:48PM +0530, Santosh Shilimkar wrote:
> On 6/20/2011 5:10 PM, Russell King - ARM Linux wrote:
>> On Mon, Jun 20, 2011 at 04:55:43PM +0530, Santosh Shilimkar wrote:
>>> On 6/20/2011 4:43 PM, Russell King - ARM Linux wrote:
>>>> On Mon, Jun 20, 2011 at 04:17:58PM +0530, Santosh Shilimkar wrote:
>>>>> Yes. It's because of interrupt and the CPU active-online
>>>>> race.
>>>>
>>>> I don't see that as a conclusion from this dump.
>>>>
>>>>> Here is the chash log..
>>>>> [   21.025451] CPU1: Booted secondary processor
>>>>> [   21.025451] CPU1: Unknown IPI message 0x1
>>>>> [   21.029113] Switched to NOHz mode on CPU #1
>>>>> [   21.029174] BUG: spinlock lockup on CPU#1, swapper/0, c06220c4
>>>>
>>>> That's the xtime seqlock.  We're trying to update the xtime from CPU1,
>>>> which is not yet online and not yet active.  That's fine, we're just
>>>> spinning on the spinlock here, waiting for the other CPUs to release
>>>> it.
>>>>
>>>> But what this is saying is that the other CPUs aren't releasing it.
>>>> The cpu hotplug code doesn't hold the seqlock either.  So who else is
>>>> holding this lock, causing CPU1 to time out on it.
>>>>
>>>> The other thing is that this is only supposed to trigger after about
>>>> one second:
>>>>
>>>>           u64 loops = loops_per_jiffy * HZ;
>>>>                   for (i = 0; i<   loops; i++) {
>>>>                           if (arch_spin_trylock(&lock->raw_lock))
>>>>                                   return;
>>>>                           __delay(1);
>>>>                   }
>>>>
>>>> which from the timings you have at the beginning of your printk lines
>>>> is clearly not the case - it's more like 61us.
>>>>
>>>> Are you running with those h/w timer delay patches?
>>> Nope.
>>
>> Ok.  So loops_per_jiffy must be too small.  My guess is you're using an
>> older kernel without 71c696b1 (calibrate: extract fall-back calculation
>> into own helper).
>>
> I am on V3.0-rc3+(latest mainline) and the above commit is already
> part of it.
>
>> The delay calibration code used to start out by setting:
>>
>> 	loops_per_jiffy = (1<<12);
>>
>> This will shorten the delay right down, and that's probably causing these
>> false spinlock lockup bug dumps.
>>
>> Arranging for IRQs to be disabled across the delay calibration just avoids
>> the issue by preventing any spinlock being taken.
>>
>> The reason that CPU#0 also complains about spinlock lockup is that for
>> some reason CPU#1 never finishes its calibration, and so the loop also
>> times out early on CPU#0.
>>
> I am not sure but what I think is happening is as soon as interrupts
> start firing, as part of IRQ handling, scheduler will try to
> enqueue softIRQ thread for newly booted CPU since it sees that
> it's active and ready. But that's failing and both CPU's
> eventually lock-up. But I may be wrong here.

Even if that happens, there is NO WAY that the spinlock lockup detector
should report lockup in anything under 1s.

>> Of course, fiddling with this global variable in this way is _not_ a good
>> idea while other CPUs are running and using that variable.
>>
>> We could also do with implementing trigger_all_cpu_backtrace() to get
>> backtraces from the other CPUs when spinlock lockup happens...
>
> Any pointers on the other question about "why we need to enable
> interrupts before the CPU is ready?"

To ensure that things like the delay loop calibration and twd calibration
can run, though that looks like it'll run happily enough with the boot
CPU updating jiffies.

However, I'm still not taking your patch because I believe its just
papering over the real issue, which is not as you describe.

You first need to work out why the spinlock lockup detection is firing
after just 61us rather than the full 1s and fix that.

You then need to work out whether you really do have spinlock lockup,
and if so, why.  Implementing trigger_all_cpu_backtrace() may help to
find out what CPU#0 is doing, though we can only do that with IRQs on,
and so would be fragile.

We can test whether CPU#0 is going off to do something else while CPU#1
is being brought up, by adding a preempt_disable() / preempt_enable()
in __cpu_up() to prevent the wait-for-cpu#1-online being preempted by
other threads - I suspect you'll still see spinlock lockup on the
xtime seqlock on CPU#1 though.  That would suggest a coherency issue.

Finally, how are you provoking this - and what kernel configuration are
you using?
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ