Date:	Mon, 20 Jun 2011 16:55:43 +0530
From:	Santosh Shilimkar <santosh.shilimkar@...com>
To:	Russell King - ARM Linux <linux@....linux.org.uk>
CC:	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	linux-omap@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org
Subject: Re: [RFC PATCH] ARM: smp: Fix the CPU hotplug race with scheduler.

On 6/20/2011 4:43 PM, Russell King - ARM Linux wrote:
> On Mon, Jun 20, 2011 at 04:17:58PM +0530, Santosh Shilimkar wrote:
>> Yes. It's because of an interrupt and the CPU active-online
>> race.
>
> I don't see that as a conclusion from this dump.
>
>> Here is the crash log:
>> [   21.025451] CPU1: Booted secondary processor
>> [   21.025451] CPU1: Unknown IPI message 0x1
>> [   21.029113] Switched to NOHz mode on CPU #1
>> [   21.029174] BUG: spinlock lockup on CPU#1, swapper/0, c06220c4
>
> That's the xtime seqlock.  We're trying to update the xtime from CPU1,
> which is not yet online and not yet active.  That's fine, we're just
> spinning on the spinlock here, waiting for the other CPUs to release
> it.
>
> But what this is saying is that the other CPUs aren't releasing it.
> The CPU hotplug code doesn't hold the seqlock either.  So who else is
> holding this lock, causing CPU1 to time out on it?
>
> The other thing is that this is only supposed to trigger after about
> one second:
>
>          u64 loops = loops_per_jiffy * HZ;
>          for (i = 0; i < loops; i++) {
>                  if (arch_spin_trylock(&lock->raw_lock))
>                          return;
>                  __delay(1);
>          }
>
> which from the timings you have at the beginning of your printk lines
> is clearly not the case - it's more like 61us.
>
> Are you running with those h/w timer delay patches?
Nope.

Regards
Santosh
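
For reference, the loop Russell quotes is the spinlock lockup detector in
lib/spinlock_debug.c: it spins loops_per_jiffy * HZ times, and each
__delay(1) burns one iteration of the calibrated delay loop, i.e. about
(1/HZ)/loops_per_jiffy seconds, so by construction the trylock loop should
run for roughly one second before the "spinlock lockup" message prints.
A minimal userspace sketch of that arithmetic, using made-up values for
loops_per_jiffy and HZ (the real numbers depend on the board calibration
and kernel config, not taken from this thread), showing why a report after
~61us means something else is wrong:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint64_t loops_per_jiffy = 4997120ULL; /* hypothetical calibration */
            unsigned int hz = 128;                 /* hypothetical CONFIG_HZ */

            uint64_t loops = loops_per_jiffy * hz;

            /*
             * Each __delay(1) costs one iteration of the calibrated delay
             * loop, so the whole trylock loop should take
             * loops / (loops_per_jiffy * HZ) seconds, i.e. ~1 second.
             */
            double secs = (double)loops / ((double)loops_per_jiffy * hz);

            printf("iterations before lockup report: %llu\n",
                   (unsigned long long)loops);
            printf("expected spin time: ~%.1f s (nowhere near 61 us)\n", secs);
            return 0;
    }

Whatever the actual loops_per_jiffy and HZ values, the two factors cancel,
so an early report points at either a broken delay calibration (e.g. the
h/w timer delay patches asked about above) or a corrupted loops value.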
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
