Date:	Thu, 5 May 2011 15:58:19 +0200 (CEST)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Andi Kleen <andi@...stfloor.org>
cc:	Dave Kleikamp <dkleikamp@...il.com>,
	Chris Mason <chris.mason@...cle.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	linux-kernel@...r.kernel.org, lenb@...nel.org, paulmck@...ibm.com
Subject: Re: idle issues running sembench on 128 cpus

On Thu, 5 May 2011, Thomas Gleixner wrote:
> On Thu, 5 May 2011, Andi Kleen wrote:
> > > No, it does not even need refcounting. We can access it outside of the
> > 
> > Ok.
> > 
> > > lock as this is atomic context called on the cpu which is about to go
> > > idle and therefore the device cannot go away. Easy and straightforward
> > > fix.
> > 
> > Ok. Patch appended. Looks good?
> 
> Mostly. See below.
>  
> > BTW why must the lock be irqsave?
> 
> Good question. Probably safety first paranoia :)
> 
> Indeed that code should only be called from irq disabled regions, so
> we could avoid the irqsave there. Otherwise that needs to be irqsave
> for obvious reasons.

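For illustration, a minimal sketch of that trade-off, with hypothetical
names (demo_lock and the two demo functions are made up, not the actual
tick-broadcast code): a plain raw_spin_lock() is enough when every
caller is guaranteed to run with interrupts disabled, while the
_irqsave variant is required as soon as any caller can be entered with
interrupts on and the lock is also taken from interrupt context.

#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(demo_lock);

/*
 * All callers run with interrupts already disabled (e.g. the idle
 * entry path), so no interrupt can arrive on this CPU and try to
 * take demo_lock again: the plain lock is sufficient.
 */
static void demo_notify_irqs_off(void)
{
	raw_spin_lock(&demo_lock);
	/* ... access the per-cpu tick device ... */
	raw_spin_unlock(&demo_lock);
}

/*
 * A caller may run with interrupts enabled. If an interrupt taken
 * while the lock is held also grabs demo_lock, we deadlock on this
 * CPU, hence the _irqsave variant.
 */
static void demo_notify_irqs_maybe_on(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&demo_lock, flags);
	/* ... */
	raw_spin_unlock_irqrestore(&demo_lock, flags);
}
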
Just looked through all the call sites. Both intel_idle and
processor_idle notify ENTER with interrupts disabled, but EXIT with
interrupts enabled. So if we want to remove irqsave from the
spinlock, that needs to be fixed as well.

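For clarity, a simplified sketch of that ENTER/EXIT call pattern
(hypothetical function, not a verbatim copy of intel_idle or
processor_idle): BROADCAST_ENTER is issued with interrupts off, but
BROADCAST_EXIT only after local_irq_enable(), so the notifier can
currently be entered with interrupts enabled.

#include <linux/clockchips.h>
#include <linux/irqflags.h>
#include <linux/smp.h>

static void idle_notify_sketch(void)
{
	int cpu;

	local_irq_disable();
	cpu = smp_processor_id();

	/* interrupts are disabled here */
	clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &cpu);

	/* ... enter the C-state, e.g. monitor/mwait ... */

	local_irq_enable();

	/* interrupts are enabled again: EXIT is notified with irqs on */
	clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu);
}

Dropping irqsave from the lock would therefore also require moving the
EXIT notification before local_irq_enable() in those drivers.
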
Thanks,

        tglx
