Date:	Tue, 13 Nov 2012 14:23:50 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Jacob Pan <jacob.jun.pan@...ux.intel.com>
Cc:	Linux PM <linux-pm@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Rafael Wysocki <rafael.j.wysocki@...el.com>,
	Len Brown <len.brown@...el.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
	Zhang Rui <rui.zhang@...el.com>, Rob Landley <rob@...dley.net>,
	Arjan van de Ven <arjan@...ux.intel.com>
Subject: Re: [PATCH 3/3] PM: Introduce Intel PowerClamp Driver

On Tue, Nov 13, 2012 at 01:39:22PM -0800, Jacob Pan wrote:
> On Tue, 13 Nov 2012 13:16:02 -0800
> "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> wrote:
> 
> > > Please refer to Documentation/thermal/intel_powerclamp.txt for more
> > > details.  
> > 
> > If I read this correctly, this forces a group of CPUs into idle for
> > about 600 milliseconds at a time.  This would indeed delay grace
> > periods, which could easily result in user complaints.  Also, given
> > the default RCU_BOOST_DELAY of 500 milliseconds in kernels enabling
> > RCU_BOOST, you would see needless RCU priority boosting.
> > 
> the default idle injection duration is 6ms. We adjust the sleep
> interval to maintain the idle ratio, so the idle duration stays the same
> once set. Would it be safe to delay the grace period by this small amount
> in exchange for less overhead in each injection period?

Ah, 6ms of delay is much better than 600ms.  Should be OK (famous last
words!).

> > Of course, if the idle period extended for longer, you would see RCU
> > CPU stall warnings.  And if the idle period extended indefinitely, you
> > could hang the system -- the RCU callbacks on the idled CPU could not
> > be invoked, and if one of those RCU callbacks was waking someone up,
> > that someone would not be woken up.
> > 
> for the same algorithm, the idle duration is not extended. The injected
> idle loop also yields to pending softirqs; I guess that is what RCU
> callbacks are using?

For most kernel configuration options, it does use softirq.  And yes,
the kthread you are using would yield to softirqs -- but only as long
as softirq processing hasn't moved over to ksoftirqd.  Longer term,
RCU will be moving from softirq to kthreads, though, and these might be
preempted by your powerclamp kthread, depending on priorities.  It looks
like you use RT prio 50, which would usually preempt the RCU kthreads
(unless someone changed the priorities).

> > It looks like you could end up with part of the system powerclamped
> > in some situations, and with all of it powerclamped in other
> > situations. Is that the case, or am I confused?
> > 
> could you explain the part that is partially powerclamped?

Suppose that a given system has two sockets.  Are the two sockets
powerclamped independently, or at the same time?  My guess was the
former, but looking at the code again, it looks like the latter.
So it is a good thing I asked, I guess.  ;-)

 							Thanx, Paul

> [Jacob Pan]
> 
> -- 
> Thanks,
> 
> Jacob

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
