Date:	Fri, 8 May 2009 05:50:23 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Christoph Lameter <cl@...ux.com>,
	Alok Kataria <akataria@...are.com>,
	"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	the arch/x86 maintainers <x86@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	"alan@...rguk.ukuu.org.uk" <alan@...rguk.ukuu.org.uk>
Subject: Re: [PATCH] x86: Reduce the default HZ value

On Fri, May 08, 2009 at 12:32:56PM +0200, Peter Zijlstra wrote:
> On Thu, 2009-05-07 at 11:01 -0700, Paul E. McKenney wrote:
> 
> > In general, I agree.  However, in the case where you have a single
> > CPU-bound task running in user mode, you don't care that much about
> > syscall performance.  So, yes, this would mean having yet another config
> > variable that users running big CPU-bound scientific applications would
> > need to worry about, which is not perfect either.
> > 
> > For whatever it is worth, the added overhead on entry would be something
> > like the following:
> > 
> > void rcu_irq_enter(void)
> > {
> > 	struct rcu_dynticks *rdtp = &__get_cpu_var(rcu_dynticks);
> > 
> > 	if (rdtp->dynticks_nesting++)
> > 		return;
> > 	rdtp->dynticks++;
> > 	WARN_ON_RATELIMIT(!(rdtp->dynticks & 0x1), &rcu_rs);
> > 	smp_mb(); /* CPUs seeing ++ must see later RCU read-side crit sects */
> > }
> > 
> > On exit, a bit more:
> > 
> > void rcu_irq_exit(void)
> > {
> > 	struct rcu_dynticks *rdtp = &__get_cpu_var(rcu_dynticks);
> > 
> > 	if (--rdtp->dynticks_nesting)
> > 		return;
> > 	smp_mb(); /* CPUs seeing ++ must see prior RCU read-side crit sects */
> > 	rdtp->dynticks++;
> > 	WARN_ON_RATELIMIT(rdtp->dynticks & 0x1, &rcu_rs);
> > 
> > 	/* If the interrupt queued a callback, get out of dyntick mode. */
> > 	if (__get_cpu_var(rcu_data).nxtlist ||
> > 	    __get_cpu_var(rcu_bh_data).nxtlist)
> > 		set_need_resched();
> > }
> > 
> > But I could move the callback check into call_rcu(), which would get the
> > overhead of rcu_irq_exit() down to about that of rcu_irq_enter().
> 
> Can't you simply enter idle state after a grace period completes and
> finds no pending callbacks for the next period. And leave idle state at
> the next call_rcu()?

If there were no RCU callbacks -globally- across all CPUs, yes.  But
the check at the end of rcu_irq_exit() is testing only on the current
CPU.  Checking across all CPUs is expensive and racy.
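
For concreteness, the alternative mentioned in the quoted text above --
moving the callback check out of rcu_irq_exit() and into call_rcu() --
might look roughly like the sketch below.  The rcu_data/nxtlist names
come from the snippets above; the nxttail tail pointer and the exact
enqueue details are assumptions for illustration, not the actual
kernel implementation.

	void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu))
	{
		unsigned long flags;
		struct rcu_data *rdp;

		head->func = func;
		head->next = NULL;

		local_irq_save(flags);
		rdp = &__get_cpu_var(rcu_data);

		/* Queue the callback on this CPU's list. */
		*rdp->nxttail = head;
		rdp->nxttail = &head->next;

		/*
		 * If this was the first callback queued on this CPU, the CPU
		 * might be in (or about to enter) dynticks-idle mode, so poke
		 * it to restart the tick and process the new callback.
		 */
		if (rdp->nxtlist == head)
			set_need_resched();

		local_irq_restore(flags);
	}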

So what happens instead is that there is rcu_needs_cpu(), which gates
entry into dynticks-idle mode.  This function returns 1 if there are
callbacks on the current CPU.  So, if no CPU has an RCU callback, then
all CPUs can enter dynticks-idle mode so that the entire system is
quiescent from an RCU viewpoint -- no RCU processing at all.
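
For reference, rcu_needs_cpu() amounts to roughly the following -- a
sketch using the per-CPU rcu_data/rcu_bh_data/nxtlist names from the
snippets above, not the verbatim kernel source.  The dynticks-idle
entry path consults it and keeps the scheduling-clock tick running as
long as it returns nonzero:

	int rcu_needs_cpu(int cpu)
	{
		/* Are any RCU callbacks still queued on this CPU? */
		return per_cpu(rcu_data, cpu).nxtlist != NULL ||
		       per_cpu(rcu_bh_data, cpu).nxtlist != NULL;
	}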

Or am I missing what you are getting at with your question?

							Thanx, Paul
