Message-ID: <20090508150634.GC6788@linux.vnet.ibm.com>
Date:	Fri, 8 May 2009 08:06:34 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Christoph Lameter <cl@...ux.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Alok Kataria <akataria@...are.com>,
	"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	the arch/x86 maintainers <x86@...nel.org>,
	LKML <linux-kernel@...r.kernel.org>,
	"alan@...rguk.ukuu.org.uk" <alan@...rguk.ukuu.org.uk>
Subject: Re: [PATCH] x86: Reduce the default HZ value

On Fri, May 08, 2009 at 10:16:10AM -0400, Christoph Lameter wrote:
> On Fri, 8 May 2009, Paul E. McKenney wrote:
> 
> > > Can't you simply enter idle state after a grace period completes and
> > > finds no pending callbacks for the next period. And leave idle state at
> > > the next call_rcu()?
> >
> > If there were no RCU callbacks -globally- across all CPUs, yes.  But
> > the check at the end of rcu_irq_exit() is testing only on the current
> > CPU.  Checking across all CPUs is expensive and racy.
> >
> > So what happens instead is that there is rcu_needs_cpu(), which gates
> > entry into dynticks-idle mode.  This function returns 1 if there are
> > callbacks on the current CPU.  So, if no CPU has an RCU callback, then
> > all CPUs can enter dynticks-idle mode so that the entire system is
> > quiescent from an RCU viewpoint -- no RCU processing at all.
> 
> Did not follow RCU developments.  But wasn't there a time when RCU periods
> were processor-specific and a global RCU period was done when all the
> processors went through their RCU periods?

For non-realtime RCU implementations, after a given grace period starts,
once each CPU goes through a "quiescent state", then that grace period
can end.  For realtime (AKA "preemptable") RCU, the focus is on tasks
rather than CPUs, but the same general principle applies, give or take
some implementation details: after a given grace period starts, once
each task goes through a quiescent state, then that grace period can end.
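
For reference, the call_rcu() pattern under discussion looks roughly like
the sketch below; the callback is invoked only after such a grace period
has elapsed.  (Illustrative only -- struct foo and retire_foo() are
made-up names, not anything in the tree.)

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int data;
	struct rcu_head rcu;
};

/* Invoked after a grace period: no reader can still hold a reference. */
static void free_foo_rcu(struct rcu_head *head)
{
	struct foo *fp = container_of(head, struct foo, rcu);

	kfree(fp);
}

void retire_foo(struct foo *fp)
{
	/* Readers under rcu_read_lock() might still be using fp, so
	 * defer the kfree() until a grace period has elapsed. */
	call_rcu(&fp->rcu, free_foo_rcu);
}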

> Cpu cache hotness may not be relevant to RCU since RCU involves long time
> periods in which cachelines cool down.  Can the RCU callbacks all be done
> on processor 0 (or a so-designated processor)?  That would avoid
> disturbing the other processors.

This approach -might- be OK for a specially configured and protected HPC
machine.  But it is a non-starter for general-purpose machines.  For an
example of why, consider a denial-of-service attack that continually
changes routing tables: CPU 0 could saturate, start falling behind, and
eventually OOM the machine.

But if you would like to experiment with this, make call_rcu() be a
wrapper that causes an underlying call_rcu_cpu_0() to be executed on
CPU 0.  That won't get exactly the cache-warmth effects that you are
after, but it will let you see whether the machine would gracefully
handle various events that might dump large numbers of callbacks.
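
A rough sketch of one way to wire that up, assuming a hypothetical
call_rcu_cpu_0() built on smp_call_function_single() (illustrative only,
not a real patch -- the synchronous cross-CPU call is itself a disturbance
and is not legal from every context call_rcu() is invoked in):

#include <linux/rcupdate.h>
#include <linux/smp.h>

struct call_rcu_req {
	struct rcu_head *head;
	void (*func)(struct rcu_head *head);
};

/* Runs on CPU 0: enqueue the callback on CPU 0's callback lists. */
static void __call_rcu_on_cpu0(void *info)
{
	struct call_rcu_req *req = info;

	call_rcu(req->head, req->func);
}

/* Hypothetical wrapper: route every callback through CPU 0. */
void call_rcu_cpu_0(struct rcu_head *head,
		    void (*func)(struct rcu_head *head))
{
	struct call_rcu_req req = { .head = head, .func = func };

	/* wait=1: block until CPU 0 has enqueued the callback. */
	smp_call_function_single(0, __call_rcu_on_cpu0, &req, 1);
}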

							Thanx, Paul
