Message-ID: <1264603952.31321.469.camel@gandalf.stny.rr.com>
Date:	Wed, 27 Jan 2010 09:52:32 -0500
From:	Steven Rostedt <rostedt@...dmis.org>
To:	paulmck@...ux.vnet.ibm.com
Cc:	linux-kernel@...r.kernel.org, mingo@...e.hu, laijs@...fujitsu.com,
	dipankar@...ibm.com, akpm@...ux-foundation.org,
	mathieu.desnoyers@...ymtl.ca, josh@...htriplett.org,
	dvhltc@...ibm.com, niv@...ibm.com, tglx@...utronix.de,
	peterz@...radead.org, Valdis.Kletnieks@...edu, dhowells@...hat.com
Subject: Re: [PATCH RFC tip/core/rcu] accelerate grace period if last
 non-dynticked CPU

On Wed, 2010-01-27 at 06:11 -0800, Paul E. McKenney wrote:
> On Mon, Jan 25, 2010 at 10:12:03AM -0500, Steven Rostedt wrote:
> > On Sun, 2010-01-24 at 19:48 -0800, Paul E. McKenney wrote:
> > 
> > > +/*
> > > + * Check to see if any future RCU-related work will need to be done
> > > + * by the current CPU, even if none need be done immediately, returning
> > > + * 1 if so.  This function is part of the RCU implementation; it is -not-
> > > + * an exported member of the RCU API.
> > > + *
> > > + * Because we are not supporting preemptible RCU, attempt to accelerate
> > > + * any current grace periods so that RCU no longer needs this CPU, but
> > > + * only if all other CPUs are already in dynticks-idle mode.  This will
> > > + * allow the CPU cores to be powered down immediately, as opposed to after
> > > + * waiting many milliseconds for grace periods to elapse.
> > > + */
> > > +int rcu_needs_cpu(int cpu)
> > > +{
> > > +	int c = 1;
> > > +	int i;
> > > +	int thatcpu;
> > > +
> > > +	/* Don't bother unless we are the last non-dyntick-idle CPU. */
> > > +	for_each_cpu(thatcpu, nohz_cpu_mask)
> > > +		if (thatcpu != cpu)
> > > +			return rcu_needs_cpu_quick_check(cpu);
> > > +
> > > +	/* Try to push remaining RCU-sched and RCU-bh callbacks through. */
> > > +	for (i = 0; i < RCU_NEEDS_CPU_FLUSHES && c; i++) {
> > > +		c = 0;
> > > +		if (per_cpu(rcu_sched_data, cpu).nxtlist) {
> > > +			c = 1;
> > > +			rcu_sched_qs(cpu);
> > > +			force_quiescent_state(&rcu_sched_state, 0);
> > > +			__rcu_process_callbacks(&rcu_sched_state,
> > > +						&per_cpu(rcu_sched_data, cpu));
> > > +		}
> > > +		if (per_cpu(rcu_bh_data, cpu).nxtlist) {
> > > +			c = 1;
> > > +			rcu_bh_qs(cpu);
> > > +			force_quiescent_state(&rcu_bh_state, 0);
> > > +			__rcu_process_callbacks(&rcu_bh_state,
> > > +						&per_cpu(rcu_bh_data, cpu));
> > > +		}
> > > +	}
> > > +
> > > +	/* If RCU callbacks are still pending, RCU still needs this CPU. */
> > > +	return c;
> > 
> > What happens if the last loop pushes out all callbacks? Then we would be
> > returning 1 when we could really be returning 0. Wouldn't a better
> > answer be:
> > 
> > 	return per_cpu(rcu_sched_data, cpu).nxtlist ||
> > 		per_cpu(rcu_bh_data, cpu).nxtlist;
> 
> Good point!!!
> 
> Or I can move the assignment to "c" to the end of each branch of the
> "if" statement, and do something like the following:
> 
> 	c = !!per_cpu(rcu_sched_data, cpu).nxtlist;

Hmm, that may just add obfuscation for those looking at the code.
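
I.e., a reader has to stop and work out that

	c = !!per_cpu(rcu_sched_data, cpu).nxtlist;

means "set c to 1 if the list is non-empty, 0 otherwise", whereas an
explicit

	if (per_cpu(rcu_sched_data, cpu).nxtlist)
		c = 1;

says the same thing directly.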

> 
> But either way, you are right, it does not make sense to go to all the
> trouble of forcing a grace period and then failing to take advantage
> of it.

Yeah, either implementation is fine with me, as long as it works and
takes advantage of all forced grace periods.
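
E.g., the tail of rcu_needs_cpu() could end up looking something like
this (untested sketch, just to illustrate the idea; all names are from
your patch):

	/* Try to push remaining RCU-sched and RCU-bh callbacks through. */
	for (i = 0; i < RCU_NEEDS_CPU_FLUSHES && c; i++) {
		c = 0;
		if (per_cpu(rcu_sched_data, cpu).nxtlist) {
			rcu_sched_qs(cpu);
			force_quiescent_state(&rcu_sched_state, 0);
			__rcu_process_callbacks(&rcu_sched_state,
						&per_cpu(rcu_sched_data, cpu));
			/*
			 * Re-check after processing, so the result
			 * reflects what the flush actually achieved.
			 */
			if (per_cpu(rcu_sched_data, cpu).nxtlist)
				c = 1;
		}
		if (per_cpu(rcu_bh_data, cpu).nxtlist) {
			rcu_bh_qs(cpu);
			force_quiescent_state(&rcu_bh_state, 0);
			__rcu_process_callbacks(&rcu_bh_state,
						&per_cpu(rcu_bh_data, cpu));
			if (per_cpu(rcu_bh_data, cpu).nxtlist)
				c = 1;
		}
	}

	/* If RCU callbacks are still pending, RCU still needs this CPU. */
	return c;

That way the loop exits as soon as a pass leaves both lists empty, and
the return value always matches the state of the lists after the last
flush attempt.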

-- Steve


