Message-ID: <20081012224629.GA7353@linux.vnet.ibm.com>
Date:	Sun, 12 Oct 2008 15:46:29 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Manfred Spraul <manfred@...orfullife.com>
Cc:	linux-kernel@...r.kernel.org, cl@...ux-foundation.org,
	mingo@...e.hu, akpm@...ux-foundation.org, dipankar@...ibm.com,
	josht@...ux.vnet.ibm.com, schamp@....com, niv@...ibm.com,
	dvhltc@...ibm.com, ego@...ibm.com, laijs@...fujitsu.com,
	rostedt@...dmis.org, peterz@...radead.org, penberg@...helsinki.fi,
	andi@...stfloor.org, tglx@...utronix.de
Subject: Re: [PATCH, RFC] v7 scalable classic RCU implementation

On Sun, Oct 12, 2008 at 05:52:56PM +0200, Manfred Spraul wrote:
> Paul E. McKenney wrote:
>> +/*
>> + * If the specified CPU is offline, tell the caller that it is in
>> + * a quiescent state.  Otherwise, whack it with a reschedule IPI.
>> + * Grace periods can end up waiting on an offline CPU when that
>> + * CPU is in the process of coming online -- it will be added to the
>> + * rcu_node bitmasks before it actually makes it online.  Because this
>> + * race is quite rare, we check for it after detecting that the grace
>> + * period has been delayed rather than checking each and every CPU
>> + * each and every time we start a new grace period.
>> + */
>
> What about using CPU_DYING and CPU_STARTING?
>
> Then this race wouldn't exist anymore.

Because I don't want to tie RCU too tightly to the details of the
online/offline implementation.  It is too easy for someone to make a
"simple" change and break things, especially given that the online/offline
code still seems to be in a bit of flux.

So I might well use CPU_DYING and CPU_STARTING, but I would still keep
the check for offlined CPUs in the force_quiescent_state() processing.
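
Roughly, the per-CPU check I have in mind looks like the sketch below.
The helper and field names here are illustrative assumptions, not
necessarily what the patch itself uses:

/*
 * Sketch only: report a quiescent state on behalf of an offline CPU,
 * otherwise nudge the laggard online CPU with a reschedule IPI.
 */
static int rcu_implicit_offline_qs(struct rcu_data *rdp)
{
	if (cpu_is_offline(rdp->cpu))
		return 1;	/* Offline CPU counts as quiescent. */

	/* Online but no quiescent state yet: send the reschedule IPI. */
	if (rdp->cpu != smp_processor_id())
		smp_send_reschedule(rdp->cpu);
	else
		set_need_resched();	/* Don't IPI ourselves. */
	return 0;
}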

>> +static void force_quiescent_state(struct rcu_state *rsp, int relaxed)
>> +{
>> + [snip]
>> +       case RCU_FORCE_QS:
>> +
>> +               /* Check dyntick-idle state, send IPI to laggards. */
>> +               if (rcu_process_dyntick(rsp, dyntick_recall_completed(rsp),
>> +                                       rcu_implicit_dynticks_qs))
>> +                       goto unlock_ret;
>> +
>> +               /* Leave state in case more forcing is required. */
>> +
>> +               break;
>
> Hmm - your code must loop multiple times over the cpus.
> I've used a different approach: more forcing is only required for a nohz 
> cpu when it was caught inside a long-running interrupt.
> Thus I've added a '->kick_poller' flag: rcu_irq_exit() reports back when 
> the long-running interrupt completes.  No more than one loop over the 
> outstanding cpus is ever required.
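
If I understand the ->kick_poller handshake, it is roughly as follows;
every name below other than ->kick_poller and rcu_irq_exit() is my
guess rather than a quote from your patch:

void rcu_irq_exit(void)
{
	struct rcu_dynticks *rdtp = &__get_cpu_var(rcu_dynticks);

	if (--rdtp->dynticks_nesting != 0)
		return;			/* Still in a nested interrupt. */
	if (rdtp->kick_poller) {
		rdtp->kick_poller = 0;
		/* The long-running interrupt is done; report the QS. */
		rcu_report_dyntick_qs(rdtp);
	}
}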

Do you send a reschedule IPI to CPUs that are not in dyntick-idle mode,
but that have failed to pass through a quiescent state?

In my case, additional forcing passes are required only for a nohz CPU
in a long-running interrupt (as with your approach), for sending the
aforementioned reschedule IPI, and for checking for offlined CPUs as
noted above.
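
In other words, the per-CPU decision on each forcing pass is roughly
the following; the helper names are placeholders of mine, not the
patch's:

/* Does this as-yet-unresponsive CPU need another forcing pass? */
static int rcu_needs_more_forcing(struct rcu_data *rdp)
{
	if (rcu_in_dyntick_idle(rdp))
		return 0;	/* QS recorded on the idle CPU's behalf. */
	if (rcu_in_long_running_irq(rdp))
		return 1;	/* Must re-examine on a later pass. */
	return !rcu_implicit_offline_qs(rdp);	/* Offline check or IPI. */
}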

							Thanx, Paul
