Date:	Sun, 17 May 2009 15:08:35 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Evgeniy Polyakov <zbr@...emap.net>
Cc:	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	netfilter-devel@...r.kernel.org, mingo@...e.hu,
	akpm@...ux-foundation.org, torvalds@...ux-foundation.org,
	davem@...emloft.net, dada1@...mosbay.com,
	jeff.chua.linux@...il.com, paulus@...ba.org, laijs@...fujitsu.com,
	jengelh@...ozas.de, r000n@...0n.net, benh@...nel.crashing.org,
	mathieu.desnoyers@...ymtl.ca
Subject: Re: [PATCH RFC] v5 expedited "big hammer" RCU grace periods

On Mon, May 18, 2009 at 12:02:23AM +0400, Evgeniy Polyakov wrote:
> Hi.

Hello, Evgeniy!

> On Sun, May 17, 2009 at 12:11:41PM -0700, Paul E. McKenney (paulmck@...ux.vnet.ibm.com) wrote:
> > Fifth cut of "big hammer" expedited RCU grace periods.  This uses per-CPU
> > kthreads that are scheduled in parallel by a call to smp_call_function()
> > by yet another kthread.  The synchronize_sched(), synchronize_rcu(),
> > and synchronize_rcu_bh() primitives wake this kthread up and then wait for
> > it to force the grace period.
> 
> I'm curious, but doesn't the fact that the registered 'barrier'
> callback has been invoked mean that the grace period has completed?
> I.e., why bother with rescheduling, waiting for the thread to
> complete, and so on, when all we care about is that the 'barrier'
> callback has been invoked, and thus that all previous ones have
> completed?
> Or is this done just for simplicity, since all the rescheduling
> machinery already manages the RCU bits correctly, so you do not want
> to put it directly into the 'barrier' callback?

It is a short-term expedient course of action.  Longer term, I will drop
rcuclassic in favor of rcutree, and then merge rcupreempt into rcutree.
I will then add machinery to rcutree to handle expedited grace periods
(somewhat) more naturally.  Trying to expedite three very different RCU
implementations seems a bit silly, hence the current off-on-the-side
approach.
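
To make the quoted mechanism concrete: the only real obligation of an
expedited rcu_sched grace period is to force every online CPU through a
context switch.  A deliberately naive, sequential sketch of just that
obligation might look as follows (illustrative code, not the posted
patch, which gets the same effect in parallel by having
smp_call_function() wake per-CPU kthreads):

	/* Needs <linux/cpu.h>, <linux/cpumask.h>, and <linux/sched.h>. */
	static void toy_synchronize_sched_expedited(void)
	{
		int cpu;

		get_online_cpus();
		for_each_online_cpu(cpu) {
			/*
			 * Migrate onto the target CPU; the context switch
			 * that runs us there is a quiescent state for
			 * rcu_sched on that CPU.
			 */
			set_cpus_allowed_ptr(current, cpumask_of(cpu));
		}
		/*
		 * Toy simplification: a real version would save and
		 * restore the caller's original affinity mask.
		 */
		set_cpus_allowed_ptr(current, cpu_online_mask);
		put_online_cpus();
	}

Visiting the CPUs one at a time like this is of course slow, which is
why the patch instead uses per-CPU kthreads to push all the CPUs
through their quiescent states at the same time.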

But even then I will avoid relying on a "barrier" callback, or, indeed,
any sort of callback, because we don't want expedited grace periods to
have to wait on invocation of earlier RCU callbacks.  There will thus
not be a call_rcu_expedited(), at least not unless someone comes up with
a -really- compelling reason why.
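
For reference, the callback-based wait Evgeniy describes is roughly the
pattern that the stock (non-expedited) synchronize_rcu() is built from,
sketched below with illustrative names; the comment spells out why it
necessarily waits behind earlier callbacks:

	/*
	 * Queue a callback whose only job is to signal a completion,
	 * then block on it.  Callbacks on a given CPU are invoked in
	 * order, so this wait cannot finish until every callback queued
	 * earlier on that CPU has also been invoked, which is exactly
	 * the latency an expedited primitive is trying to avoid.
	 * (Needs <linux/completion.h> and <linux/rcupdate.h>.)
	 */
	struct rcu_synchronize {
		struct rcu_head head;
		struct completion completion;
	};

	static void wakeme_after_rcu(struct rcu_head *head)
	{
		struct rcu_synchronize *rcu =
			container_of(head, struct rcu_synchronize, head);

		complete(&rcu->completion);
	}

	static void wait_via_barrier_callback(void)
	{
		struct rcu_synchronize rcu;

		init_completion(&rcu.completion);
		call_rcu(&rcu.head, wakeme_after_rcu);
		wait_for_completion(&rcu.completion);
	}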

But the exercise of going through several possible implementations was
quite useful, as I learned a number of things that will improve the
eventual rcutree implementation.  Like the fact that expedited grace
periods don't want to be waiting on invocation of prior callbacks.  ;-)

And rcutiny is, as always, a special case.  Here is the implementation
of synchronize_rcu_expedited() in rcutiny:

	void synchronize_rcu_expedited(void)
	{
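		/*
		 * Nothing to do: rcutiny targets !PREEMPT uniprocessor
		 * kernels, so if this code is running, no RCU read-side
		 * critical section can be in progress and the grace
		 * period has effectively already elapsed.
		 */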
	}

Or even:

	#define synchronize_rcu_expedited synchronize_rcu

;-)

							Thanx, Paul
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
