Date:	Mon, 28 Jul 2014 11:18:49 -0400
From:	Pranith Kumar <>
To:	"Paul E. McKenney" <>,
	Josh Triplett <>,
	Steven Rostedt <>,
	Mathieu Desnoyers <>,
	Lai Jiangshan <>,
	"open list:READ-COPY UPDATE..." <>
Subject: Re: [RFC PATCH 1/1] rcu: Use separate wait queues for leaders and followers

On Mon, Jul 28, 2014 at 10:58 AM, Pranith Kumar <> wrote:
> Commit fbce7497ee5a ("rcu: Parallelize and economize NOCB kthread wakeups")
> tries to reduce the wake up overhead by creating leader and follower nocb
> kthreads.
> One thing overlooked here is that all the kthreads wait on the same wait queue.
> When we try to wake up the leader threads on the wait queue, we also try to wake
> up the follower threads because of which there is still wake up overhead.
> This commit tries to avoid that by using separate wait queues for the leaders and
> followers.

This solution is still not the best we can do. All the followers still
wait on a single wait queue, so every wake_up on the follower wait
queue still tries to wake all of them.

To illustrate, let me take the original example in the above commit.
Say there are 4096 nocb kthreads and 64 leaders, each of which has 63
followers.
The grace period kthread will now try to wake up only the 64 leader
nocb kthreads, which is better than before, but each leader will still
try to wake up every follower on the shared follower wait queue. It
would be great if each leader had its own follower wait queue to wake
up, but I guess that is a stretch.

