Message-ID: <CAJhHMCA_85kAxtL4TKGS0bvOD+OtPY5Fvi=O=RC6gXfE7JopFQ@mail.gmail.com>
Date: Mon, 28 Jul 2014 11:18:49 -0400
From: Pranith Kumar <bobby.prani@...il.com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Josh Triplett <josh@...htriplett.org>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <laijs@...fujitsu.com>,
"open list:READ-COPY UPDATE..." <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 1/1] rcu: Use separate wait queues for leaders and followers

On Mon, Jul 28, 2014 at 10:58 AM, Pranith Kumar <bobby.prani@...il.com> wrote:
> Commit fbce7497ee5a ("rcu: Parallelize and economize NOCB kthread wakeups")
> tries to reduce the wakeup overhead by creating leader and follower nocb
> kthreads.
>
> One thing overlooked here is that all the kthreads wait on the same wait queue.
> When we try to wake up the leader kthreads on that wait queue, we also end up
> waking the follower kthreads, so there is still wakeup overhead.
>
> This commit tries to avoid that by using separate wait queues for the leaders and
> followers.

This solution is still not the best we can do. All the followers still
wait on a single wait queue, and each wake_up() on that follower wait
queue tries to wake all of them.
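
To make that concrete, here is a toy model in plain C. This is not the
kernel's wait-queue code; the linked list and wake_all() below are just
stand-ins I made up for a wait queue and wake_up(), to show that a wake
on a shared queue visits every waiter parked on it, whether or not that
waiter has callbacks to process:

/*
 * Toy model only -- not the kernel's wait-queue implementation.
 * A wake on a shared queue has to walk every waiter parked on it.
 */
#include <stdio.h>
#include <stdlib.h>

#define NR_FOLLOWERS	(64 * 64)	/* 64 groups of 64, as in the example below */

struct waiter {
	struct waiter *next;
};

static struct waiter *follower_wq;	/* the single shared follower "wait queue" */

static void sleep_on_queue(void)
{
	struct waiter *w = malloc(sizeof(*w));

	w->next = follower_wq;
	follower_wq = w;
}

/* Stand-in for wake_up() on the shared queue: visit every waiter. */
static int wake_all(void)
{
	struct waiter *w;
	int visited = 0;

	for (w = follower_wq; w; w = w->next)
		visited++;		/* "try to wake" this follower */
	return visited;
}

int main(void)
{
	int i;

	for (i = 0; i < NR_FOLLOWERS; i++)
		sleep_on_queue();
	printf("waiters visited per wake_up: %d\n", wake_all());
	return 0;
}

Every leader that calls wake_up() on that shared queue pays the full
walk, even when only its own followers have callbacks queued.
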
To put numbers on it, take the original example from the commit above:
say there are 4096 nocb kthreads, organized as 64 leaders, each with 64
followers.
The grace-period kthread now has to wake only the 64 leader nocb
kthreads, which is better than before, but each leader's wake_up() on
the shared follower wait queue still tries to wake all the followers,
not just its own 64. It would be great if each leader had its own
follower wait queue to wake, but I guess that is a stretch.
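
Here is a rough userspace sketch of what I mean, using pthreads rather
than the kernel wait-queue API; the struct and field names below
(leader_group, follower_wq, have_work) are made up for the sketch, not
taken from tree_plugin.h, and the counts are scaled down. Each leader
gets its own condition variable that only its followers sleep on, so a
leader's wakeup never touches another leader's followers:

/*
 * Userspace sketch only: one "follower wait queue" (condition variable)
 * per leader, shared by just that leader's followers.  A leader's
 * wakeup then reaches only its own group.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NR_LEADERS	4		/* stand-in for 64 */
#define NR_FOLLOWERS	4		/* per leader, stand-in for 64 */

struct leader_group {
	pthread_mutex_t lock;
	pthread_cond_t follower_wq;	/* this leader's followers only */
	int have_work;			/* "callbacks are ready" */
	int done;
	int wakeups;			/* follower wakeups in this group */
};

static struct leader_group groups[NR_LEADERS];

static void *follower(void *arg)
{
	struct leader_group *g = arg;

	pthread_mutex_lock(&g->lock);
	while (!g->done) {
		pthread_cond_wait(&g->follower_wq, &g->lock);
		if (g->done)
			break;
		g->wakeups++;		/* only this group pays for the wake */
		g->have_work = 0;	/* "process callbacks" */
	}
	pthread_mutex_unlock(&g->lock);
	return NULL;
}

int main(void)
{
	pthread_t tids[NR_LEADERS][NR_FOLLOWERS];
	int total = 0, l, f;

	for (l = 0; l < NR_LEADERS; l++) {
		pthread_mutex_init(&groups[l].lock, NULL);
		pthread_cond_init(&groups[l].follower_wq, NULL);
		for (f = 0; f < NR_FOLLOWERS; f++)
			pthread_create(&tids[l][f], NULL, follower,
				       &groups[l]);
	}
	sleep(1);			/* crude: let all followers block */

	/* Leader 0 has callbacks for its group: wake only its own queue. */
	pthread_mutex_lock(&groups[0].lock);
	groups[0].have_work = 1;
	pthread_cond_broadcast(&groups[0].follower_wq);
	pthread_mutex_unlock(&groups[0].lock);
	sleep(1);

	for (l = 0; l < NR_LEADERS; l++) {	/* shut everything down */
		pthread_mutex_lock(&groups[l].lock);
		groups[l].done = 1;
		pthread_cond_broadcast(&groups[l].follower_wq);
		pthread_mutex_unlock(&groups[l].lock);
	}
	for (l = 0; l < NR_LEADERS; l++)
		for (f = 0; f < NR_FOLLOWERS; f++)
			pthread_join(tids[l][f], NULL);

	for (l = 0; l < NR_LEADERS; l++)
		total += groups[l].wakeups;
	printf("followers woken: %d of %d\n",
	       total, NR_LEADERS * NR_FOLLOWERS);
	return 0;
}

Here main() stands in for a single leader noticing callbacks for its
group; the leaders' own wait queue (and everything else the real code
has to worry about) is left out.
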
Thoughts?
--
Pranith