Message-ID: <20090624064236.GE31415@kernel.dk>
Date: Wed, 24 Jun 2009 08:42:37 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: Jesper Dangaard Brouer <hawk@...x.dk>
Cc: "David S. Miller" <davem@...emloft.net>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
dougthompson@...ssion.com, bluesmoke-devel@...ts.sourceforge.net,
Patrick McHardy <kaber@...sh.net>,
christine.caulfield@...glemail.com, Trond.Myklebust@...app.com,
linux-wireless@...r.kernel.org, johannes@...solutions.net,
yoshfuji@...ux-ipv6.org, shemminger@...ux-foundation.org,
linux-nfs@...r.kernel.org, bfields@...ldses.org, neilb@...e.de,
linux-ext4@...r.kernel.org, tytso@....edu, adilger@....com,
netfilter-devel@...r.kernel.org
Subject: Re: [PATCH 09/10] cfq-iosched: Uses its own open-coded rcu_barrier.
On Tue, Jun 23 2009, Jesper Dangaard Brouer wrote:
> This module, cfq-iosched, has discovered the value of waiting for
> call_rcu() completion, but it has its own open-coded implementation
> of rcu_barrier(), which I don't think is 'strong' enough.
>
> This patch only leaves a comment for the maintainers to consider.
We need a stronger primitive than rcu_barrier(), since we also need to
wait for the RCU calls to even be scheduled. So I don't think the below
can be improved; it's already fine.
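
Just to spell out the construct being discussed, here is a minimal
sketch (simplified, made-up names such as cic_count, cic_free_rcu and
cic_exit; not the actual cfq-iosched code): every object released via
call_rcu() decrements a counter in its RCU callback, and module exit
waits on a completion that fires once the counter hits zero.

#include <linux/atomic.h>
#include <linux/completion.h>
#include <linux/init.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct cic {
	struct rcu_head rcu_head;
	/* ... payload ... */
};

static struct kmem_cache *cic_pool;
static atomic_t cic_count = ATOMIC_INIT(0);
static DECLARE_COMPLETION(all_gone);
static bool exiting;

/* RCU callback: free the object, signal exit once nothing is left. */
static void cic_free_rcu(struct rcu_head *head)
{
	kmem_cache_free(cic_pool, container_of(head, struct cic, rcu_head));

	/* the atomic RMW is a full barrier, ordering the 'exiting' load */
	if (atomic_dec_and_test(&cic_count) && exiting)
		complete(&all_gone);
}

static void __exit cic_exit(void)
{
	exiting = true;
	smp_mb();	/* make 'exiting' visible before reading the count */

	/*
	 * rcu_barrier() would only flush callbacks already queued via
	 * call_rcu(); waiting for the counter to drain also covers
	 * objects whose call_rcu() has not been issued yet.
	 */
	if (atomic_read(&cic_count))
		wait_for_completion(&all_gone);

	kmem_cache_destroy(cic_pool);
}
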
>
> Signed-off-by: Jesper Dangaard Brouer <hawk@...x.dk>
> ---
>
> block/cfq-iosched.c | 6 ++++++
> 1 files changed, 6 insertions(+), 0 deletions(-)
>
> diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
> index 833ec18..c15555b 100644
> --- a/block/cfq-iosched.c
> +++ b/block/cfq-iosched.c
> @@ -2657,6 +2657,12 @@ static void __exit cfq_exit(void)
> /*
> * this also protects us from entering cfq_slab_kill() with
> * pending RCU callbacks
> + *
> + * hawk@...x.dk 2009-06-18: Maintainer, please consider using
> + * rcu_barrier() instead of this open-coded wait-for-completion
> + * implementation. I think it provides a better guarantee that
> + * all CPUs are finished, although elv_ioc_count_read() does
> + * consider all CPUs.
> */
> if (elv_ioc_count_read(ioc_count))
> wait_for_completion(&all_gone);
>
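
For contrast, the rcu_barrier() variant suggested in the comment would
simply be (again a hypothetical sketch, not a proposed change):

static void __exit cic_exit(void)
{
	/*
	 * Waits only for RCU callbacks that are already queued at this
	 * point; it says nothing about objects whose call_rcu() is
	 * still to come, which is why the counter + completion is the
	 * stronger construct for this exit path.
	 */
	rcu_barrier();
}
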
--
Jens Axboe
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/