Message-Id: <200801121833.09508.ak@suse.de>
Date: Sat, 12 Jan 2008 18:33:09 +0100
From: Andi Kleen <ak@...e.de>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Benjamin LaHaise <bcrl@...ck.org>, dipankar@...ibm.com,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
rostedt <rostedt@...dmis.org>, Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH/RFC] synchronize_rcu(): high latency on idle system
On Saturday 12 January 2008 10:23:11 Peter Zijlstra wrote:
>
> On Fri, 2008-01-11 at 20:26 -0500, Benjamin LaHaise wrote:
> > Hello folks,
> >
> > I'd like to put the patch below out for comments to see if folks think the
> > approach is a valid fix to reduce the latency of synchronize_rcu(). The
> > motivation is that an otherwise idle system takes about 3 ticks per network
> > interface in unregister_netdev() due to multiple calls to synchronize_rcu(),
> > which adds up to quite a few seconds for tearing down thousands of
> > interfaces. By flushing pending rcu callbacks in the idle loop, the system
> > makes progress hundreds of times faster. If this is indeed a sane thing to do,
> > it probably needs to be done for architectures other than x86. And yes, the
> > network stack shouldn't call synchronize_rcu() quite so much, but fixing that
> > is a little more involved.
>
> So, instead of only relying on the tick to drive the RCU state machine,
> you add the idle loop to it. This seems to make sense, especially because nohz
> is held off until rcu is idle too.
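
(For reference, a minimal sketch of the idea Benjamin describes above --
not his actual patch. rcu_pending() and rcu_check_callbacks() are the
existing per-CPU hooks the timer tick already drives; the helper name and
its call site in the idle loop are made up for illustration:)

	#include <linux/rcupdate.h>
	#include <linux/smp.h>

	/*
	 * Hypothetical helper, called from the arch idle loop before the
	 * CPU halts: if RCU is waiting on this CPU, do now what the next
	 * tick would otherwise do (note a quiescent state and kick the
	 * RCU softirq), rather than letting synchronize_rcu() wait for
	 * ticks on an otherwise idle machine.
	 */
	static void rcu_idle_flush(void)
	{
		int cpu = smp_processor_id();

		if (rcu_pending(cpu))
			rcu_check_callbacks(cpu, 0);
	}
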
For NOHZ I agree it would probably be better to just force a quiescent
cycle than to schedule a one-jiffy tick as is currently done.
For non-NOHZ I'm not so sure.
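
(Concretely, and simplified: today the nohz code keeps a 1-jiffy tick
alive whenever RCU still needs the CPU; the alternative would be to push
the CPU through its quiescent state on the spot.
rcu_force_quiescent_cycle() below is a made-up name for whatever would do
that:)

	/* current behaviour in tick_nohz_stop_sched_tick(), simplified:
	 * keep the tick for one more jiffy so the grace period can end */
	if (rcu_needs_cpu(cpu))
		delta_jiffies = 1;

	/* suggested instead: report the quiescent state / run the pending
	 * callbacks here, then stop the tick completely */
	if (rcu_needs_cpu(cpu))
		rcu_force_quiescent_cycle(cpu);	/* hypothetical */
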
-Andi