Message-ID: <20110508124428.GJ2641@linux.vnet.ibm.com>
Date: Sun, 8 May 2011 05:44:28 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Alex Bligh <alex@...x.org.uk>
Cc: Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org
Subject: Re: Scalability of interface creation and deletion
On Sun, May 08, 2011 at 10:35:02AM +0100, Alex Bligh wrote:
> Eric,
>
> --On 8 May 2011 09:12:22 +0200 Eric Dumazet <eric.dumazet@...il.com> wrote:
>
> >By the way, if I change HZ from 1000 to 100, I now get a ten times
> >slower result:
>
> I repeated that test here. With HZ set to 1000 I got a total time of
> 4.022 seconds to remove 100 interfaces, of which:
>
> Total 3.03808 Usage 199 Average 0.01527 elsewhere
> Total 0.93992 Usage 200 Average 0.00470 synchronizing
>
> as opposed to a total of 27.917 seconds with HZ set to 100, of which:
>
> Total 18.98515 Usage 199 Average 0.09540 elsewhere
> Total 8.77581 Usage 200 Average 0.04388 synchronizing
>
> Not quite a factor of 10 improvement, but nearly.
>
> I have CONFIG_RCU_FAST_NO_HZ=y
>
> I suspect this may just mean an RCU reader holds rcu_read_lock() for
> a jiffies-related time, though I'm having difficulty seeing what that
> might be on a system where the net is essentially idle.
OK, let's break it out...
4.022 seconds for 100 interfaces means about 40 milliseconds per interface.
My guess is that you have CONFIG_NO_HZ=y, which means that RCU needs to
figure out that the various CPUs are in dyntick-idle state, which takes a
minimum of 6 jiffies.  It can take longer if a given CPU happens to be in
an interrupt handler when RCU checks, so call it 9 jiffies.  If you are
deleting the interfaces synchronously, one at a time, each deletion will
also likely have to wait for a prior grace period to complete (due to
background activity), roughly doubling that to about 18 jiffies, which is
18 milliseconds at HZ=1000.  So 40 milliseconds sounds a bit high, but
perhaps not impossible.
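
For reference, here is a minimal, illustrative sketch (an out-of-tree
module; the name "gp_timer" and the loop count are made up, and it is
not anything from the netdev path) that just times a few
synchronize_rcu() calls in jiffies, which is the sort of cost an
interface deletion pays each time it waits for a grace period
synchronously:

/*
 * gp_timer: time a handful of synchronize_rcu() calls in jiffies.
 * Purely illustrative; build as an out-of-tree module and watch dmesg.
 */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/jiffies.h>
#include <linux/rcupdate.h>

static int __init gp_timer_init(void)
{
	int i;

	for (i = 0; i < 10; i++) {
		unsigned long start = jiffies;
		unsigned long delta;

		/* Block until a full RCU grace period has elapsed. */
		synchronize_rcu();
		delta = jiffies - start;
		pr_info("gp_timer: grace period %d: %lu jiffies (%u ms)\n",
			i, delta, jiffies_to_msecs(delta));
	}
	return 0;
}

static void __exit gp_timer_exit(void)
{
}

module_init(gp_timer_init);
module_exit(gp_timer_exit);
MODULE_LICENSE("GPL");

On a mostly idle CONFIG_NO_HZ=y system I would expect each line to report
on the order of the 6-9 jiffies discussed above, which is a few
milliseconds at HZ=1000 but a few tens of milliseconds at HZ=100.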
Thanx, Paul
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html