Message-ID: <20110508154854.GT2641@linux.vnet.ibm.com>
Date: Sun, 8 May 2011 08:48:54 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Alex Bligh <alex@...x.org.uk>
Cc: Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org
Subject: Re: Scalability of interface creation and deletion
On Sun, May 08, 2011 at 04:17:42PM +0100, Alex Bligh wrote:
> Paul,
>
> >>No, I waited a few minutes after boot for the system to stabilize, and
> >>all CPUs were definitely online.
> >>
> >>The patch to the kernel I am running is below.
> >
> >OK, interesting...
> >
> >My guess is that you need to be using ktime_get_ts(). Isn't ktime_get()
> >subject to various sorts of adjustment?
>
> It's Eric's code, not mine, but:
>
> kernel/time/timekeeping.c suggests they do the same thing
> (adjust xtime by wall_to_monotonic); it's just that one returns
> a struct timespec and the other returns a ktime_t.
>
> >>>> There is nothing much going on these systems (idle, no other users,
> >>>> just normal system daemons).
> >>>
> >>> And normal system daemons might cause this, right?
> >>
> >>Yes. Everything is normal, except I did
> >>service udev stop
> >>unshare -n bash
> >>which together stop the system running interface scripts when
> >>interfaces are created (as upstart and upstart-udev-bridge are
> >>now integrated, you can't kill upstart, so you have to rely on
> >>unshare -n to stop the events being propagated). That's just
> >>to avoid measuring the time it takes to execute the scripts.
> >
> >OK, so you really could be seeing grace periods started by these system
> >daemons.
>
> In 50% of 200 calls? That seems pretty unlikely. I think it's more
> likely to be the 6 jiffies per call to ensure cpus are idle,
> plus the 3 calls per interface destroy.
>
> If 6 jiffies per call to ensure cpus are idle is a fact of life,
> then the question goes back to why interface removal waits
> synchronously for rcu readers to finish, as opposed to
> doing the update step synchronously and then doing the reclaim
> step (freeing the memory) afterwards using call_rcu.
This would speed things up considerably, assuming that there is no
other reason to block for an RCU grace period.
Thanx, Paul