Message-ID: <alpine.DEB.1.10.0904221700030.32682@qirst.com>
Date: Wed, 22 Apr 2009 17:09:28 -0400 (EDT)
From: Christoph Lameter <cl@...ux.com>
To: Eric Dumazet <dada1@...mosbay.com>
cc: David Miller <davem@...emloft.net>,
Michael Chan <mchan@...adcom.com>,
Ben Hutchings <bhutchings@...arflare.com>,
netdev@...r.kernel.org
Subject: Re: udp ping pong measurements from 2.6.22 to .30 with various cpu
affinities
On Wed, 22 Apr 2009, Eric Dumazet wrote:
> Check /proc/cpuinfo, and check that it doesn't change between kernel versions.
Hmmm... it does not change, since it depends on the way the machine is
configured by the firmware.
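
For what it's worth, one way to verify that the reported topology is
identical under both kernels is to dump the sysfs cache hierarchy and diff
the output between boots. A minimal sketch (the sysfs paths are the standard
per-cpu cache attributes; the output format is an arbitrary choice, not
anything from the test setup):

/* dump-cache-topo.c: print per-cpu cache level/type/sharing from sysfs
 * so the output can be diffed between boots or kernel versions.
 * Sketch only: minimal error handling, stops at the first cpu that
 * exposes no cache information. */
#include <stdio.h>
#include <string.h>

static int read_line(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (!fgets(buf, len, f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	buf[strcspn(buf, "\n")] = '\0';
	return 0;
}

int main(void)
{
	char path[256], level[16], type[32], shared[128];
	int cpu, idx;

	for (cpu = 0; ; cpu++) {
		int found = 0;

		for (idx = 0; ; idx++) {
			snprintf(path, sizeof(path),
				 "/sys/devices/system/cpu/cpu%d/cache/index%d/level",
				 cpu, idx);
			if (read_line(path, level, sizeof(level)))
				break;
			found = 1;
			snprintf(path, sizeof(path),
				 "/sys/devices/system/cpu/cpu%d/cache/index%d/type",
				 cpu, idx);
			read_line(path, type, sizeof(type));
			snprintf(path, sizeof(path),
				 "/sys/devices/system/cpu/cpu%d/cache/index%d/shared_cpu_list",
				 cpu, idx);
			read_line(path, shared, sizeof(shared));
			printf("cpu%d L%s %-12s shared_with=%s\n",
			       cpu, level, type, shared);
		}
		if (!found)
			break;	/* no cache info for this cpu -> done */
	}
	return 0;
}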
> > Results follow (a nice diagram is available from
> > http://gentwo.org/results/udpping-tests-2.pdf)
>
> Nice graphs, but lack of documentation of test conditions.
What would you like to know?
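
If it helps, the measurement is the usual udp ping pong: a small UDP packet
bounces between two processes, each pinned with sched_setaffinity(), and the
average round trip time is taken with clock_gettime(). A minimal sketch of
such a client (the port, payload size and iteration count below are
placeholders rather than the exact parameters behind the graphs):

/* udpping-client.c: pinned UDP ping-pong client, reports average RTT.
 * Sketch only: needs an echoing peer on the other end, no warmup phase,
 * minimal error handling. Link with -lrt on older glibc for clock_gettime().
 * Usage: ./udpping-client <cpu> <server-ip> */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define PORT   9999	/* placeholder */
#define MSGSZ  64	/* placeholder payload size */
#define ITERS  100000	/* placeholder iteration count */

int main(int argc, char **argv)
{
	cpu_set_t set;
	struct sockaddr_in peer;
	char buf[MSGSZ];
	struct timespec t0, t1;
	double total_us = 0;
	int s, i, cpu;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <cpu> <server-ip>\n", argv[0]);
		return 1;
	}
	cpu = atoi(argv[1]);

	/* pin this process to the requested cpu */
	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}

	s = socket(AF_INET, SOCK_DGRAM, 0);
	memset(&peer, 0, sizeof(peer));
	peer.sin_family = AF_INET;
	peer.sin_port = htons(PORT);
	inet_pton(AF_INET, argv[2], &peer.sin_addr);
	connect(s, (struct sockaddr *)&peer, sizeof(peer));

	memset(buf, 0x55, sizeof(buf));
	for (i = 0; i < ITERS; i++) {
		clock_gettime(CLOCK_MONOTONIC, &t0);
		send(s, buf, sizeof(buf), 0);
		recv(s, buf, sizeof(buf), 0);	/* wait for the echo */
		clock_gettime(CLOCK_MONOTONIC, &t1);
		total_us += (t1.tv_sec - t0.tv_sec) * 1e6 +
			    (t1.tv_nsec - t0.tv_nsec) / 1e3;
	}
	printf("cpu %d: avg rtt %.2f usec over %d iterations\n",
	       cpu, total_us / ITERS, ITERS);
	close(s);
	return 0;
}

The server side is just a pinned recvfrom()/sendto() echo loop; varying
which cpu each end is pinned to is what the different affinity cases in the
graphs boil down to.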
> > Observations:
> > - Pinning to the same cpu yields almost 8 usecs vs. another cpu sharing
> > the same L2.
> > - Strangely, the cpu not sharing the L2 is better than a cpu with the same
> > L2.
>
> When I see strange results like that, I ask myself:
> Is the problem located at the looked-at system, or at the observer?
And that means?
> We already pointed out that it was probably scheduling, since ICMP pings don't
> use processes and show no regression here. Patching the kernel to implement
> udpping at the softirq level should be quite easy if you really want to check
> the UDP stack.
>
> Recent network improvements focused on scalability more than latency
> (multiple flows, not a single flow!).
Latencies have priority here. Multi-flows are secondary.
So I guess this means that you are okay with the network stack's latency
creep? Even if it is only 1.5 usec: in practice you tune the NIC to perform
much better, and this is a latency increase likely occurring on every
packet transmission.