Message-ID: <1307725531.17300.58.camel@schen9-DESK>
Date: Fri, 10 Jun 2011 10:05:31 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Miller <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>, Andi Kleen <andi@...stfloor.org>
Subject: Re: [PATCH net-next-2.6] inetpeer: lower false sharing effect
On Fri, 2011-06-10 at 06:31 +0200, Eric Dumazet wrote:
>
> Thanks Tim
>
> I have some questions for further optimizations.
>
> 1) How many different destinations are used in your stress load?
> 2) Could you provide a distribution of the packet lengths?
> Or maybe the average length would be OK
>
>
>
Actually I have one load generator and one server connected to each
other via a 10Gb link.
The server is a 40-core, 4-socket Westmere-EX machine and the load
generator is a 12-core, 2-socket Westmere-EP machine.
There are 40 memcached daemons on the server, each bound to a CPU core
and listening on a distinct UDP port. The load generator has 40
threads, each thread sending memcached requests to a particular UDP
port.
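To make that concrete, each daemon ends up doing roughly the
equivalent of the sketch below (pinning itself to one core and binding
its own UDP port); in practice this is what running memcached under
taskset with a per-instance -U port arranges. The port base and the
core numbering here are only illustrative:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define PORT_BASE 11211			/* illustrative base UDP port */

/* Pin the calling daemon to one core and bind its own UDP port. */
static int bind_worker(int core)
{
	struct sockaddr_in addr;
	cpu_set_t set;
	int fd;

	CPU_ZERO(&set);
	CPU_SET(core, &set);
	if (sched_setaffinity(0, sizeof(set), &set) < 0) {
		perror("sched_setaffinity");
		return -1;
	}

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0) {
		perror("socket");
		return -1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(PORT_BASE + core);	/* distinct port per core */
	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		close(fd);
		return -1;
	}
	return fd;
}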
The load generator's memcached request packet has a UDP payload of 25
bytes. The response packet from the daemon has a UDP payload of 13
bytes.
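Each load generator thread essentially loops doing the following
(again only a sketch: the request contents and the pacing are made up,
only the 25-byte request / 13-byte reply payload sizes are the ones
measured above):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define REQ_LEN		25	/* UDP payload of a request (measured) */
#define RESP_LEN	13	/* UDP payload of a reply (measured) */

/* One client thread hammering its assigned memcached UDP port. */
static void run_client(int fd, const struct sockaddr_in *srv)
{
	char req[REQ_LEN] = { 0 };	/* contents illustrative only */
	char resp[64];

	for (;;) {
		sendto(fd, req, REQ_LEN, 0,
		       (const struct sockaddr *)srv, sizeof(*srv));
		/* reply carries RESP_LEN bytes of UDP payload */
		recv(fd, resp, sizeof(resp), 0);
	}
}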
The UDP packets on the load generator and server are distributed across
16 Tx-Rx queues by hashing on the UDP ports (with a slight modification
of the hash flags in ixgbe).
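For reference, the effect of that hash-flag tweak is what the
ETHTOOL_SRXFH rxnfc operation is meant to expose from user space,
i.e. the equivalent of "ethtool -N <if> rx-flow-hash udp4 sdfn". A
sketch of setting it that way is below; the interface name is made up,
and whether ixgbe honors it from ethtool depends on the driver, which
is presumably why the flags were changed in the driver here:

#include <string.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/* Ask the driver to include the UDP ports in its RSS hash. */
static int hash_on_udp_ports(const char *ifname)
{
	struct ethtool_rxnfc nfc;
	struct ifreq ifr;
	int fd, ret;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -1;

	memset(&nfc, 0, sizeof(nfc));
	nfc.cmd = ETHTOOL_SRXFH;
	nfc.flow_type = UDP_V4_FLOW;
	/* hash on src/dst IP plus both 16-bit halves of the L4 ports */
	nfc.data = RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 | RXH_L4_B_2_3;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (void *)&nfc;

	ret = ioctl(fd, SIOCETHTOOL, &ifr);
	close(fd);
	return ret;
}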
Thanks.
Tim