Message-ID: <47A1B294.8080609@aei.mpg.de>
Date: Thu, 31 Jan 2008 12:35:48 +0100
From: Carsten Aulbert <carsten.aulbert@....mpg.de>
To: Rick Jones <rick.jones2@...com>
CC: "Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
Bruce Allen <ballen@...vity.phys.uwm.edu>,
netdev@...r.kernel.org,
Henning Fehrmann <henning.fehrmann@....mpg.de>,
Bruce Allen <bruce.allen@....mpg.de>
Subject: Re: e1000 full-duplex TCP performance well below wire speed
Good morning (my TZ),
I'll try to answer all questions; however, if I miss something big,
please point my nose to it again.
Rick Jones wrote:
>> As asked in LKML thread, please post the exact netperf command used to
>> start the client/server, whether or not you're using irqbalanced (aka
>> irqbalance) and what cat /proc/interrupts looks like (you ARE using MSI,
>> right?)
>
netperf was used without any special tuning parameters. Usually we start
two processes on two hosts (almost) simultaneously, let them run for
20-60 seconds and simply use UDP_STREAM (works well) and TCP_STREAM, i.e.
on 192.168.0.202: netperf -H 192.168.2.203 -t TCP_STREAM -l 20
on 192.168.0.203: netperf -H 192.168.2.202 -t TCP_STREAM -l 20
192.168.0.20[23] here is on eth0, which cannot do jumbo frames, so we
use the 192.168.2.x addresses on eth1 for a range of MTUs.
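To make the procedure explicit, here is a rough sketch of how such a run
could be driven from a third machine (a hypothetical wrapper, not our
actual script; the addresses are just the ones above):
  # start both directions at (almost) the same time and wait for the results
  ssh 192.168.0.202 'netperf -H 192.168.2.203 -t TCP_STREAM -l 20' &
  ssh 192.168.0.203 'netperf -H 192.168.2.202 -t TCP_STREAM -l 20' &
  wait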
The netperf server is started on both nodes via start-stop-daemon, with no
special parameters that I'm aware of.
/proc/interrupts shows PCI-MSI-edge for these interfaces, thus I think: yes.
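For reference, the check was simply something along these lines (assuming
eth1 is the interface under test):
  grep eth1 /proc/interrupts   # line shows PCI-MSI-edge rather than IO-APIC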
> In particular, it would be good to know if you are doing two concurrent
> streams, or if you are using the "burst mode" TCP_RR with large
> request/response sizes method which then is only using one connection.
>
As outlined above: two concurrent streams right now. If you think TCP_RR
would be better, I'm happy to rerun some tests.
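If I understand the burst-mode approach correctly, that would be something
along the lines of the following, run on one host only (assuming netperf
was built with --enable-burst; the request/response sizes and burst count
below are just placeholders):
  # single connection, large request/response sizes, several transactions in flight
  netperf -H 192.168.2.203 -t TCP_RR -l 20 -- -r 65536,65536 -b 16
Please correct me if I got the options wrong.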
More in other emails.
I'll wade through them slowly.
Carsten