Message-id: <4824D1E1.10007@sun.com>
Date: Fri, 09 May 2008 15:36:17 -0700
From: Matheos Worku <Matheos.Worku@....COM>
To: Jesper Krogh <jesper@...gh.cc>
Cc: David Miller <davem@...emloft.net>, yhlu.kernel@...il.com,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: NIU - Sun Neptune 10g - Transmit timed out reset (2.6.24)
Jesper Krogh wrote:
> Matheos Worku wrote:
>
>> Jesper Krogh wrote:
>>
>>> David Miller wrote:
>>>
>>>> From: Jesper Krogh <jesper@...gh.cc>
>>>> Date: Fri, 09 May 2008 20:32:53 +0200
>>>>
>>>>> When it works, I don't seem to be able to get it past 500 MB/s.
>>>>
>>>>
>>>> With this card you really need multiple cpus and multiple threads
>>>> sending data through the card in order to fill the 10Gb pipe.
>>>>
>>>> Single connections will not fill the pipe.
>>>
>>>
>>> The server is a Sun X4600 with 8 dual-core CPUs, set up with 64
>>> NFS threads. The other end of the fiber goes into a switch whose gigabit
>>> ports connect to 48 dual-core clients. The test was a dd of a
>>> 4.5GB file from the server to /dev/null on the clients.
>>
>>
>> Are you doing TX or RX (with respect to the 10G interface)?
>
>
> That's a transmit, from the NFS server to the clients.
I have observed TX throughput degradation (and increased CPU
utilization) as the number of connections grows, once the CPU count
exceeds 4. I don't think it is related to the driver (or HW). A while
ago I prototyped a driver which drops all UDP TX packets, and the
throughput degradation (and CPU utilization increase) still occurred
even though the driver was doing almost no work. LSO/TSO seems to
help with the situation, though. With LSO disabled, I have observed
the issue on several 10G NICs.
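
FWIW, the connection-count scaling is easy to exercise with a small
multi-stream sender. The sketch below is illustrative only, not the
tool I used; the destination address, port and stream count are
placeholders, and the idea is simply that each pthread drives its own
TCP connection so more than one CPU can feed the NIC at once:

/* Minimal multi-stream TCP sender sketch (illustrative only).
 * Opens NSTREAMS connections to a sink and writes continuously on
 * each. Build: gcc -O2 -pthread sender.c -o sender
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NSTREAMS 8              /* placeholder: one stream per core */
#define CHUNK    (256 * 1024)

static struct sockaddr_in dst;

static void *stream(void *arg)
{
	char *buf = calloc(1, CHUNK);
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (!buf || fd < 0)
		return NULL;
	if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
		perror("connect");
		return NULL;
	}
	for (;;)                /* saturate this connection until killed */
		if (write(fd, buf, CHUNK) < 0)
			break;
	close(fd);
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t tid[NSTREAMS];
	int i;

	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_port = htons(5001);     /* placeholder port */
	inet_pton(AF_INET, argc > 1 ? argv[1] : "192.0.2.1",
		  &dst.sin_addr);       /* placeholder sink address */

	for (i = 0; i < NSTREAMS; i++)
		pthread_create(&tid[i], NULL, stream, NULL);
	for (i = 0; i < NSTREAMS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}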
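
And for the LSO-on vs. LSO-off comparison, the knob behind
"ethtool -K ethX tso on|off" is the ETHTOOL_GTSO/ETHTOOL_STSO ioctl
pair. A minimal sketch, assuming a stock linux/ethtool.h; the
interface name is a placeholder and setting it needs CAP_NET_ADMIN:

/* Sketch: query and flip TSO on an interface via SIOCETHTOOL.
 * Build: gcc -O2 tso.c -o tso; run: ./tso eth0
 */
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *ifname = argc > 1 ? argv[1] : "eth0"; /* placeholder */
	struct ethtool_value ev = { .cmd = ETHTOOL_GTSO };
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return 1;
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (char *)&ev;

	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
		perror("ETHTOOL_GTSO");
		return 1;
	}
	printf("%s: tso %s\n", ifname, ev.data ? "on" : "off");

	/* flip it, to compare throughput with LSO on vs. off */
	ev.cmd = ETHTOOL_STSO;
	ev.data = !ev.data;
	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
		perror("ETHTOOL_STSO");
	close(fd);
	return 0;
}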
Regards
Matheos
>
> Jesper