Message-id: <4824D39B.6040006@sun.com>
Date:	Fri, 09 May 2008 15:43:39 -0700
From:	Matheos Worku <Matheos.Worku@....COM>
To:	Matheos Worku <Matheos.Worku@....COM>
Cc:	Jesper Krogh <jesper@...gh.cc>, David Miller <davem@...emloft.net>,
	yhlu.kernel@...il.com, linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org
Subject: Re: NIU - Sun Neptune 10g - Transmit timed out reset (2.6.24)

Matheos Worku wrote:

> Jesper Krogh wrote:
>
>> Matheos Worku wrote:
>>
>>> Jesper Krogh wrote:
>>>
>>>> David Miller wrote:
>>>>
>>>>> From: Jesper Krogh <jesper@...gh.cc>
>>>>> Date: Fri, 09 May 2008 20:32:53 +0200
>>>>>
>>>>>> When it works, I don't seem to be able to get it past 500 MB/s.
>>>>>
>>>>> With this card you really need multiple cpus and multiple threads
>>>>> sending data through the card in order to fill the 10Gb pipe.
>>>>>
>>>>> Single connections will not fill the pipe.
>>>>
>>>> The server is a Sun X4600 with 8 dual-core CPUs, set up with 64
>>>> NFS threads. The other end of the fiber goes into a switch with
>>>> gigabit ports connected to 48 dual-core CPUs. The test was a dd of
>>>> a 4.5GB file from the server to /dev/null on the clients.
>>>
>>> Are you doing a TX or RX (with respect to the 10G interface)?
>>
>> That's a transmit: from the NFS server to the clients.
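As an aside, David's point above (that a single connection will not fill the pipe) is easy to verify by comparing single- and multi-stream runs with iperf; the hostname below is a placeholder, and this assumes iperf is installed on both ends:

```shell
# On one of the receiving clients, start an iperf server:
iperf -s

# On the X4600, open 8 parallel TCP streams for 30 seconds and report
# the aggregate throughput; compare against a single-stream run (-P 1):
iperf -c client.example.com -P 8 -t 30
```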
>
Is MSI/MSI-X enabled in the kernel? I have noticed that it was not
enabled on Gutsy-SPARC.
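One way to check whether the device actually got MSI/MSI-X vectors (the grep patterns below are illustrative; adjust for the actual interface and driver name):

```shell
# MSI/MSI-X interrupts show up as PCI-MSI entries in /proc/interrupts;
# a legacy line-based interrupt for the NIC suggests MSI is not in use:
grep -iE 'msi|niu' /proc/interrupts

# lspci also reports the per-capability enable bit (Enable+ vs Enable-):
lspci -vv | grep -iA1 'msi'
```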

--Matheos

>
> I have observed TX throughput degradation (and increased CPU
> utilization) as the number of connections grows, once the CPU count
> exceeds 4. I don't think it is related to the driver (or HW): a while
> ago I prototyped a driver that drops all UDP TX packets, and the
> throughput degradation (and CPU utilization increase) still occurred
> even though the driver was doing very little work. LSO/TSO seems to
> help with the situation, though. With LSO disabled, I have observed
> the issue on several 10G NICs.
>
> Regards
> Matheos
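Whether LSO/TSO is actually in effect can be checked and toggled with ethtool, which makes it easy to compare CPU utilization per connection count with and without offload (the interface name eth0 is a placeholder):

```shell
# Show current offload settings; look at "tcp segmentation offload":
ethtool -k eth0

# Toggle TSO off, rerun the throughput test, then turn it back on:
ethtool -K eth0 tso off
ethtool -K eth0 tso on
```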
>
>
>>
>> Jesper
>
>
>
> -- 
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/


