Message-ID: <4856AF3C.9010904@krogh.cc>
Date: Mon, 16 Jun 2008 20:21:48 +0200
From: Jesper Krogh <jesper@...gh.cc>
To: Matheos Worku <Matheos.Worku@....COM>
CC: David Miller <davem@...emloft.net>, yhlu.kernel@...il.com,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: NIU - Sun Neptune 10g - Transmit timed out reset (2.6.24)
Matheos Worku wrote:
> David Miller wrote:
>
>> From: Matheos Worku <Matheos.Worku@....COM>
>> Date: Thu, 29 May 2008 17:14:29 -0700
>>
>>
>>
>>> Actually what I am suggesting was a workaround for the lack of "TX
>>> Ring Empty" interrupt by not relying on the TX interrupt at all.
>>>
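If I understand the suggestion right, it is roughly this (a sketch with
made-up names, not the actual niu code): reclaim finished TX descriptors
from the xmit path itself, so forward progress never has to wait on a TX
completion interrupt.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical ring state; the real niu structures differ. */
struct my_tx_ring {
	struct sk_buff	**skbs;	/* one skb per descriptor slot */
	u32		cons;	/* software consumer index */
	u32		size;	/* ring entries, power of two */
};

/* Free everything the chip has completed, given the hardware
 * consumer index read from a (made-up) chip register.  Called from
 * ->hard_start_xmit before queueing each new skb, and again from a
 * slack timer, so no TX interrupt is ever required. */
static void my_tx_reclaim(struct my_tx_ring *ring, u32 hw_cons)
{
	while (ring->cons != hw_cons) {
		dev_kfree_skb_any(ring->skbs[ring->cons]);
		ring->skbs[ring->cons] = NULL;
		ring->cons = (ring->cons + 1) & (ring->size - 1);
	}
}
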
>>
>> Ahh I see.
>>
>> Some of the things I talked about in my presentation here in
>> Berlin at LinuxTAG yesterday can help mitigate the effects.
>> Most of it revolves around batching, and allowing the driver
>> to manage the backlog of packets directly when the TX queue
>> fills up.
>>
>> In such a case we could batch the TX queue refill, know how many more
>> TX packets we will queue up to the chip right now, and therefore know
>> that we can safely set periodic MARK bits and only need to force set
>> the MARK bit at the very end.
>>
>>
>>
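If I read that right, in code the batching would look roughly like this
(made-up helper names; only the MARK-every-N-packets-plus-force-at-the-end
logic is the point):

#include <linux/types.h>

#define MARK_INTERVAL	16	/* made-up: request an irq every 16 pkts */

/* Assumes the my_tx_ring sketched earlier grows a since_mark counter. */
static void my_queue_tx_batch(struct my_tx_ring *ring,
			      struct sk_buff **skbs, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		/* Periodic MARK inside the batch, forced on the last
		 * descriptor so completion is always signalled. */
		bool mark = (++ring->since_mark >= MARK_INTERVAL) ||
			    (i == n - 1);

		my_fill_tx_desc(ring, skbs[i], mark);	/* made up */
		if (mark)
			ring->since_mark = 0;
	}
	my_kick_tx_producer(ring);	/* one doorbell for the batch */
}
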
>>> As for the TX hang, I will try to reproduce the problem and look at
>>> the registers for the clue.
>>>
> I have been trying but have not been able to reproduce the timeout. I
> am using NFS v3 with TCP. Are you using UDP by any chance?
I wouldn't say it is easy to reproduce either; I have never hit it
before pushing a few TB over the "wire". I've got proto=tcp in
/proc/mounts for the mountpoints, so I assume I am using TCP.
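(If anyone wants to double-check their own mounts programmatically, this
trivial userspace snippet prints the NFS lines from /proc/mounts, same
as grep nfs /proc/mounts:)

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[1024];
	FILE *f = fopen("/proc/mounts", "r");

	if (!f)
		return 1;
	/* Print mount entries whose fstype field is nfs and which
	 * carry an explicit proto= option. */
	while (fgets(line, sizeof(line), f))
		if (strstr(line, " nfs") && strstr(line, "proto="))
			fputs(line, stdout);
	fclose(f);
	return 0;
}
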
There is an Extreme Networks switch at the other end, and I haven't got
the hardware to test with a different card, so I cannot rule the switch
out either... but that would be strange.
Jesper