Message-id: <483CB301.40007@sun.com>
Date: Tue, 27 May 2008 18:18:57 -0700
From: Matheos Worku <Matheos.Worku@....COM>
To: Jesper Krogh <jesper@...gh.cc>
Cc: David Miller <davem@...emloft.net>, yhlu.kernel@...il.com,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: NIU - Sun Neptune 10g - Transmit timed out reset (2.6.24)
Jesper Krogh wrote:
> David Miller wrote:
>
>> From: Jesper Krogh <jesper@...gh.cc>
>> Date: Mon, 26 May 2008 22:54:53 +0200
>>
>>> Applied and running.. I've now pushed 400GB of data through it
>>> trying to
>>> get it to hit the bug but it is still running.
>>>
>>> So without saying that it solved the problem, it definitely seems so.
>>> 2.6.26-rc4 + above patch.
>>
>>
>> Thanks for testing.
>
>
> Ok, I spoke too soon.. it ended up in the same situation again.
>
> May 27 08:09:12 hest kernel: [42953871.982072] NETDEV WATCHDOG: eth4:
> transmit timed out
> May 27 08:09:17 hest kernel: [42953877.827797] NETDEV WATCHDOG: eth4:
> transmit timed out
> May 27 08:09:22 hest kernel: [42953883.958375] NETDEV WATCHDOG: eth4:
> transmit timed out
> May 27 08:09:27 hest kernel: [42953890.668401] NETDEV WATCHDOG: eth4:
> transmit timed out
>
>
> Jesper
Dave,
Given that fixing the HW would take considerable time, I was wondering
whether the scheme we use in the nxge driver could serve as a workaround
here. Since the niu driver is already doing skb_orphan() as a workaround,
what if already-transmitted TX buffers were reclaimed periodically, from
within dev->hard_start_xmit()? TX_DESC_MARK would then be set if/when the
available TX descriptor count falls below some watermark; the device TX
queue would be disabled around the time TX_DESC_MARK is set and re-enabled
from the TX interrupt.
Regards
Matheos