Message-ID: <4BB3AB28.1050106@itcare.pl>
Date: Wed, 31 Mar 2010 22:06:00 +0200
From: Paweł Staszewski <pstaszewski@...are.pl>
To: "Tantilov, Emil S" <emil.s.tantilov@...el.com>
CC: "Allan, Bruce W" <bruce.w.allan@...el.com>,
Linux Network Development list <netdev@...r.kernel.org>,
"e1000-devel@...ts.sourceforge.net"
<e1000-devel@...ts.sourceforge.net>
Subject: Re: eth1: Detected Hardware Unit Hang
On 2010-03-31 21:59, Tantilov, Emil S wrote:
> Paweł Staszewski wrote:
>
>> On 2010-03-31 20:03, Tantilov, Emil S wrote:
>>
>>> Pawel Staszewski wrote:
>>>
>>>
>>>> Hello
>>>>
>>>> I reproduced this problem on another machine with the same hardware;
>>>> here is the dmesg output (kernel 2.6.33):
>>>>
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769395] 0000:04:00.0: eth0: Detected Hardware Unit Hang:
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769396]   TDH <2e>
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769397]   TDT <1a>
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769397]   next_to_use <1a>
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769398]   next_to_clean <2d>
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769398] buffer_info[next_to_clean]:
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769399]   time_stamp <11b1591e9>
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769399]   next_to_watch <2f>
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769400]   jiffies <11b1592e4>
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769401]   next_to_watch.status <0>
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769401] MAC Status <80080783>
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769402] PHY Status <796d>
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769402] PHY 1000BASE-T Status <3800>
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769403] PHY Extended Status <3000>
>>>> Mar 27 18:19:16 TM_01_C1 [1817894.769404] PCI Status <10>
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773365] 0000:04:00.0: eth0: Detected Hardware Unit Hang:
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773367]   TDH <2e>
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773368]   TDT <1a>
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773368]   next_to_use <1a>
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773369]   next_to_clean <2d>
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773369] buffer_info[next_to_clean]:
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773370]   time_stamp <11b1591e9>
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773370]   next_to_watch <2f>
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773371]   jiffies <11b1594d8>
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773372]   next_to_watch.status <0>
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773372] MAC Status <80080783>
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773373] PHY Status <796d>
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773373] PHY 1000BASE-T Status <3800>
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773374] PHY Extended Status <3000>
>>>> Mar 27 18:19:18 TM_01_C1 [1817896.773375] PCI Status <10>
>>>> Mar 27 18:19:20 TM_01_C1 [1817898.769353] 0000:04:00.0: eth0: Detected
>>>>
>>>>
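(A quick shell check on the values above shows how long the descriptor had
been stuck. This is an illustration, not part of the original report: HZ=250
is inferred from the two dumps lying 2 s and 500 jiffies apart, and the
identical time_stamp in both dumps means the very same descriptor never
completed:)

    # jiffies minus buffer_info[next_to_clean].time_stamp, first report
    printf '%d\n' $((0x11b1592e4 - 0x11b1591e9))   # 251 jiffies, ~1 s at HZ=250
    # same calculation at the second report, two seconds later
    printf '%d\n' $((0x11b1594d8 - 0x11b1591e9))   # 751 jiffies, ~3 s
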
>>> <snip>
>>>
>>> I have a similar system (not the same model) in the lab with an
>>> 82573E/L on board, but I was not able to reproduce the Tx hangs you
>>> reported. So at this point we need to start digging into the
>>> details. Could you please file a bug at e1000.sf.net? Include the
>>> information you have provided so far, and also:
>>> 1. output from ethtool -e
>>> 2. ethtool -d
>>> 3. cat /proc/interrupts
>>> 4. full dmesg output from boot to the point where Tx hangs occurred.
>>> 5. kernel config file
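(The five items above can be gathered roughly as follows; this is only a
sketch, assuming the hanging interface is eth0 and using arbitrary output
file names:)

    ethtool -e eth0 > eth0-eeprom.txt       # 1. EEPROM dump
    ethtool -d eth0 > eth0-regs.txt         # 2. register dump
    cat /proc/interrupts > interrupts.txt   # 3. interrupt counts
    dmesg > dmesg.txt                       # 4. kernel log incl. the hang messages
    zcat /proc/config.gz > config.txt       # 5. kernel config, if CONFIG_IKCONFIG_PROC=y
    # otherwise: cp /boot/config-$(uname -r) config.txt
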
>>>
>>> Looking at the description of your system (Supermicro X7DCT) I see
>>> this board has an IPMI option. Do you have IPMI in your system?
>>>
>>>
>>>
>> Additional information is in the attached files.
>> Yes, the board has an IPMI option, but I don't have it.
>>
>> And yes, I will file a bug report at e1000.sf.net.
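(Whether a BMC is actually present can be double-checked from the running
system; a sketch, assuming dmidecode is available:)

    dmidecode --type 38        # SMBIOS "IPMI Device Information" records, if any
    lsmod | grep ipmi          # IPMI kernel modules, if loaded
    ls /dev/ipmi* 2>/dev/null  # device nodes appear only when a BMC was detected
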
>>
> Thanks, Pawel, for the quick reply. I see that you have disabled flow control. Is that on purpose? What is your link partner?
>
>
Yes, I always disable flow control. The link partner is a 3Com switch.
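For example (a minimal sketch; eth0 stands in for the actual interface):

    ethtool -a eth0                             # query current pause-frame settings
    ethtool -A eth0 autoneg off rx off tx off   # disable flow control entirely
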
> Thanks,
> Emil
>
>
>
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html