Message-ID: <55A4D823.2090900@huawei.com>
Date: Tue, 14 Jul 2015 17:36:35 +0800
From: Xishi Qiu <qiuxishi@...wei.com>
To: zhuyj <zyjzyj2000@...il.com>
CC: Linux MM <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>,
guozhibin 00179312 <g00179312@...esmail.huawei.com.cn>,
<linux.nics@...el.com>,
"e1000-devel@...ts.sourceforge.net"
<e1000-devel@...ts.sourceforge.net>
Subject: Re: [E1000-devel] bad pages when up/down network cable
On 2015/7/14 17:24, Xishi Qiu wrote:
> On 2015/7/14 17:00, zhuyj wrote:
>
>> Do you use the default ixgbe driver? or the ixgbe driver is modified by you?
>>
>
> Yes, no modifications.
>
Sorry, it is modified by us...
The driver comes from Intel; here is the info:
root:~ # ethtool -i p2p2
driver: ixgbe
version: 3.9.16-NAPI
firmware-version: 0x18f10001
bus-info: 0000:04:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
Thanks,
Xishi Qiu
>
>> On Tue, Jul 14, 2015 at 4:31 PM, Xishi Qiu <qiuxishi@...wei.com <mailto:qiuxishi@...wei.com>> wrote:
>>
>> 1. The host is directly linked to the storage device through an Intel
>> ixgbe NIC; there is no switch or router between them.
>> 2. The NIC of the storage device suddenly becomes unusable and then
>> recovers after a short time; this happens frequently.
>> 3. The host printks a lot of messages like these:
>>
>> The kernel is SUSE 3.0.13 (using slab), and the following log shows
>> that the page still has PG_slab set when free_pages() is called. Has
>> anyone seen this problem?
>>
>> Jul 9 11:31:36 root kernel: [1042291.977565] BUG: Bad page state in process swapper pfn:00bf2
>> Jul 9 11:31:36 root kernel: [1042291.977568] page:ffffea0000029cf0 count:0 mapcount:0 mapping: (null) index:0x7f6d4f500
>> Jul 9 11:31:36 root kernel: [1042291.977571] page flags: 0x40000000000100(slab) // here is the reason
>> Jul 9 11:31:36 root kernel: [1042291.977574] Pid: 0, comm: swapper Tainted: G B X 3.0.13-0.27-default #1
>> Jul 9 11:31:36 root kernel: [1042291.977577] Call Trace:
>> Jul 9 11:31:36 root kernel: [1042291.977583] [<ffffffff810048b5>] dump_trace+0x75/0x300
>> Jul 9 11:31:36 root kernel: [1042291.977639] [<ffffffff8143ea0f>] dump_stack+0x69/0x6f
>> Jul 9 11:31:36 root kernel: [1042291.977644] [<ffffffff810f53a1>] bad_page+0xb1/0x120
>> Jul 9 11:31:37 root kernel: [1042291.977649] [<ffffffff810f5926>] free_pages_prepare+0xe6/0x110
>> Jul 9 11:31:37 root kernel: [1042291.977654] [<ffffffff810f9259>] free_hot_cold_page+0x49/0x1f0
>> Jul 9 11:31:37 root kernel: [1042291.977660] [<ffffffff8137a3b4>] skb_release_data+0xb4/0xe0
>> Jul 9 11:31:37 root kernel: [1042291.977665] [<ffffffff81379e79>] __kfree_skb+0x9/0x90
>> Jul 9 11:31:37 root kernel: [1042291.977676] [<ffffffffa02784a9>] ixgbe_clean_tx_irq+0xa9/0x480 [ixgbe]
>> Jul 9 11:31:37 root kernel: [1042291.977693] [<ffffffffa02788cb>] ixgbe_poll+0x4b/0x1a0 [ixgbe]
>> Jul 9 11:31:37 root kernel: [1042291.977705] [<ffffffff81389c3a>] net_rx_action+0x10a/0x2c0
>> Jul 9 11:31:37 root kernel: [1042291.977711] [<ffffffff81060a1f>] __do_softirq+0xef/0x220
>> Jul 9 11:31:37 root kernel: [1042291.977716] [<ffffffff8144a8bc>] call_softirq+0x1c/0x30
>> Jul 9 11:31:37 root kernel: [1042291.978974] DWARF2 unwinder stuck at call_softirq+0x1c/0x30
>>
>> Thanks,
>> Xishi Qiu
>>
>>
>> ------------------------------------------------------------------------------
>> Don't Limit Your Business. Reach for the Cloud.
>> GigeNET's Cloud Solutions provide you with the tools and support that
>> you need to offload your IT needs and focus on growing your business.
>> Configured For All Businesses. Start Your Cloud Today.
>> https://www.gigenetcloud.com/
>> _______________________________________________
>> E1000-devel mailing list
>> E1000-devel@...ts.sourceforge.net <mailto:E1000-devel@...ts.sourceforge.net>
>> https://lists.sourceforge.net/lists/listinfo/e1000-devel
>> To learn more about Intel® Ethernet, visit http://communities.intel.com/community/wired
>>
>>
>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/