Message-ID: <50da66d0-fe66-0563-4d34-7bd2e25695a4@intel.com>
Date: Mon, 14 Apr 2025 16:04:51 +0300
From: "Lifshits, Vitaly" <vitaly.lifshits@...el.com>
To: Marek Marczykowski-Górecki
<marmarek@...isiblethingslab.com>
CC: Jesse Brandeburg <jesse.brandeburg@...el.com>, Tony Nguyen
<anthony.l.nguyen@...el.com>, <netdev@...r.kernel.org>,
<intel-wired-lan@...ts.osuosl.org>, <regressions@...ts.linux.dev>,
<stable@...r.kernel.org>, Sasha Levin <sashal@...nel.org>
Subject: Re: [REGRESSION] e1000e heavy packet loss on Meteor Lake - 6.14.2
On 4/14/2025 3:58 PM, Marek Marczykowski-Górecki wrote:
> On Mon, Apr 14, 2025 at 03:38:39PM +0300, Lifshits, Vitaly wrote:
>> Do you see the high packet loss without the virtualization?
>
> I can't check that easily right now, will try later.
>
>> Can you please share the lspci output?
>
> Sure:
>
> 00:07.0 Ethernet controller [0200]: Intel Corporation Device [8086:550a] (rev 20)
> Subsystem: CLEVO/KAPOK Computer Device [1558:a743]
> Physical Slot: 7
> Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
> Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
> Latency: 64
> Interrupt: pin D routed to IRQ 69
> Region 0: Memory at f2000000 (32-bit, non-prefetchable) [size=128K]
> Capabilities: [c8] Power Management version 3
> Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
> Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
> Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
> Address: 00000000fee00000 Data: 0000
> Kernel driver in use: e1000e
> Kernel modules: e1000e
>
Do you have the mei modules loaded? Can you check whether disabling them
makes things better?
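A sketch of one way to check for and unload the MEI modules (module names can vary by platform, and unloading requires root):

```shell
# Count loaded MEI (Intel Management Engine Interface) modules.
# grep -c exits non-zero on no match, so keep the pipeline from failing.
mei_loaded=$(lsmod 2>/dev/null | grep -c '^mei' || true)

if [ "$mei_loaded" -gt 0 ]; then
    echo "MEI modules loaded: $mei_loaded"
    # mei_me depends on mei, so list it first; needs root.
    modprobe -r mei_me mei 2>/dev/null || echo "unload failed (need root, or module in use)"
else
    echo "no MEI modules loaded"
fi
```

If unloading helps, blacklisting the modules (e.g. via modprobe.d) would make the workaround persistent across reboots.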
>
>
>> Does your switch/link partner support flow control? If it is configurable,
>> can you try to enable it?
>
> It does support it. Enabling it makes things much worse...
>
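For reference, pause-frame (flow control) settings can be inspected and toggled with ethtool; the interface name ens7 below is taken from this thread, and changing the settings requires root:

```shell
# Interface under test (from the thread; override via environment if needed).
IFACE=${IFACE:-ens7}

if command -v ethtool >/dev/null 2>&1; then
    # Show the current pause parameters.
    ethtool -a "$IFACE" 2>/dev/null || echo "no such interface: $IFACE"
    # Enable symmetric flow control (rx and tx pause); root required.
    ethtool -A "$IFACE" rx on tx on 2>/dev/null || echo "could not change pause settings"
else
    echo "ethtool not installed"
fi
```

Note that flow control only takes effect if autonegotiation resolves it with the link partner, so the switch side may need the same change.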
>> Do you see any errors in dmesg related to the e1000e driver?
>
> Not really.
> dmesg | grep 'e1000e\|ens7':
>
> [ 3.088489] e1000e: Intel(R) PRO/1000 Network Driver
> [ 3.088512] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
> [ 3.093256] e1000e 0000:00:07.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
> [ 3.343378] e1000e 0000:00:07.0 0000:00:07.0 (uninitialized): registered PHC clock
> [ 3.718946] e1000e 0000:00:07.0 eth0: (PCI Express:2.5GT/s:Width x1) d4:93:90:3e:0d:bb
> [ 3.718966] e1000e 0000:00:07.0 eth0: Intel(R) PRO/1000 Network Connection
> [ 3.719101] e1000e 0000:00:07.0 eth0: MAC: 16, PHY: 12, PBA No: FFFFFF-0FF
> [ 3.759444] e1000e 0000:00:07.0 ens7: renamed from eth0
> [ 8.632317] e1000e 0000:00:07.0 ens7: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
> [ 239.458205] e1000e 0000:00:07.0 ens7: NIC Link is Down
> [ 242.485869] e1000e 0000:00:07.0 ens7: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
>
> (you can also see a test with flow control above)
>
>
> And also ethtool output if useful:
> Settings for ens7:
> Supported ports: [ TP ]
> Supported link modes: 10baseT/Half 10baseT/Full
> 100baseT/Half 100baseT/Full
> 1000baseT/Full
> Supported pause frame use: Symmetric Receive-only
> Supports auto-negotiation: Yes
> Supported FEC modes: Not reported
> Advertised link modes: 10baseT/Half 10baseT/Full
> 100baseT/Half 100baseT/Full
> 1000baseT/Full
> Advertised pause frame use: Symmetric Receive-only
> Advertised auto-negotiation: Yes
> Advertised FEC modes: Not reported
> Link partner advertised link modes: 10baseT/Half 10baseT/Full
> 100baseT/Half 100baseT/Full
> 1000baseT/Full
> Link partner advertised pause frame use: No
> Link partner advertised auto-negotiation: Yes
> Link partner advertised FEC modes: Not reported
> Speed: 1000Mb/s
> Duplex: Full
> Auto-negotiation: on
> Port: Twisted Pair
> PHYAD: 1
> Transceiver: internal
> MDI-X: on (auto)
> Supports Wake-on: d
> Wake-on: d
> Current message level: 0x00000007 (7)
> drv probe link
> Link detected: yes
>
>