Message-ID: <b1f5e997-033c-33ed-5e3b-6fe2632bf718@intel.com>
Date: Mon, 14 Apr 2025 15:38:39 +0300
From: "Lifshits, Vitaly" <vitaly.lifshits@...el.com>
To: Marek Marczykowski-Górecki
	<marmarek@...isiblethingslab.com>, Jesse Brandeburg
	<jesse.brandeburg@...el.com>, Tony Nguyen <anthony.l.nguyen@...el.com>,
	<netdev@...r.kernel.org>, <intel-wired-lan@...ts.osuosl.org>
CC: <regressions@...ts.linux.dev>, <stable@...r.kernel.org>, Sasha Levin
	<sashal@...nel.org>
Subject: Re: [REGRESSION] e1000e heavy packet loss on Meteor Lake - 6.14.2



On 4/14/2025 3:18 PM, Marek Marczykowski-Górecki wrote:
> Hi,
> 
> After updating to 6.14.2, the ethernet adapter is almost unusable, I get
> over 30% packet loss.
> Bisect says it's this commit:
> 
>      commit 85f6414167da39e0da30bf370f1ecda5a58c6f7b
>      Author: Vitaly Lifshits <vitaly.lifshits@...el.com>
>      Date:   Thu Mar 13 16:05:56 2025 +0200
> 
>          e1000e: change k1 configuration on MTP and later platforms
> 
> My system is a Novacustom V540TU laptop with an Intel Core Ultra 5 125H,
> and the e1000e driver is running in a Xen HVM (with PCI passthrough).
> Interestingly, I also have another one with an Intel Core Ultra 7 155H
> where the issue does not happen. I don't see what is different about the
> network adapter there; they look identical in lspci (but there are
> differences in other devices)...
> 
> I see the commit above was already backported to other stable branches
> too...
> 
> #regzbot introduced: 85f6414167da39e0da30bf370f1ecda5a58c6f7b
> 

Thank you for this report.

Do you see the high packet loss without virtualization?
Can you please share the lspci output?
Does your switch/link partner support flow control? If it is
configurable, can you try to enable it?
Do you see any errors in dmesg related to the e1000e driver?
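For reference, the information above can be gathered roughly like this (a sketch; the interface name enp0s31f6 is an assumption, adjust it to match your system):

```shell
# Show the current flow-control (pause frame) settings for the interface
ethtool -a enp0s31f6

# Try enabling RX/TX flow control, if the link partner supports it
sudo ethtool -A enp0s31f6 rx on tx on

# Identify the Ethernet controller and the kernel driver bound to it
lspci -nnk | grep -iA3 ethernet

# Check the kernel log for e1000e errors or warnings
dmesg | grep -i e1000e
```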
