Message-ID: <20250723080532.53ecc4f1@kernel.org>
Date: Wed, 23 Jul 2025 08:05:32 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Jason Wang <jasowang@...hat.com>
Cc: Cindy Lu <lulu@...hat.com>, "K. Y. Srinivasan" <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>, Wei Liu <wei.liu@...nel.org>,
Dexuan Cui <decui@...rosoft.com>, Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet
<edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, Simon Horman
<horms@...nel.org>, Michael Kelley <mhklinux@...look.com>, Shradha Gupta
<shradhagupta@...ux.microsoft.com>, Kees Cook <kees@...nel.org>, Stanislav
Fomichev <sdf@...ichev.me>, Kuniyuki Iwashima <kuniyu@...gle.com>,
Alexander Lobakin <aleksander.lobakin@...el.com>, Guillaume Nault
<gnault@...hat.com>, Joe Damato <jdamato@...tly.com>, Ahmed Zaki
<ahmed.zaki@...el.com>, "open list:Hyper-V/Azure CORE AND DRIVERS"
<linux-hyperv@...r.kernel.org>, "open list:NETWORKING DRIVERS"
<netdev@...r.kernel.org>, open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH RESEND] netvsc: transfer lower device max tso size
On Wed, 23 Jul 2025 14:00:47 +0800 Jason Wang wrote:
> > > But this fixes a real problem, otherwise nested VM performance will be
> > > broken due to the GSO software segmentation.
> >
> > Perhaps a migration plan can be devised away from the netvsc model,
> > so we don't have to deal with nuggets of joy like:
> > https://lore.kernel.org/all/1752870014-28909-1-git-send-email-haiyangz@linux.microsoft.com/
>
> Btw, if I understand this correctly, this is for future development,
> so it's not a blocker for this patch?
Not a blocker; I'm just giving an example of the netvsc auto-bonding
weirdness being a source of tech debt and bugs. Commit d7501e076d859d
is another recent one off the top of my head. IIUC systemd-networkd is
broadly deployed now, so it'd be great if there were a migration plan
for moving this sort of VM auto-bonding to user space (using the
common bonding driver, rather than each hypervisor rolling its own).
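
FWIW, a minimal sketch (mine, not the actual patch under discussion) of
what transferring the lower device's TSO limit can look like in a
netvsc-style driver. The helper name netvsc_sync_tso_limits is
hypothetical; netif_set_tso_max_size() and netif_set_tso_max_segs()
are the in-tree setters for these caps:

/* Hypothetical sketch: when the VF (lower) netdev gets paired with the
 * synthetic device, mirror its TSO limits onto the synthetic netdev so
 * large TSO frames are not software-segmented (GSO) before they reach
 * the VF.
 */
static void netvsc_sync_tso_limits(struct net_device *ndev,
				   struct net_device *vf_netdev)
{
	/* netif_set_tso_max_*() also lowers gso_max_* when needed */
	netif_set_tso_max_size(ndev, vf_netdev->tso_max_size);
	netif_set_tso_max_segs(ndev, vf_netdev->tso_max_segs);
}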
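
And to make the user-space direction concrete, a rough
systemd-networkd sketch of the same active-backup pairing built on the
common bonding driver (file names, the Driver= match, and the values
are made-up examples, not a tested recipe):

# 25-vmbond.netdev (hypothetical)
[NetDev]
Name=vmbond0
Kind=bond

[Bond]
Mode=active-backup
MIIMonitorSec=0.1

# 25-vf.network (hypothetical) -- enslave the VF into the bond
[Match]
Driver=mlx5_core

[Network]
Bond=vmbond0
PrimarySlave=true

The synthetic NIC would get an equivalent .network file pointing at the
same bond, with the VF preferred as primary whenever it is present.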