Message-ID: <20251003173025.GA1161403@liuwe-devbox-debian-v2.local>
Date: Fri, 3 Oct 2025 17:30:25 +0000
From: Wei Liu <wei.liu@...nel.org>
To: Stanislav Kinsburskii <skinsburskii@...ux.microsoft.com>
Cc: Nuno Das Neves <nunodasneves@...ux.microsoft.com>,
	linux-hyperv@...r.kernel.org, linux-kernel@...r.kernel.org,
	prapal@...ux.microsoft.com, easwar.hariharan@...ux.microsoft.com,
	tiala@...rosoft.com, anirudh@...rudhrb.com,
	paekkaladevi@...ux.microsoft.com, kys@...rosoft.com,
	haiyangz@...rosoft.com, wei.liu@...nel.org, decui@...rosoft.com
Subject: Re: [PATCH v4 0/5] mshv: Fixes for stats and vp state page mappings

On Fri, Oct 03, 2025 at 09:31:41AM -0700, Stanislav Kinsburskii wrote:
> On Mon, Sep 29, 2025 at 11:19:51AM -0700, Nuno Das Neves wrote:
> > On 9/26/2025 4:12 PM, Stanislav Kinsburskii wrote:
> > > On Fri, Sep 26, 2025 at 09:23:10AM -0700, Nuno Das Neves wrote:
> > >> There are some differences in how L1VH partitions must map stats and vp
> > >> state pages, some of which are due to differences across hypervisor
> > >> versions. Detect and handle these cases.
> > >>
> > > 
> > > I'm not sure that support for older and actually broken versions of
> > > the hypervisor needs to be upstreamed, as these versions will go away
> > > sooner or later and this support will become dead weight.
> > > 
> > As far as I know, these changes are relevant for shipped versions of the
> > hypervisor - they are not 'broken' except in some very specific cases
> > (live migration on L1VH, I think?)
> > 
> 
> I'm not sure I understand what a "shipped version" of the hypervisor
> actually is.
> As of today, the hypervisor is closed source and the only product where
> it's used is Azure. In Azure, the older versions of the hypervisor are
> replaced with newer ones on a regular basis.
> 
> > The hypervisor team added a feature bit for these changes so that both old
> > and new versions of these APIs can be supported.
> > 
> > > I think we should upstream only the changes needed for the new versions
> > > of the hypervisor instead, and carry legacy support out of tree until
> > > it becomes obsolete.
> > > 
> > Which version do you suggest as the cutoff?
> > 
> > I'd prefer to support as many versions of the hypervisor as we can, as
> > long as they are at all relevant. We can remove the support later.
> > Removing prematurely just creates friction. Inevitably some users will
> > find themselves running on an older hypervisor and then it just fails
> > with a cryptic error. This includes myself, since I test L1VH on Azure
> > which typically has older hypervisor versions.
> > 
> 
> Given that these changes are expected to land in a newly released
> kernel, it will take time until this kernel gets into production. By
> that point it's highly likely that the older versions of the hypervisor
> you are trying to support here will be gone for good.

No. This is not 100% certain. The schedule is outside of our control.
Who knows whether there is a specialized Azure cluster that is not
updated for some reason.

Interested parties have already started experimenting with this new
setup. I just point them to the upstream tree whenever I can.

> Even if they aren't gone, they will be obsolete and intended to be
> replaced, which effectively makes this support for older versions dead
> weight - which, if it needs to be carried, is cleaner to keep in house
> and drop when appropriate than to keep in the upstream code base.
> 

While the maintenance burden argument generally applies, the burden in
this case is not big. It is totally fine to have the code.
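
For illustration only, the kind of gate a feature bit enables is roughly
the following. All names here are made up for the sake of the sketch; this
is not the code from the series:

	/* Made-up feature bit and helpers, for illustration only. */
	#include <stdbool.h>
	#include <stdint.h>

	#define HV_FEATURE_NEW_VP_STATE_MAP	(1ULL << 5)	/* made-up bit */

	struct mshv_vp;					/* opaque for the sketch */

	int map_vp_state_page_new(struct mshv_vp *vp);		/* stub */
	int map_vp_state_page_legacy(struct mshv_vp *vp);	/* stub */

	static bool new_vp_state_map_supported(uint64_t features)
	{
		return features & HV_FEATURE_NEW_VP_STATE_MAP;
	}

	static int map_vp_state_page(struct mshv_vp *vp, uint64_t features)
	{
		/*
		 * One check picks the path; the legacy branch is the only
		 * extra code that older hypervisors cost us.
		 */
		if (new_vp_state_map_supported(features))
			return map_vp_state_page_new(vp);

		return map_vp_state_page_legacy(vp);
	}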

Wei

> Thanks,
> Stas
> 
> > Nuno
> > 
> > > Thanks,
> > > Stanislav
> > > 
> > > 
