Message-ID: <20250106193248.GB18346@skinsburskii.>
Date: Mon, 6 Jan 2025 11:32:48 -0800
From: Stanislav Kinsburskii <skinsburskii@...ux.microsoft.com>
To: Roman Kisel <romank@...ux.microsoft.com>
Cc: hpa@...or.com, kys@...rosoft.com, bp@...en8.de,
dave.hansen@...ux.intel.com, decui@...rosoft.com,
eahariha@...ux.microsoft.com, haiyangz@...rosoft.com,
mingo@...hat.com, mhklinux@...look.com,
nunodasneves@...ux.microsoft.com, tglx@...utronix.de,
tiala@...rosoft.com, wei.liu@...nel.org,
linux-hyperv@...r.kernel.org, linux-kernel@...r.kernel.org,
x86@...nel.org, apais@...rosoft.com, benhill@...rosoft.com,
ssengar@...rosoft.com, sunilmut@...rosoft.com, vdso@...bites.dev
Subject: Re: [PATCH v5 3/5] hyperv: Enable the hypercall output page for the
VTL mode
On Mon, Jan 06, 2025 at 10:11:16AM -0800, Roman Kisel wrote:
>
>
> On 1/6/2025 9:11 AM, Stanislav Kinsburskii wrote:
> > On Fri, Jan 03, 2025 at 01:39:29PM -0800, Roman Kisel wrote:
> > >
>
> [...]
>
> > >
> >
> > The issue is that when you boot the same kernel in both VTL0 and VTL1+,
> > the pages will be allocated in any case (root or guest, VTL0 or VTL1+).
>
> I think we share the same belief: use common code as much as possible.
> Strategically, one day there will be a kernel able to boot
> (or at the very minimum share the Hyper-V code for) VTL0, VTL1 (LVBS)
> and VTL2 (OpenHCL). That is not today, though: VTL0 relies on ACPI/BIOS,
> VTL2 relies on DeviceTree, and VTL1 boot configuration comes off as
> a bit ad hoc from my read of https://github.com/heki-linux/lvbs-linux
> and from working with the LVBS folks on debugging that.
>
> Can that day of the grand VTL code unification be tomorrow, next
> week, next month, or maybe next year? Which option are you leaning
> towards?
>
> To me, it seems that's not even next month. Let us take a look
> at how much ink is being spent just to fix a garden-variety function.
> On the meta-level, that might mean some _fundamental work_ is needed to
> provide a _robust foundation_ to build upon, such as removing the if
> statements and #ifdef's we're debating to let the general case
> shine through.
>
> Tactically, IMO, a staged approach might give more velocity and
> coverage than fixing the world in this small patch set. I would
> not want to increase the potential "blast radius" of the change.
> As it stands, it is pretty well-contained.
>
> All told, it might be prudent to focus on the task at hand: fix the
> function in question to enable building on that, e.g. proposing v4
> of the ARM64 VTL mode patches and more of what we have in
> https://github.com/microsoft/OHCL-Linux-Kernel.
>
> Once we take that small step to fix the hyperv-next tree, someone could
> propose removing the conditions for allocating the output page -- or,
> perhaps, suggest an entirely new & vastly better solution to handling
> the hypercall output page. IMHO, that would enable adding features by
> relying on more generic code rather than further complicating the web
> of conditional statements and conditional compilation.
>
From my POV, a decision between a unified approach and interim solutions
in upstream should usually be resolved in favor of the former.
Given that there are different stakeholders in the VTL code integration,
I'd suggest we step back a bit and think about how to proceed with the
overall design.

In my opinion, although I understand why the Underhill project decided
to separate the VTL kernels at build time originally, it's time to
reconsider this approach and come up with a more generic design that
supports booting the same kernel in different VTLs.

The major reason for this is that the LVBS project relies on binary
compatibility of the kernels running in different VTLs. The simplest
way to provide such a guarantee, in both development and deployment,
is to run the same kernel in both VTLs. Without this ability, both
kernels will require careful crafting at build time, making kexec
servicing of such kernels in production complicated and error prone.
Thanks,
Stas