Message-ID: <ZBio2Cs7UrkkilTc@google.com>
Date: Mon, 20 Mar 2023 11:41:28 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Ben Gardon <bgardon@...gle.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
Peter Xu <peterx@...hat.com>,
David Matlack <dmatlack@...gle.com>,
Vipin Sharma <vipinsh@...gle.com>,
Ricardo Koller <ricarkol@...gle.com>
Subject: Re: [PATCH 09/21] KVM: x86/MMU: Move paging_tmpl.h includes to shadow_mmu.c
First off, I apologize for not giving this feedback in the RFC. I didn't think
too hard about the implications of moving paging_tmpl.h until I actually looked
at the code.
On Thu, Feb 02, 2023, Ben Gardon wrote:
> Move the integration point for paging_tmpl.h to shadow_mmu.c since
> paging_tmpl.h is ostensibly part of the Shadow MMU.
Ostensibly indeed. While a simple majority of paging_tmpl.h is unique to the
shadow MMU, all of the guest walker code needs to exist independent of the
shadow MMU. And that code is significant, both in terms of lines of code and,
more importantly, in terms of understanding its role in KVM at large.
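
For reference, this is the pattern in play (from arch/x86/kvm/mmu/mmu.c,
abridged from memory, not a diff against this series): paging_tmpl.h is
stamped out once per guest paging mode, and every instantiation carries both
the guest walker and the shadow MMU page fault paths.

	/*
	 * Each #include generates FNAME(walk_addr), FNAME(page_fault), etc.
	 * for one guest paging mode.
	 */
	#define PTTYPE_EPT 18 /* arbitrary */
	#define PTTYPE PTTYPE_EPT
	#include "paging_tmpl.h"
	#undef PTTYPE

	#define PTTYPE 64
	#include "paging_tmpl.h"
	#undef PTTYPE

	#define PTTYPE 32
	#include "paging_tmpl.h"
	#undef PTTYPE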
This is essentially the same mess that eventually led to the cpu_role vs. root_role
cleanup, and I think we should figure out a way to give paging_tmpl.h similar
treatment. E.g. split paging_tmpl.h itself in some way.
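
Total strawman, with file names invented purely to illustrate the shape of
such a split: hoist the walker into its own template that common code can
instantiate independently, and keep the shadow-only FNAME() code in a template
that's private to the shadow MMU.

	/*
	 * Strawman: guest_walker_tmpl.h would provide FNAME(walk_addr) and
	 * friends, i.e. everything needed to walk guest page tables, with
	 * no shadow MMU dependencies.
	 */
	#define PTTYPE 64
	#include "guest_walker_tmpl.h"
	#undef PTTYPE

	/*
	 * shadow_paging_tmpl.h would keep the shadow-only pieces, e.g.
	 * FNAME(page_fault), FNAME(fetch), FNAME(sync_page), and would be
	 * included only from shadow MMU code.
	 */
	#define PTTYPE 64
	#include "shadow_paging_tmpl.h"
	#undef PTTYPE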
Unfortunately, this is a sticking point for me. If the code movement were minor
and/or cleaner in nature (definitely not your fault, simply the reality of the
code base), I might feel differently. But as it stands, there is a lot of churn
to get to an endpoint that has significant flaws.
So while I love the idea of separating the MMU implementations from the common
MMU logic, I don't want to take this series as is. The guest walker stuff is a
lynchpin of sorts, e.g. splitting out the guest walker logic could go
hand-in-hand with reworking guest_mmu.
Sadly, as much as I'm itching to dive in and do a bit of exploration, I am woefully
short on bandwidth right now, so all I can do is say no. Sorry :-(