Message-ID: <Y3Jt11kcj8lQ+NCN@google.com>
Date: Mon, 14 Nov 2022 16:33:27 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: "Woodhouse, David" <dwmw@...zon.co.uk>
Cc: "pbonzini@...hat.com" <pbonzini@...hat.com>,
"mhal@...x.co" <mhal@...x.co>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Durrant, Paul" <pdurrant@...zon.co.uk>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Kaya, Metin" <metikaya@...zon.co.uk>
Subject: Re: [PATCH 03/16] KVM: x86: set gfn-to-pfn cache length consistently with VM word size
On Mon, Nov 14, 2022, Woodhouse, David wrote:
> Most other data structures, including the pvclock info (both Xen and
> native KVM), could potentially cross page boundaries. And isn't that
> also true for things that we'd want to use the GPC for in nesting?
Off the top of my head, no. Except for MSR and I/O permission bitmaps, which
are >4KiB, things that are referenced by physical address are <=4KiB and must be
naturally aligned. nVMX does temporarily map L1's MSR bitmap, but that could be
split into two separate mappings if necessary.
> For the runstate info I suggested reverting commit a795cd43c5b5 but
> that doesn't actually work because it still has the same problem. Even
> the gfn-to-hva cache still only really works for a single page, and
> things like kvm_write_guest_offset_cached() will fall back to using
> kvm_write_guest() in the case where it crosses a page boundary.
>
> I'm wondering if the better fix is to allow the GPC to map more than
> one page.
I agree that KVM should drop the "no page splits" restriction, but I don't think
that would necessarily solve all KVM Xen issues. KVM still needs to precisely
handle the "correct" struct size, e.g. if one of the structs is placed at the very
end of the page such that the smaller compat version doesn't split a page but the
64-bit version does.