Message-ID: <ZEqZ5w7EvzUc8Siv@google.com>
Date: Thu, 27 Apr 2023 08:51:03 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Vishal Annapurve <vannapurve@...gle.com>
Cc: Zhi Wang <zhi.wang.linux@...il.com>, isaku.yamahata@...el.com,
dmatlack@...gle.com, erdemaktas@...gle.com,
isaku.yamahata@...il.com, kai.huang@...el.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, pbonzini@...hat.com, sagis@...gle.com
Subject: Re: [PATCH v13 098/113] KVM: TDX: Handle TDX PV map_gpa hypercall

On Wed, Apr 26, 2023, Vishal Annapurve wrote:
> On Wed, Apr 19, 2023 at 3:38 AM Zhi Wang <zhi.wang.linux@...il.com> wrote:
> >
> > On Tue, 18 Apr 2023 19:09:04 +0000
> > Vishal Annapurve <vannapurve@...gle.com> wrote:
> >
> > > > +static int tdx_map_gpa(struct kvm_vcpu *vcpu)
> > > > +{
> > > > + struct kvm *kvm = vcpu->kvm;
> > > > + gpa_t gpa = tdvmcall_a0_read(vcpu);
> > > > + gpa_t size = tdvmcall_a1_read(vcpu);
> > > > + gpa_t end = gpa + size;
> > > > +
> > > > + if (!IS_ALIGNED(gpa, PAGE_SIZE) || !IS_ALIGNED(size, PAGE_SIZE) ||
> > > > + end < gpa ||
> > > > + end > kvm_gfn_shared_mask(kvm) << (PAGE_SHIFT + 1) ||
> > > > + kvm_is_private_gpa(kvm, gpa) != kvm_is_private_gpa(kvm, end)) {
> > > > + tdvmcall_set_return_code(vcpu, TDG_VP_VMCALL_INVALID_OPERAND);
> > > > + return 1;
> > > > + }
> > > > +
> > > > + return tdx_vp_vmcall_to_user(vcpu);
> > >
> > > This will result into exits to userspace for MMIO regions as well. Does it make
> > > sense to only exit to userspace for guest physical memory regions backed by
> > > memslots?

No, KVM should exit always, e.g. userspace _could_ choose to create a private
memslot in response to the guest's request.

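E.g. something like this on the userspace side (completely untested sketch;
the MapGPA exit layout is whatever this series ends up defining, and
KVM_SET_MEMORY_ATTRIBUTES + KVM_MEMORY_ATTRIBUTE_PRIVATE are assumed from
the fd-based private memory (UPM) work, they are not provided by this patch):

#include <stdbool.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Illustrative only; the function and its callers are made up. */
static int handle_map_gpa(int vm_fd, uint64_t gpa, uint64_t size,
			  bool to_private)
{
	struct kvm_memory_attributes attrs = {
		.address    = gpa,
		.size       = size,
		.attributes = to_private ? KVM_MEMORY_ATTRIBUTE_PRIVATE : 0,
	};

	/*
	 * Flip the private<->shared attribute for the range; KVM zaps any
	 * stale mappings and refaults them with the new attribute.  This is
	 * also the natural point for userspace to create (or delete) a
	 * private memslot if it chooses to honor the guest's request that
	 * way instead.
	 */
	return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
}
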
> > I think this is necessary as when passing a PCI device to a TD, the guest
> > needs to convert a MMIO region from private to shared, which is not backed
> > by memslots.

This isn't entirely accurate. If you're talking about emulated MMIO, then there
is no memslot. But the "passing a PCI device" makes it sound like you're talking
about device passthrough, in which case there is a memslot that points at an actual
MMIO region in the host platform.

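For the passthrough case, userspace conventionally does something along these
lines (illustrative sketch; the slot number is a placeholder and vfio_bar_hva
would come from mmap() on the VFIO device fd):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Point a memslot at host MMIO, i.e. at the mmap()'d VFIO BAR. */
static int map_passthrough_bar(int vm_fd, uint64_t shared_gpa,
			       uint64_t bar_size, void *vfio_bar_hva)
{
	struct kvm_userspace_memory_region region = {
		.slot            = 10,		/* placeholder slot id */
		.guest_phys_addr = shared_gpa,	/* shared-GPA alias */
		.memory_size     = bar_size,
		.userspace_addr  = (uint64_t)(uintptr_t)vfio_bar_hva,
	};

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}
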
In either case, conversions should be unnecessary as MMIO regions should not be
enumerated to the guest as supporting encryption, i.e. the guest should know from
time zero that those regions are shared. If we end up with something like Hyper-V's
SVSM-based paravisor, then there might be private emulated MMIO, but such a setup
would also come with its own brand of enlightenment in the guest.

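FWIW, that's conceptually what the Linux guest already does: ioremap() maps
MMIO with the decrypted/shared attribute from the start, so no MapGPA
conversion is needed. Roughly (this is a sketch, not the actual kernel code):

#include <linux/io.h>

/* Guest-side sketch: MMIO is shared from time zero, never converted. */
void __iomem *guest_map_mmio(phys_addr_t pa, size_t size)
{
	/*
	 * MMIO is not enumerated as supporting encryption, so the mapping
	 * gets pgprot_decrypted(), i.e. the shared bit, on the very first
	 * access; no private->shared MapGPA TDVMCALL is required.
	 */
	return ioremap(pa, size);
}
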
> KVM could internally handle conversion of regions not backed by memslots.

No, KVM should never internally handle conversions, at least not in the initial
implementation. And if KVM ever does go down this route, it needs dedicated
support in KVM's uAPI since userspace needs to be kept in the loop, i.e. needs
to opt-in and be notified of any conversions.
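
Purely hypothetical, but the opt-in would presumably look like the standard
capability dance, e.g. (KVM_CAP_IMPLICIT_CONVERSIONS is a made-up name,
nothing like it exists today):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Entirely hypothetical capability number, for illustration only. */
#define KVM_CAP_IMPLICIT_CONVERSIONS	999

static int enable_implicit_conversions(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap     = KVM_CAP_IMPLICIT_CONVERSIONS,
		/*
		 * Opt in; KVM would then be required to notify userspace of
		 * every conversion it performs on the guest's behalf.
		 */
		.args[0] = 1,
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}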