Message-ID: <CAMkAt6oGtMukLM223x5Lj3A1DpQ=4ZxZ8U8L72=eKuPcjZKSbQ@mail.gmail.com>
Date: Mon, 29 Aug 2022 09:45:57 -0600
From: Peter Gonda <pgonda@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: kvm list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Marc Orr <marcorr@...gle.com>,
Michael Roth <michael.roth@....com>,
"Lendacky, Thomas" <thomas.lendacky@....com>,
Joerg Roedel <joro@...tes.org>,
Mingwei Zhang <mizhang@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Andrew Jones <andrew.jones@...ux.dev>,
Vishal Annapurve <vannapurve@...gle.com>
Subject: Re: [V3 08/11] KVM: selftests: add library for creating/interacting
with SEV guests
On Wed, Aug 17, 2022 at 6:33 PM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Wed, Aug 10, 2022, Peter Gonda wrote:
> > +enum {
> > + SEV_GSTATE_UNINIT = 0,
> > + SEV_GSTATE_LUPDATE,
> > + SEV_GSTATE_LSECRET,
> > + SEV_GSTATE_RUNNING,
> > +};
> > +
>
> Name the enum, e.g. enum sev_guest_state?
Done
>
> And s/GSTATE/GUEST? Ugh, AMD's documentation uses GSTATE.
>
> But looking at the docs, I only see GSTATE_LAUNCH? Or does SEV have different
> status codes than -ES and/or -SNP?
SEV and SEV-ES have the same states; SNP has a different set.
See "Table 43. GSTATE Finite State Machine":
https://www.amd.com/system/files/TechDocs/55766_SEV-KM_API_Specification.pdf
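For v4 I'm thinking something along these lines (exact prefix still open
to bikeshedding, could equally be SEV_GUEST_*); it covers the shared
SEV/SEV-ES states, and SNP can get its own enum when those tests land:

enum sev_guest_state {
	SEV_GUEST_STATE_UNINIT = 0,
	SEV_GUEST_STATE_LUPDATE,
	SEV_GUEST_STATE_LSECRET,
	SEV_GUEST_STATE_RUNNING,
};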
>
> > +struct kvm_vm *sev_get_vm(struct sev_vm *sev)
> > +{
> > + return sev->vm;
>
> Why bother with a wrapper?
>
> > +}
> > +
> > +uint8_t sev_get_enc_bit(struct sev_vm *sev)
> > +{
>
> Same here, IMO it just obfuscates code with no real benefit. And it's inconsistent,
> e.g. why have a wrapper for enc_bit but not sev->fd?
>
Removed.
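With the accessors gone, callers just use the fields directly, e.g.
something like this (purely a hypothetical example, not a function from
the series):

/* Hypothetical helper: sets the C-bit on a GPA via sev->enc_bit directly. */
static vm_paddr_t sev_gpa_with_c_bit(struct sev_vm *sev, vm_paddr_t gpa)
{
	return gpa | (1ULL << sev->enc_bit);
}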
> > + return sev->enc_bit;
> > +}
> > +
> > +void sev_ioctl(int sev_fd, int cmd, void *data)
> > +{
> > + int ret;
> > + struct sev_issue_cmd arg;
> > +
> > + arg.cmd = cmd;
> > + arg.data = (unsigned long)data;
> > + ret = ioctl(sev_fd, SEV_ISSUE_CMD, &arg);
> > + TEST_ASSERT(ret == 0,
> > + "SEV ioctl %d failed, error: %d, fw_error: %d",
> > + cmd, ret, arg.error);
> > +}
> > +
> > +void kvm_sev_ioctl(struct sev_vm *sev, int cmd, void *data)
> > +{
> > + struct kvm_sev_cmd arg = {0};
> > + int ret;
> > +
> > + arg.id = cmd;
> > + arg.sev_fd = sev->fd;
> > + arg.data = (__u64)data;
> > +
> > + ret = ioctl(sev->vm->fd, KVM_MEMORY_ENCRYPT_OP, &arg);
> > + TEST_ASSERT(ret == 0,
> > + "SEV KVM ioctl %d failed, rc: %i errno: %i (%s), fw_error: %d",
> > + cmd, ret, errno, strerror(errno), arg.error);
> > +}
> > +
> > +/* Local helpers. */
> > +
> > +static void
>
> Don't split here, e.g. a grep/search for the function, should also show the return
> type and any attributes, e.g. "static" vs. something else is typically much more
> interesting than the parameters (and parameters is not a fully solvable problem).
>
> > +sev_register_user_region(struct sev_vm *sev, void *hva, uint64_t size)
>
> Align like so:
>
> static void sev_register_user_region(struct sev_vm *sev, void *hva,
> uint64_t size)
>
> or maybe even let it poke out.
Ah, never thought about that, sounds good. Done.
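For reference, the "poke out" variant would just be:

static void sev_register_user_region(struct sev_vm *sev, void *hva, uint64_t size)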
>
> > +{
> > + struct kvm_enc_region range = {0};
> > + int ret;
> > +
> > + pr_debug("%s: hva: %p, size: %lu\n", __func__, hva, size);
> > +
> > + range.addr = (__u64)hva;
> > + range.size = size;
> > +
> > + ret = ioctl(sev->vm->fd, KVM_MEMORY_ENCRYPT_REG_REGION, &range);
> > + TEST_ASSERT(ret == 0, "failed to register user range, errno: %i\n", errno);
> > +}
> > +
> > +static void
> > +sev_encrypt_phy_range(struct sev_vm *sev, vm_paddr_t gpa, uint64_t size)
>
> Same thing here.
>
> > +{
> > + struct kvm_sev_launch_update_data ksev_update_data = {0};
> > +
> > + pr_debug("%s: addr: 0x%lx, size: %lu\n", __func__, gpa, size);
> > +
> > + ksev_update_data.uaddr = (__u64)addr_gpa2hva(sev->vm, gpa);
> > + ksev_update_data.len = size;
> > +
> > + kvm_sev_ioctl(sev, KVM_SEV_LAUNCH_UPDATE_DATA, &ksev_update_data);
> > +}
> > +
> > +static void sev_encrypt(struct sev_vm *sev)
> > +{
> > + const struct sparsebit *enc_phy_pages;
> > + struct kvm_vm *vm = sev->vm;
> > + sparsebit_idx_t pg = 0;
> > + vm_paddr_t gpa_start;
> > + uint64_t memory_size;
> > +
> > + /* Only memslot 0 supported for now. */
>
> Eww. Haven't looked at this in depth, but is there a way to avoid hardcoding the
> memslot in this code?
Done.
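Rough shape of what I have in mind (sketch only; the region-iteration
helper and the per-region fields below are placeholder names, not the
actual selftest API):

static void sev_encrypt(struct sev_vm *sev)
{
	struct kvm_vm *vm = sev->vm;
	struct userspace_mem_region *region;
	sparsebit_idx_t pg;

	/*
	 * Walk every memslot instead of assuming memslot 0. NB: the
	 * iteration helper and region fields are placeholders here.
	 */
	for_each_user_mem_region(vm, region) {
		for (pg = 0; pg < region->npages; pg++) {
			if (!sparsebit_is_set(region->encrypted_phy_pages, pg))
				continue;
			sev_encrypt_phy_range(sev,
					      region->base_gpa + pg * vm->page_size,
					      vm->page_size);
		}
	}
}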
>
> > +void sev_vm_launch(struct sev_vm *sev)
> > +{
> > + struct kvm_sev_launch_start ksev_launch_start = {0};
> > + struct kvm_sev_guest_status ksev_status = {0};
>
> Doesn't " = {};" do the same thing? And for the status, and any other cases where
> userspace is reading, wouldn't it be better from a test coverage perspective to
> _not_ zero the data? Hmm, though I suppose false passes are possible in that case...
We want to zero out kvm_sev_launch_start because it has fields we
explicitly want zeroed: the dh_* and session_* fields need to either be
set up correctly or left at zero. We can add a test for those later if we
want, but it would mostly be testing userspace + the PSP.
Agreed that we can leave the status struct un-zeroed.
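I.e. roughly this for v4:

	struct kvm_sev_launch_start ksev_launch_start = {0};
	/* Deliberately left un-zeroed so we test that KVM actually fills it in. */
	struct kvm_sev_guest_status ksev_status;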
>
> > + /* Need to use ucall_shared for synchronization. */
> > + //ucall_init_ops(sev_get_vm(sev), NULL, &ucall_ops_halt);
>
> Can this be deleted? If not, what's up?
Whoops, removed.