Message-ID: <CAGtprH8TQ9ep5KQ5-U1PUBmzQQC7fBOLOfn2mNgnDLMO70ZYjg@mail.gmail.com>
Date:   Mon, 14 Nov 2022 17:53:12 -0800
From:   Vishal Annapurve <vannapurve@...gle.com>
To:     Peter Gonda <pgonda@...gle.com>
Cc:     x86@...nel.org, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-kselftest@...r.kernel.org, pbonzini@...hat.com,
        vkuznets@...hat.com, wanpengli@...cent.com, jmattson@...gle.com,
        joro@...tes.org, tglx@...utronix.de, mingo@...hat.com,
        bp@...en8.de, dave.hansen@...ux.intel.com, hpa@...or.com,
        shuah@...nel.org, yang.zhong@...el.com, ricarkol@...gle.com,
        aaronlewis@...gle.com, wei.w.wang@...el.com,
        kirill.shutemov@...ux.intel.com, corbet@....net, hughd@...gle.com,
        jlayton@...nel.org, bfields@...ldses.org,
        akpm@...ux-foundation.org, chao.p.peng@...ux.intel.com,
        yu.c.zhang@...ux.intel.com, jun.nakajima@...el.com,
        dave.hansen@...el.com, michael.roth@....com, qperret@...gle.com,
        steven.price@....com, ak@...ux.intel.com, david@...hat.com,
        luto@...nel.org, vbabka@...e.cz, marcorr@...gle.com,
        erdemaktas@...gle.com, nikunj@....com, seanjc@...gle.com,
        diviness@...gle.com, maz@...nel.org, dmatlack@...gle.com,
        axelrasmussen@...gle.com, maciej.szmigiero@...cle.com,
        mizhang@...gle.com, bgardon@...gle.com, ackerleytng@...gle.com
Subject: Re: [V1 PATCH 4/6] KVM: selftests: x86: Execute VMs with private memory

On Mon, Nov 14, 2022 at 11:37 AM Peter Gonda <pgonda@...gle.com> wrote:
>...
> > +static void handle_vm_exit_map_gpa_hypercall(struct kvm_vm *vm,
> > +                               struct kvm_vcpu *vcpu)
> > +{
> > +       uint64_t gpa, npages, attrs, size;
> > +
> > +       TEST_ASSERT(vcpu->run->hypercall.nr == KVM_HC_MAP_GPA_RANGE,
> > +               "Unhandled Hypercall %lld\n", vcpu->run->hypercall.nr);
> > +       gpa = vcpu->run->hypercall.args[0];
> > +       npages = vcpu->run->hypercall.args[1];
> > +       size = npages << MIN_PAGE_SHIFT;
> > +       attrs = vcpu->run->hypercall.args[2];
> > +       pr_info("Explicit conversion off 0x%lx size 0x%lx to %s\n", gpa, size,
> > +               (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED) ? "private" : "shared");
> > +
> > +       if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED)
> > +               vm_allocate_private_mem(vm, gpa, size);
> > +       else
> > +               vm_unback_private_mem(vm, gpa, size);
> > +
> > +       vcpu->run->hypercall.ret = 0;
> > +}
> > +
> > +static void vcpu_work(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
> > +       struct vm_setup_info *info)
> > +{
> > +       struct ucall uc;
> > +       uint64_t cmd;
> > +
> > +       /*
> > +        * Loop until the guest is done.
> > +        */
> > +
> > +       while (true) {
> > +               vcpu_run(vcpu);
> > +
> > +               if (vcpu->run->exit_reason == KVM_EXIT_IO) {
> > +                       cmd = get_ucall(vcpu, &uc);
> > +                       if (cmd != UCALL_SYNC)
> > +                               break;
> > +
> > +                       TEST_ASSERT(info->ioexit_cb, "ioexit cb not present");
> > +                       info->ioexit_cb(vm, uc.args[1]);
> > +                       continue;
> > +               }
>
> Should this be integrated into the ucall library directly somehow?
> That way users of VMs with private memory do not need special
> handling?
>
> After Sean's series:
> https://lore.kernel.org/linux-arm-kernel/20220825232522.3997340-3-seanjc@google.com/
> we have a common get_ucall() that this check could be integrated into?
>
> > +
> > +               if (vcpu->run->exit_reason == KVM_EXIT_HYPERCALL) {
> > +                       handle_vm_exit_map_gpa_hypercall(vm, vcpu);
> > +                       continue;
> > +               }
> > +
> > +               TEST_FAIL("Unhandled VCPU exit reason %d\n",
> > +                       vcpu->run->exit_reason);
> > +               break;
> > +       }
> > +
> > +       if (vcpu->run->exit_reason == KVM_EXIT_IO && cmd == UCALL_ABORT)
> > +               TEST_FAIL("%s at %s:%ld, val = %lu", (const char *)uc.args[0],
> > +                         __FILE__, uc.args[1], uc.args[2]);
> > +}
> > +
> > +/*
> > + * Execute guest vm with private memory memslots.
> > + *
> > + * Input Args:
> > + *   info - pointer to a structure containing information about setting up a VM
> > + *     with private memslots
> > + *
> > + * Output Args: None
> > + *
> > + * Return: None
> > + *
> > + * Function called by host userspace logic in selftests to execute guest vm
> > + * logic. It will install test_mem_slot : containing the region of memory that
> > + * would be used to test private/shared memory accesses to a memory backed by
> > + * private memslots
> > + */
> > +void execute_vm_with_private_test_mem(struct vm_setup_info *info)
> > +{
> > +       struct kvm_vm *vm;
> > +       struct kvm_enable_cap cap;
> > +       struct kvm_vcpu *vcpu;
> > +       uint64_t test_area_gpa, test_area_size;
> > +       struct test_setup_info *test_info = &info->test_info;
> > +
> > +       TEST_ASSERT(info->guest_fn, "guest_fn not present");
> > +       vm = vm_create_with_one_vcpu(&vcpu, info->guest_fn);
>
> I am a little confused with how this library is going to work for SEV
> VMs that want to have UPM private memory eventually.
>
> Why should users of UPM be forced to use this very specific VM
> creation and vCPU run loop. In the patch
> https://lore.kernel.org/lkml/20220829171021.701198-1-pgonda@google.com/T/#m033ebc32df47a172bc6c46d4398b6c4387b7934d
> SEV VMs need to be created specially vm_sev_create_with_one_vcpu() but
> then callers can run the VM's vCPUs like other selftests.
>
> How do you see this working with SEV VMs?
>

This VM creation method can be useful for running VMs whose execution
may invoke the MAP_GPA hypercall to change memory attributes. A new VM
creation method specific to SEV VMs can be introduced.

I earlier tried to reuse this framework for SEV VM selftests via:
https://lore.kernel.org/lkml/20220830224259.412342-8-vannapurve@google.com/T/#m8164d3111c9a17ebab77f01635df8930207cc65d

These changes will need to be refreshed on top of this updated series, though.
