Message-ID: <ZefIm+plPAzUww51@yzhao56-desk.sh.intel.com>
Date: Wed, 6 Mar 2024 09:36:27 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Sagi Shahar <sagis@...gle.com>, <linux-kselftest@...r.kernel.org>,
Ackerley Tng <ackerleytng@...gle.com>, Ryan Afranji <afranji@...gle.com>,
Erdem Aktas <erdemaktas@...gle.com>, Isaku Yamahata
<isaku.yamahata@...el.com>, Sean Christopherson <seanjc@...gle.com>, "Paolo
Bonzini" <pbonzini@...hat.com>, Shuah Khan <shuah@...nel.org>, Peter Gonda
<pgonda@...gle.com>, Haibo Xu <haibo1.xu@...el.com>, Chao Peng
<chao.p.peng@...ux.intel.com>, Vishal Annapurve <vannapurve@...gle.com>,
Roger Wang <runanwang@...gle.com>, Vipin Sharma <vipinsh@...gle.com>,
<jmattson@...gle.com>, <dmatlack@...gle.com>, <linux-kernel@...r.kernel.org>,
<kvm@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: [RFC PATCH v5 23/29] KVM: selftests: TDX: Add shared memory test
On Fri, Mar 01, 2024 at 08:02:43PM +0800, Yan Zhao wrote:
> > +void guest_shared_mem(void)
> > +{
> > + uint32_t *test_mem_shared_gva =
> > + (uint32_t *)TDX_SHARED_MEM_TEST_SHARED_GVA;
> > +
> > + uint64_t placeholder;
> > + uint64_t ret;
> > +
> > + /* Map gpa as shared */
> > + ret = tdg_vp_vmcall_map_gpa(test_mem_shared_gpa, PAGE_SIZE,
> > + &placeholder);
> > + if (ret)
> > + tdx_test_fatal_with_data(ret, __LINE__);
> > +
> > + *test_mem_shared_gva = TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE;
> > +
> > + /* Exit so host can read shared value */
> > + ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> > + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> > + &placeholder);
> > + if (ret)
> > + tdx_test_fatal_with_data(ret, __LINE__);
> > +
> > + /* Read value written by host and send it back out for verification */
> > + ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> > + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> > + (uint64_t *)test_mem_shared_gva);
> > + if (ret)
> > + tdx_test_fatal_with_data(ret, __LINE__);
> > +}
> > +
> > +int verify_shared_mem(void)
> > +{
> > + struct kvm_vm *vm;
> > + struct kvm_vcpu *vcpu;
> > +
> > + vm_vaddr_t test_mem_private_gva;
> > + uint32_t *test_mem_hva;
> > +
> > + vm = td_create();
> > + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> > + vcpu = td_vcpu_add(vm, 0, guest_shared_mem);
> > +
> > + /*
> > + * Set up shared memory page for testing by first allocating as private
> > + * and then mapping the same GPA again as shared. This way, the TD does
> > + * not have to remap its page tables at runtime.
> > + */
> > + test_mem_private_gva = vm_vaddr_alloc(vm, vm->page_size,
> > + TDX_SHARED_MEM_TEST_PRIVATE_GVA);
> > + TEST_ASSERT_EQ(test_mem_private_gva, TDX_SHARED_MEM_TEST_PRIVATE_GVA);
> > +
> > + test_mem_hva = addr_gva2hva(vm, test_mem_private_gva);
> > + TEST_ASSERT(test_mem_hva != NULL,
> > + "Guest address not found in guest memory regions\n");
> > +
> > + test_mem_private_gpa = addr_gva2gpa(vm, test_mem_private_gva);
> > + virt_pg_map_shared(vm, TDX_SHARED_MEM_TEST_SHARED_GVA,
> > + test_mem_private_gpa);
> > +
> > + test_mem_shared_gpa = test_mem_private_gpa | BIT_ULL(vm->pa_bits - 1);
> > + sync_global_to_guest(vm, test_mem_private_gpa);
> > + sync_global_to_guest(vm, test_mem_shared_gpa);
> > +
> > + td_finalize(vm);
> > +
> > + printf("Verifying shared memory accesses for TDX\n");
> > +
> > + /* Begin guest execution; guest writes to shared memory. */
> > + printf("\t ... Starting guest execution\n");
> > +
> > + /* Handle map gpa as shared */
> > + td_vcpu_run(vcpu);
> > + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> The first VM-Exit should be caused by the MapGPA tdvmcall, so it is
> impossible for it to be a guest failure.
>
Ah, if KVM has a bug and returns an error for the guest's MapGPA tdvmcall
without exiting to user space, then it is possible to see a guest failure here.
> It would be better to move this TDX_TEST_CHECK_GUEST_FAILURE(vcpu) line to
> after the next td_vcpu_run().
So it looks like this needs to be checked after every vcpu run.
Without the check (as below), the selftest would not be able to print out
the guest-reported fatal error.
> > +
> > + td_vcpu_run(vcpu);
> > + TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> > + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> > + TEST_ASSERT_EQ(*test_mem_hva, TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE);
> > +
> > + *test_mem_hva = TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE;
> > + td_vcpu_run(vcpu);
> > + TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> > + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> > + TEST_ASSERT_EQ(
> > + *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
> > + TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE);
> > +
> > + printf("\t ... PASSED\n");
> > +
> > + kvm_vm_free(vm);
> > +
> > + return 0;
> > +}