Message-ID: <87d0f6yzd3.fsf@vitty.brq.redhat.com>
Date: Wed, 09 Oct 2019 12:42:00 +0200
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Jim Mattson <jmattson@...gle.com>
Subject: Re: [PATCH] selftests: kvm: fix sync_regs_test with newer gccs

Paolo Bonzini <pbonzini@...hat.com> writes:
> On 08/10/19 20:36, Sean Christopherson wrote:
>> On Tue, Oct 08, 2019 at 08:08:08PM +0200, Vitaly Kuznetsov wrote:
>>> Commit 204c91eff798a ("KVM: selftests: do not blindly clobber registers in
>>> guest asm") was intended to make the test more gcc-proof; however, the result
>>> is exactly the opposite: on newer gccs (e.g. 8.2.1) the test breaks with
>>>
>>> ==== Test Assertion Failure ====
>>> x86_64/sync_regs_test.c:168: run->s.regs.regs.rbx == 0xBAD1DEA + 1
>>> pid=14170 tid=14170 - Invalid argument
>>> 1 0x00000000004015b3: main at sync_regs_test.c:166 (discriminator 6)
>>> 2 0x00007f413fb66412: ?? ??:0
>>> 3 0x000000000040191d: _start at ??:?
>>> rbx sync regs value incorrect 0x1.
>>>
>>> Apparently, the compiler is still free to play games with registers even
>>> when they have variables attached.
>>>
>>> Rewrite the guest code as 'asm volatile', embedding the ucall there and
>>> making sure rbx is preserved.
>>>
>>> Fixes: 204c91eff798a ("KVM: selftests: do not blindly clobber registers in guest asm")
>>> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
>>> ---
>>> .../selftests/kvm/x86_64/sync_regs_test.c | 21 ++++++++++---------
>>> 1 file changed, 11 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/tools/testing/selftests/kvm/x86_64/sync_regs_test.c b/tools/testing/selftests/kvm/x86_64/sync_regs_test.c
>>> index 11c2a70a7b87..5c8224256294 100644
>>> --- a/tools/testing/selftests/kvm/x86_64/sync_regs_test.c
>>> +++ b/tools/testing/selftests/kvm/x86_64/sync_regs_test.c
>>> @@ -22,18 +22,19 @@
>>>
>>> #define VCPU_ID 5
>>>
>>> +#define UCALL_PIO_PORT ((uint16_t)0x1000)
>>> +
>>> +/*
>>> + * ucall is embedded here to protect against compiler reshuffling registers
>>> + * before calling a function. In this test we only need to get KVM_EXIT_IO
>>> + * vmexit and preserve RBX, no additional information is needed.
>>> + */
>>> void guest_code(void)
>>> {
>>> - /*
>>> - * use a callee-save register, otherwise the compiler
>>> - * saves it around the call to GUEST_SYNC.
>>> - */
>>> - register u32 stage asm("rbx");
>>> - for (;;) {
>>> - GUEST_SYNC(0);
>>> - stage++;
>>> - asm volatile ("" : : "r" (stage));
>>> - }
>>> + asm volatile("1: in %[port], %%al\n"
>>> + "add $0x1, %%rbx\n"
>>> + "jmp 1b"
>>> + : : [port] "d" (UCALL_PIO_PORT) : "rax", "rbx");
>>> }
>>
>> To make the code truly bulletproof, is it possible to rename guest_code()
>> to guest_code_wrapper() and then export 1: as guest_code? VM-Enter will
>> jump directly to the relevant code and gcc can't touch rbx. E.g.:
>>
>> asm volatile("1: ..."
>> ".global guest_code"
>> "guest_code: " _ASM_PTR " 1b");
>>
>> Not sure if that works with how the selftests are compiled. It may also
>> be possible to simply replace '1' with 'guest_code'.
>
> There is no practical difference with Vitaly's patch. The first
> _vcpu_run has no pre-/post-conditions on the value of %rbx:
>
I think what Sean was suggesting is to prevent GCC from inserting
anything (and thus clobbering RBX) between the entry to guest_code() and
the beginning of the 'asm volatile' block, by making the guest entry
point a label *inside* the 'asm volatile' block instead.
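
Something along these lines, if I understood the idea correctly (an
untested sketch; the guest_code_wrapper() name and the extern
declaration are only there for illustration):

	/*
	 * Untested sketch: expose the first instruction of the loop as the
	 * guest entry point, so no compiler-generated code can run (and
	 * clobber RBX) before it; guest_code_wrapper() only exists to give
	 * the inline asm a home in .text.
	 */
	extern void guest_code(void);

	void guest_code_wrapper(void)
	{
		asm volatile(".global guest_code\n"
			     "guest_code:\n"
			     "1: in %[port], %%al\n"
			     "add $0x1, %%rbx\n"
			     "jmp 1b"
			     : : [port] "d" (UCALL_PIO_PORT) : "rax", "rbx");
	}

The test would keep passing 'guest_code' as the guest entry point, only
now it points straight at the in/add/jmp loop.
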
>
> run->kvm_valid_regs = TEST_SYNC_FIELDS;
> rv = _vcpu_run(vm, VCPU_ID);
> TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
> "Unexpected exit reason: %u (%s),\n",
> run->exit_reason,
> exit_reason_str(run->exit_reason));
>
> /*
> * Then it goes on comparing regs/sregs/events, but does not
> * check for specific values.
> */
>
> As soon as that first _vcpu_run succeeds, you're stuck in the in/add/jmp
> loop and the compiler can't trick you anymore.
>
> So, I'm queuing the patch.
>
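Right, and just for the record, the check that was tripping looks
roughly like this (paraphrasing sync_regs_test.c from memory, exact
flags may differ): rbx is set through sync_regs, the vCPU is run again
and the guest loop is expected to have incremented it exactly once by
the next KVM_EXIT_IO:

	run->s.regs.regs.rbx = 0xBAD1DEA;
	run->kvm_valid_regs = TEST_SYNC_FIELDS;
	run->kvm_dirty_regs = KVM_SYNC_X86_REGS;
	rv = _vcpu_run(vm, VCPU_ID);
	TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
		    "Unexpected exit reason: %u (%s),\n",
		    run->exit_reason, exit_reason_str(run->exit_reason));
	TEST_ASSERT(run->s.regs.regs.rbx == 0xBAD1DEA + 1,
		    "rbx sync regs value incorrect 0x%llx.",
		    run->s.regs.regs.rbx);
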
Thanks!
--
Vitaly