Date:	Tue, 5 Apr 2016 08:56:12 -0700
From:	David Matlack <dmatlack@...gle.com>
To:	Paolo Bonzini <pbonzini@...hat.com>
Cc:	kvm list <kvm@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Andy Lutomirski <luto@...capital.net>, stable@...r.kernel.org
Subject: Re: [PATCH] kvm: x86: do not leak guest xcr0 into host interrupt handlers

On Tue, Apr 5, 2016 at 4:28 AM, Paolo Bonzini <pbonzini@...hat.com> wrote:
>
...
>
> While running my acceptance tests, in one case I got one CPU whose xcr0
> had leaked into the host.  This showed up as a SIGILL in strncasecmp's
> AVX code, and a simple program confirmed it:
>
>     $ cat xgetbv.c
>     #include <stdio.h>
>     int main(void)
>     {
>         unsigned xcr0_h, xcr0_l;
>         asm("xgetbv" : "=d"(xcr0_h), "=a"(xcr0_l) : "c"(0));
>         printf("%08x:%08x\n", xcr0_h, xcr0_l);
>     }
>     $ gcc xgetbv.c -O2
>     $ for i in `seq 0 55`; do echo $i `taskset -c $i ./a.out`; done|grep -v 007
>     19 00000000:00000003
>
> I'm going to rerun the tests without this patch, as it seems the most
> likely culprit, and leave it out of the pull request if they pass.

Agreed, this is a very likely culprit. I think I see one way the
guest's xcr0 can leak into the host. I will do some testing and send
another version. Thanks.
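
To spell out how I read the intended flow with this patch applied (my
paraphrase of the hunks below, not the literal code): the guest's xcr0
should only ever be live while interrupts are disabled around guest
entry, so host interrupt handlers never run with it loaded. Roughly:

    /* sketch of vcpu_enter_guest() with this patch, simplified */
    if (vcpu->fpu_active)
            kvm_load_guest_fpu(vcpu);        /* no longer touches xcr0 */
    vcpu->mode = IN_GUEST_MODE;
    /* ... */
    local_irq_disable();
    kvm_load_guest_xcr0(vcpu);               /* guest xcr0 live, IRQs off */

    if (vcpu->mode == EXITING_GUEST_MODE || vcpu->requests
        || need_resched() || signal_pending(current)) {
            /* early exit; see my comment inline below */
    }

    /* ... enter and run the guest, then on exit ... */

    vcpu->mode = OUTSIDE_GUEST_MODE;
    smp_wmb();
    kvm_put_guest_xcr0(vcpu);                /* host xcr0 back before... */
    kvm_x86_ops->handle_external_intr(vcpu); /* ...IRQs are re-enabled */

More inline below.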

>
> Paolo
>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index e260ccb..8df1167 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -700,7 +700,6 @@ static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
>>               if ((xcr0 & XFEATURE_MASK_AVX512) != XFEATURE_MASK_AVX512)
>>                       return 1;
>>       }
>> -     kvm_put_guest_xcr0(vcpu);
>>       vcpu->arch.xcr0 = xcr0;
>>
>>       if ((xcr0 ^ old_xcr0) & XFEATURE_MASK_EXTEND)
>> @@ -6590,8 +6589,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>>       kvm_x86_ops->prepare_guest_switch(vcpu);
>>       if (vcpu->fpu_active)
>>               kvm_load_guest_fpu(vcpu);
>> -     kvm_load_guest_xcr0(vcpu);
>> -
>>       vcpu->mode = IN_GUEST_MODE;
>>
>>       srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
>> @@ -6607,6 +6604,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>>
>>       local_irq_disable();
>>
>> +     kvm_load_guest_xcr0(vcpu);
>> +
>>       if (vcpu->mode == EXITING_GUEST_MODE || vcpu->requests
>>           || need_resched() || signal_pending(current)) {

Here, after we've loaded the guest xcr0, if we take this early exit,
we return from vcpu_enter_guest with the guest's xcr0 still loaded
(a possible fix is sketched right after this quoted hunk).

>>               vcpu->mode = OUTSIDE_GUEST_MODE;
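
One way to close that window (just a sketch on my end, untested, and
paraphrasing the surrounding code rather than quoting it exactly) would
be to restore the host xcr0 on this early-exit path too, before
interrupts are re-enabled:

    if (vcpu->mode == EXITING_GUEST_MODE || vcpu->requests
        || need_resched() || signal_pending(current)) {
            vcpu->mode = OUTSIDE_GUEST_MODE;
            smp_wmb();
            kvm_put_guest_xcr0(vcpu);        /* put host xcr0 back */
            local_irq_enable();
            /* ... rest of the existing cancellation path ... */
    }
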
>> @@ -6667,6 +6666,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>>       vcpu->mode = OUTSIDE_GUEST_MODE;
>>       smp_wmb();
>>
>> +     kvm_put_guest_xcr0(vcpu);
>> +
>>       /* Interrupt is enabled by handle_external_intr() */
>>       kvm_x86_ops->handle_external_intr(vcpu);
>>
>> @@ -7314,7 +7315,6 @@ void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
>>        * and assume host would use all available bits.
>>        * Guest xcr0 would be loaded later.
>>        */
>> -     kvm_put_guest_xcr0(vcpu);
>>       vcpu->guest_fpu_loaded = 1;
>>       __kernel_fpu_begin();
>>       __copy_kernel_to_fpregs(&vcpu->arch.guest_fpu.state);
>> @@ -7323,8 +7323,6 @@ void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
>>
>>  void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
>>  {
>> -     kvm_put_guest_xcr0(vcpu);
>> -
>>       if (!vcpu->guest_fpu_loaded) {
>>               vcpu->fpu_counter = 0;
>>               return;
>>
