Message-ID: <CANRm+CxKWuuzyu4Yi-phUR3RpVb0bx_=r3FoJvZMm1ii-8-wKw@mail.gmail.com>
Date:   Thu, 12 Jul 2018 09:39:09 +0800
From:   Wanpeng Li <kernellwp@...il.com>
To:     Vitaly Kuznetsov <vkuznets@...hat.com>
Cc:     kvm <kvm@...r.kernel.org>, Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krcmar <rkrcmar@...hat.com>,
        "the arch/x86 maintainers" <x86@...nel.org>,
        Andy Lutomirski <luto@...nel.org>, ldv@...linux.org,
        yamato@...hat.com, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] x86/kvm/vmx: don't read current->thread.{fs,gs}base of
 legacy tasks

On Thu, 12 Jul 2018 at 08:07, Vitaly Kuznetsov <vkuznets@...hat.com> wrote:
>
> When we switched from doing rdmsr() to reading FS/GS base values from
> current->thread we completely forgot about legacy 32-bit userspaces which
> we still support in KVM (why?). task->thread.{fsbase,gsbase} are only
> synced for 64-bit processes, calling save_fsgs_for_kvm() and using
> its result from current is illegal for legacy processes.
>
> There are no ARCH_SET_FS/GS prctls for legacy applications. Base MSRs are,
> however, not always equal to zero. Intel's manual says (3.4.4 Segment
> Loading Instructions in IA-32e Mode):
>
> "In order to set up compatibility mode for an application, segment-load
> instructions (MOV to Sreg, POP Sreg) work normally in 64-bit mode. An
> entry is read from the system descriptor table (GDT or LDT) and is loaded
> in the hidden portion of the segment register.
> ...
> The hidden descriptor register fields for FS.base and GS.base are
> physically mapped to MSRs in order to load all address bits supported by
> a 64-bit implementation.
> "
>
> The issue was found by strace test suite where 32-bit ioctl_kvm_run test
> started segfaulting.
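
For context, the trigger is simply a compat (32-bit) task issuing KVM_RUN.
The sketch below is not the strace ioctl_kvm_run test itself, just an
illustration of the standard /dev/kvm ioctl sequence such a legacy task
would perform (build with gcc -m32 so the caller is a 32-bit process);
error handling is reduced to bare exits:

/*
 * Minimal sketch of a compat-mode KVM_RUN caller (build with -m32).
 * Not the strace test; only illustrates the usual /dev/kvm sequence.
 */
#include <fcntl.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm, vm, vcpu;
	long mmap_size;
	struct kvm_run *run;

	kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
	if (kvm < 0 || ioctl(kvm, KVM_GET_API_VERSION, 0) != KVM_API_VERSION)
		exit(1);

	vm = ioctl(kvm, KVM_CREATE_VM, 0);	/* machine type 0 on x86 */
	vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);	/* vcpu id 0 */
	if (vm < 0 || vcpu < 0)
		exit(1);

	/* The shared kvm_run structure is mmap()ed from the vcpu fd. */
	mmap_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
	run = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE, MAP_SHARED,
		   vcpu, 0);
	if (run == MAP_FAILED)
		exit(1);

	/*
	 * With no memory or registers set up, KVM_RUN exits almost
	 * immediately; the point is only that the host-state save path
	 * runs on behalf of a 32-bit caller.
	 */
	ioctl(vcpu, KVM_RUN, 0);
	return run->exit_reason;
}

The same file built without -m32 would not hit the problem, since for a
64-bit mm current->thread.{fsbase,gsbase} are kept in sync.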

Test suite: MSR switch
PASS: VM entry MSR load
PASS: VM exit MSR store
PASS: VM exit MSR load
FAIL: VM entry MSR load: try to load FS_BASE
SUMMARY: 4 tests, 1 unexpected failures

kvm-unit-tests fails both with and without the patch, so it is probably a
separate issue. I didn't dig further; feel free to have a look if you are
interested. :)

Regards,
Wanpeng Li

>
> Reported-by: Dmitry V. Levin <ldv@...linux.org>
> Bisected-by: Masatake YAMATO <yamato@...hat.com>
> Fixes: 42b933b59721 ("x86/kvm/vmx: read MSR_{FS,KERNEL_GS}_BASE from current->thread")
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
>  arch/x86/kvm/vmx.c | 25 +++++++++++++++++--------
>  1 file changed, 17 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 559a12b6184d..65968649b365 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -2560,6 +2560,7 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
>         struct vcpu_vmx *vmx = to_vmx(vcpu);
>  #ifdef CONFIG_X86_64
>         int cpu = raw_smp_processor_id();
> +       unsigned long fsbase, kernel_gsbase;
>  #endif
>         int i;
>
> @@ -2575,12 +2576,20 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
>         vmx->host_state.gs_ldt_reload_needed = vmx->host_state.ldt_sel;
>
>  #ifdef CONFIG_X86_64
> -       save_fsgs_for_kvm();
> -       vmx->host_state.fs_sel = current->thread.fsindex;
> -       vmx->host_state.gs_sel = current->thread.gsindex;
> -#else
> -       savesegment(fs, vmx->host_state.fs_sel);
> -       savesegment(gs, vmx->host_state.gs_sel);
> +       if (likely(is_64bit_mm(current->mm))) {
> +               save_fsgs_for_kvm();
> +               vmx->host_state.fs_sel = current->thread.fsindex;
> +               vmx->host_state.gs_sel = current->thread.gsindex;
> +               fsbase = current->thread.fsbase;
> +               kernel_gsbase = current->thread.gsbase;
> +       } else {
> +#endif
> +               savesegment(fs, vmx->host_state.fs_sel);
> +               savesegment(gs, vmx->host_state.gs_sel);
> +#ifdef CONFIG_X86_64
> +               fsbase = read_msr(MSR_FS_BASE);
> +               kernel_gsbase = read_msr(MSR_KERNEL_GS_BASE);
> +       }
>  #endif
>         if (!(vmx->host_state.fs_sel & 7)) {
>                 vmcs_write16(HOST_FS_SELECTOR, vmx->host_state.fs_sel);
> @@ -2600,10 +2609,10 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
>         savesegment(ds, vmx->host_state.ds_sel);
>         savesegment(es, vmx->host_state.es_sel);
>
> -       vmcs_writel(HOST_FS_BASE, current->thread.fsbase);
> +       vmcs_writel(HOST_FS_BASE, fsbase);
>         vmcs_writel(HOST_GS_BASE, cpu_kernelmode_gs_base(cpu));
>
> -       vmx->msr_host_kernel_gs_base = current->thread.gsbase;
> +       vmx->msr_host_kernel_gs_base = kernel_gsbase;
>         if (is_long_mode(&vmx->vcpu))
>                 wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
>  #else
> --
> 2.14.4
>
