Date:   Fri, 13 Jul 2018 09:46:50 -0700
From:   Sean Christopherson <sean.j.christopherson@...el.com>
To:     Vitaly Kuznetsov <vkuznets@...hat.com>
Cc:     kvm@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        x86@...nel.org, Andy Lutomirski <luto@...nel.org>,
        "Dmitry V . Levin" <ldv@...linux.org>,
        Masatake YAMATO <yamato@...hat.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86/kvm/vmx: don't read current->thread.{fs,gs}base of
 legacy tasks

On Wed, Jul 11, 2018 at 07:37:18PM +0200, Vitaly Kuznetsov wrote:
> When we switched from doing rdmsr() to reading FS/GS base values from
> current->thread, we completely forgot about legacy 32-bit userspaces,
> which we still support in KVM (why?). task->thread.{fsbase,gsbase} are
> only synced for 64-bit processes; calling save_fsgs_for_kvm() and using
> its result from current is illegal for legacy processes.
> 
> There are no ARCH_SET_FS/GS prctls for legacy applications. The base MSRs
> are, however, not always equal to zero. Intel's manual says (3.4.4 Segment
> Loading Instructions in IA-32e Mode):
> 
> "In order to set up compatibility mode for an application, segment-load
> instructions (MOV to Sreg, POP Sreg) work normally in 64-bit mode. An
> entry is read from the system descriptor table (GDT or LDT) and is loaded
> in the hidden portion of the segment register.
> ...
> The hidden descriptor register fields for FS.base and GS.base are
> physically mapped to MSRs in order to load all address bits supported by
> a 64-bit implementation.
> "
> 
> The issue was found by strace test suite where 32-bit ioctl_kvm_run test
> started segfaulting.
> 
> Reported-by: Dmitry V. Levin <ldv@...linux.org>
> Bisected-by: Masatake YAMATO <yamato@...hat.com>
> Fixes: 42b933b59721 ("x86/kvm/vmx: read MSR_{FS,KERNEL_GS}_BASE from current->thread")
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
>  arch/x86/kvm/vmx.c | 25 +++++++++++++++++--------
>  1 file changed, 17 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 559a12b6184d..65968649b365 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -2560,6 +2560,7 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>  #ifdef CONFIG_X86_64
>  	int cpu = raw_smp_processor_id();
> +	unsigned long fsbase, kernel_gsbase;

Because bikeshedding is fun, what do you think about using fs_base and
kernel_gs_base for these names?  I have a series that touches this
code and also adds local variables for {FS,GS}.base and {FS,GS}.sel.
I used {fs,gs}_base and {fs,gs}_sel to be consistent with the
vmx->host_state nomenclature (the local variables are used to update
the associated vmx->host_state variables), but I'll change my patches
if you have a strong preference for omitting the underscore.

>  #endif
>  	int i;
>  
> @@ -2575,12 +2576,20 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
>  	vmx->host_state.gs_ldt_reload_needed = vmx->host_state.ldt_sel;
>  
>  #ifdef CONFIG_X86_64
> -	save_fsgs_for_kvm();
> -	vmx->host_state.fs_sel = current->thread.fsindex;
> -	vmx->host_state.gs_sel = current->thread.gsindex;
> -#else
> -	savesegment(fs, vmx->host_state.fs_sel);
> -	savesegment(gs, vmx->host_state.gs_sel);
> +	if (likely(is_64bit_mm(current->mm))) {
> +		save_fsgs_for_kvm();
> +		vmx->host_state.fs_sel = current->thread.fsindex;
> +		vmx->host_state.gs_sel = current->thread.gsindex;
> +		fsbase = current->thread.fsbase;
> +		kernel_gsbase = current->thread.gsbase;
> +	} else {
> +#endif
> +		savesegment(fs, vmx->host_state.fs_sel);
> +		savesegment(gs, vmx->host_state.gs_sel);
> +#ifdef CONFIG_X86_64
> +		fsbase = read_msr(MSR_FS_BASE);
> +		kernel_gsbase = read_msr(MSR_KERNEL_GS_BASE);
> +	}
>  #endif
>  	if (!(vmx->host_state.fs_sel & 7)) {
>  		vmcs_write16(HOST_FS_SELECTOR, vmx->host_state.fs_sel);
> @@ -2600,10 +2609,10 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
>  	savesegment(ds, vmx->host_state.ds_sel);
>  	savesegment(es, vmx->host_state.es_sel);
>  
> -	vmcs_writel(HOST_FS_BASE, current->thread.fsbase);
> +	vmcs_writel(HOST_FS_BASE, fsbase);
>  	vmcs_writel(HOST_GS_BASE, cpu_kernelmode_gs_base(cpu));
>  
> -	vmx->msr_host_kernel_gs_base = current->thread.gsbase;
> +	vmx->msr_host_kernel_gs_base = kernel_gsbase;
>  	if (is_long_mode(&vmx->vcpu))
>  		wrmsrl(MSR_KERNEL_GS_BASE, vmx->msr_guest_kernel_gs_base);
>  #else
> -- 
> 2.14.4
> 
