lists.openwall.net - Open Source and information security mailing list archives
Date:   Fri, 25 Nov 2016 18:21:11 +0100
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Radim Krčmář <rkrcmar@...hat.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH] KVM: x86: restrict maximal physical address



On 25/11/2016 17:57, Radim Krčmář wrote:
> 2016-11-25 17:10+0100, Paolo Bonzini:
>> On 25/11/2016 15:51, Radim Krčmář wrote:
>>> The guest could have configured a maximal physical address that exceeds
>>> the host.  Prevent that situation as it could easily lead to a bug.
>>>
>>> Signed-off-by: Radim Krčmář <rkrcmar@...hat.com>
>>> ---
>>>  arch/x86/kvm/cpuid.c | 8 +++++++-
>>>  1 file changed, 7 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
>>> index 25f0f15fab1a..aed910e9fbed 100644
>>> --- a/arch/x86/kvm/cpuid.c
>>> +++ b/arch/x86/kvm/cpuid.c
>>> @@ -136,7 +136,13 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
>>>  		((best->eax & 0xff00) >> 8) != 0)
>>>  		return -EINVAL;
>>>  
>>> -	/* Update physical-address width */
>>> +
>>> +	/*
>>> +	 * Update physical-address width.
>>> +	 * Make sure that it does not exceed hardware capabilities.
>>> +	 */
>>> +	if (cpuid_query_maxphyaddr(vcpu) > boot_cpu_data.x86_phys_bits)
>>> +		return -EINVAL;
>>>  	vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);
>>>  
>>>  	kvm_pmu_refresh(vcpu);
>>>
>>
>> Not possible unfortunately, this would break most versions of QEMU that
>> hard-code 40 for MAXPHYADDR.
>>
>> Also, "wider" physical addresses in the guest are actually possible with
>> shadow paging.
> 
> We don't disable EPT in that case, though.  I guess that situations
> where QEMU configures mem slot into high physical addresses are not hit
> in production ...
> 
> Is any solution better than ignoring this situation?

We've hit it at least once a year (I remember myself, Nadav, Eduardo, 
Dave Gilbert and you) and always decided that ultimately there is no 
satisfactory solution.

Both GAW < HAW and GAW > HAW (AW = address width, G = guest, H = host) 
have problems.  If GAW is smaller, bits that the guest believes are 
reserved aren't actually reserved in hardware.  This is arguably worse 
than configuring memory into addresses above GAW.  However, most Intel 
chips in the wild have 36 physical bits on client parts and 46 on 
server parts, so in practice a mismatch is unlikely.

The sad thing, and one that is new since the last time we discussed the 
issue, is that apparently Intel did have plans to support GAW < HAW:

    commit 0307b7b8c275e65552f6022a18ad91902ae25d42
    Author: Zhang Xiantao <xiantao.zhang@...el.com>
    Date:   Wed Dec 5 01:55:14 2012 +0800

    kvm: remove unnecessary bit checking for ept violation
    
    Bit 6 in EPT vmexit's exit qualification is not defined in SDM, so 
    remove it.
    
    Signed-off-by: Zhang Xiantao <xiantao.zhang@...el.com>
    Signed-off-by: Gleb Natapov <gleb@...hat.com>

    diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
    index 2fd2046dc94c..d2248b3dbb61 100644
    --- a/arch/x86/kvm/vmx.c
    +++ b/arch/x86/kvm/vmx.c
    @@ -4863,11 +4863,6 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu)
     
            exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
     
    -       if (exit_qualification & (1 << 6)) {
    -               printk(KERN_ERR "EPT: GPA exceeds GAW!\n");
    -               return -EINVAL;
    -       }
    -
            gla_validity = (exit_qualification >> 7) & 0x3;
            if (gla_validity != 0x3 && gla_validity != 0x1 && gla_validity != 0) {
                    printk(KERN_ERR "EPT: Handling EPT violation failed!\n");

Oh well. :(

Paolo
