Message-ID: <72749501-974d-e11c-1fa4-fda00e594264@redhat.com>
Date:   Thu, 26 Oct 2017 10:17:23 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org
Cc:     stable@...r.kernel.org
Subject: Re: [PATCH] KVM: SVM: obey guest PAT

On 26.10.2017 09:13, Paolo Bonzini wrote:
> For many years, some users of assigned devices have reported worse
> performance on AMD processors with NPT than on AMD without NPT,
> on Intel, or on bare metal.
> 
> The reason turned out to be that SVM discards the guest PAT
> setting and uses the default (PA0=PA4=WB, PA1=PA5=WT, PA2=PA6=UC-,
> PA3=UC).  The guest might be using a different setting, and in
> particular might want write combining, but isn't getting it
> (instead it gets slow UC or UC- accesses).
> 
> Thanks a lot to geoff@...tfission.com for noticing the relation
> to the g_pat setting.  The patch has also been tested by a number
> of people on VFIO user forums.
> 
> Fixes: 709ddebf81cb40e3c36c6109a7892e8b93a09464
> Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=196409
> Cc: stable@...r.kernel.org
> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
> ---
>  arch/x86/kvm/svm.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index af256b786a70..af09baa3d736 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -3626,6 +3626,13 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  	u32 ecx = msr->index;
>  	u64 data = msr->data;
>  	switch (ecx) {
> +	case MSR_IA32_CR_PAT:
> +		if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
> +			return 1;
> +		vcpu->arch.pat = data;
> +		svm->vmcb->save.g_pat = data;
> +		mark_dirty(svm->vmcb, VMCB_NPT);
> +		break;
>  	case MSR_IA32_TSC:
>  		kvm_write_tsc(vcpu, msr);
>  		break;
> 
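
As an aside for anyone who does not have the PAT layout memorized: below is a
small illustrative sketch (plain userspace C, not KVM code; the enum values
are the architectural memory-type encodings) of how the default programming
quoted above packs into the 64-bit IA32_PAT MSR, and what a guest asking for
write combining might program instead.

/*
 * Illustrative userspace sketch only, not KVM code: the 64-bit IA32_PAT
 * MSR holds one memory type per byte (PA0 in the lowest byte, PA7 in
 * the highest).
 */
#include <stdint.h>
#include <stdio.h>

/* Architectural memory-type encodings used in each PAT entry. */
enum pat_type { UC = 0x00, WC = 0x01, WT = 0x04, WP = 0x05, WB = 0x06, UC_MINUS = 0x07 };

static uint64_t pat_pack(const uint8_t entry[8])
{
	uint64_t val = 0;

	for (int i = 0; i < 8; i++)
		val |= (uint64_t)entry[i] << (8 * i);
	return val;
}

int main(void)
{
	/*
	 * The default mentioned in the commit message: PA0=PA4=WB,
	 * PA1=PA5=WT, PA2=PA6=UC-, PA3=PA7=UC -> 0x0007040600070406.
	 */
	uint8_t def[8]   = { WB, WT, UC_MINUS, UC, WB, WT, UC_MINUS, UC };
	/* A guest wanting write combining might, for example, repurpose PA1 as WC. */
	uint8_t guest[8] = { WB, WC, UC_MINUS, UC, WB, WT, UC_MINUS, UC };

	printf("default PAT = %#018llx\n", (unsigned long long)pat_pack(def));
	printf("guest PAT   = %#018llx\n", (unsigned long long)pat_pack(guest));

	/*
	 * Without the fix, the guest's value never reaches g_pat, so the
	 * access keeps one of the default types instead of WC.
	 */
	return 0;
}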

Although I am no SVM expert, judging by the way the same MSR is
handled on VMX, this looks good to me.
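
For reference, the VMX side handles a write to this MSR roughly as follows in
vmx_set_msr() (paraphrased from memory, so treat it as a sketch rather than a
verbatim copy of vmx.c):

	case MSR_IA32_CR_PAT:
		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
			/* Same validity check as in the SVM patch above. */
			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
				return 1;
			vmcs_write64(GUEST_IA32_PAT, data);
			vcpu->arch.pat = data;
			break;
		}
		ret = kvm_set_msr_common(vcpu, msr_info);
		break;

So the patch brings svm_set_msr() in line with that, with g_pat plus the
VMCB_NPT dirty bit playing the role of the GUEST_IA32_PAT VMCS field.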

Reviewed-by: David Hildenbrand <david@...hat.com>

-- 

Thanks,

David
