Message-ID: <CAOcCaLaU+iXV_V4+B-VmQ5Eqb_4DdT9=2w8XAdx8TSmTzH8Zaw@mail.gmail.com>
Date: Thu, 26 Oct 2017 11:00:50 -0400
From: Nick Sarnie <commendsarnex@...il.com>
To: David Hildenbrand <david@...hat.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH] KVM: SVM: obey guest PAT
On Thu, Oct 26, 2017 at 4:17 AM, David Hildenbrand <david@...hat.com> wrote:
> On 26.10.2017 09:13, Paolo Bonzini wrote:
>> For many years, some users of assigned devices have reported worse
>> performance on AMD processors with NPT than on AMD without NPT, on
>> Intel, or on bare metal.
>>
>> The reason turned out to be that SVM discards the guest PAT setting
>> and uses the default (PA0=PA4=WB, PA1=PA5=WT, PA2=PA6=UC-,
>> PA3=PA7=UC). The guest might be using a different setting, and in
>> particular might want write combining, but it isn't getting it
>> (instead it gets slow UC or UC- accesses).
>>
>> Thanks a lot to geoff@...tfission.com for noticing the relation
>> to the g_pat setting. The patch has also been tested by a number
>> of people on VFIO user forums.
>>
>> Fixes: 709ddebf81cb40e3c36c6109a7892e8b93a09464
>> Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=196409
>> Cc: stable@...r.kernel.org
>> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
>> ---
>> arch/x86/kvm/svm.c | 7 +++++++
>> 1 file changed, 7 insertions(+)
>>
>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>> index af256b786a70..af09baa3d736 100644
>> --- a/arch/x86/kvm/svm.c
>> +++ b/arch/x86/kvm/svm.c
>> @@ -3626,6 +3626,13 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>>  	u32 ecx = msr->index;
>>  	u64 data = msr->data;
>>  	switch (ecx) {
>> +	case MSR_IA32_CR_PAT:
>> +		if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
>> +			return 1;
>> +		vcpu->arch.pat = data;
>> +		svm->vmcb->save.g_pat = data;
>> +		mark_dirty(svm->vmcb, VMCB_NPT);
>> +		break;
>>  	case MSR_IA32_TSC:
>>  		kvm_write_tsc(vcpu, msr);
>>  		break;
>>
>
> Although I'm no SVM expert, looking at the way this is handled on VMX,
> it looks good to me.
>
> Reviewed-by: David Hildenbrand <david@...hat.com>
>
> --
>
> Thanks,
>
> David
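For anyone following along, the practical difference described in the quoted
commit message comes from the PAT entry encodings: each of the eight PAT
entries is one byte, with the memory type in its low three bits, and the
architectural default value 0x0007040600070406 contains no WC (0x01) entry at
all, so a guest that wants write combining for device memory ends up with
UC/UC- instead. Below is a minimal standalone sketch (not part of the patch;
the helper name is mine) that decodes such a value:

#include <stdio.h>
#include <stdint.h>

/* Memory-type encodings used by each 8-bit PAT entry (low 3 bits). */
static const char *pat_type(uint8_t entry)
{
	switch (entry & 0x7) {
	case 0: return "UC";
	case 1: return "WC";
	case 4: return "WT";
	case 5: return "WP";
	case 6: return "WB";
	case 7: return "UC-";
	default: return "reserved";
	}
}

int main(void)
{
	/* Architectural default: PA0=PA4=WB, PA1=PA5=WT, PA2=PA6=UC-, PA3=PA7=UC. */
	uint64_t pat = 0x0007040600070406ULL;

	for (int i = 0; i < 8; i++)
		printf("PA%d = %s\n", i, pat_type((uint8_t)(pat >> (8 * i))));
	return 0;
}

Running it prints WB/WT/UC-/UC for PA0..PA7 and, as expected, no WC entry.
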
Tested-by: Nick Sarnie <commendsarnex@...il.com>
You're a legend.
Thanks,
Sarnex
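
P.S. On the kvm_mtrr_valid() check in the hunk: for MSR_IA32_CR_PAT it
effectively rejects any PAT value whose entries use a reserved memory type.
Here is a rough userspace model of that rule (my paraphrase, not the kernel
code; pat_value_valid() is a made-up name), assuming the defined types are
UC=0, WC=1, WT=4, WP=5, WB=6 and UC-=7:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical model of the PAT validity rule: every 8-bit entry must use
 * one of the defined memory types (UC=0, WC=1, WT=4, WP=5, WB=6, UC-=7).
 */
static bool pat_value_valid(uint64_t pat)
{
	for (int i = 0; i < 8; i++) {
		uint8_t type = (pat >> (8 * i)) & 0xff;

		if (type != 0 && type != 1 && type != 4 &&
		    type != 5 && type != 6 && type != 7)
			return false;
	}
	return true;
}

int main(void)
{
	/* Default value, a variant with WC in PA1, and a reserved type. */
	printf("default:   %s\n", pat_value_valid(0x0007040600070406ULL) ? "valid" : "invalid");
	printf("WC in PA1: %s\n", pat_value_valid(0x0007040600070106ULL) ? "valid" : "invalid");
	printf("reserved:  %s\n", pat_value_valid(0x0000000000000002ULL) ? "valid" : "invalid");
	return 0;
}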