Message-ID: <CY4PR12MB17680EEFE942F21483A47E4095A30@CY4PR12MB1768.namprd12.prod.outlook.com>
Date: Wed, 28 Mar 2018 21:14:06 +0000
From: "Moger, Babu" <Babu.Moger@....com>
To: Radim Krčmář <rkrcmar@...hat.com>
CC: "joro@...tes.org" <joro@...tes.org>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"hpa@...or.com" <hpa@...or.com>, "x86@...nel.org" <x86@...nel.org>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH v2 5/5] KVM: SVM: Implement pause loop exit logic in SVM
> -----Original Message-----
> From: Radim Krčmář <rkrcmar@...hat.com>
> Sent: Wednesday, March 28, 2018 3:31 PM
> To: Moger, Babu <Babu.Moger@....com>
> Cc: joro@...tes.org; tglx@...utronix.de; mingo@...hat.com;
> hpa@...or.com; x86@...nel.org; pbonzini@...hat.com;
> kvm@...r.kernel.org; linux-kernel@...r.kernel.org
> Subject: Re: [PATCH v2 5/5] KVM: SVM: Implement pause loop exit logic in
> SVM
>
> 2018-03-16 16:37-0400, Babu Moger:
> > Bring the PLE (pause loop exit) logic to the AMD SVM driver.
> >
> > While testing, we found this helps in situations where numerous
> > pauses are generated. Without these patches we could see continuous
> > VMEXITs due to pause interceptions. Tested on an AMD EPYC server with
> > the boot parameter idle=poll on a VM with 32 vcpus to simulate
> > extensive pause behaviour. Here are the VMEXITs over a 10-second
> > interval:
> >
> > #VMEXITS    Before the change    After the change
> > Pauses      810199               504
> > Total       882184               325415
> >
> > Signed-off-by: Babu Moger <babu.moger@....com>
> > ---
> > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> > @@ -1046,6 +1094,42 @@ static int avic_ga_log_notifier(u32 ga_tag)
> > return 0;
> > }
> >
> > +static void grow_ple_window(struct kvm_vcpu *vcpu)
> > +{
> > +	struct vcpu_svm *svm = to_svm(vcpu);
> > +	struct vmcb_control_area *control = &svm->vmcb->control;
> > +	int old = control->pause_filter_count;
> > +
> > +	control->pause_filter_count = __grow_ple_window(old,
> > +							pause_filter_count,
> > +							pause_filter_count_grow,
> > +							pause_filter_count_max);
> > +
> > +	if (control->pause_filter_count != old)
> > +		mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
> > +
> > +	trace_kvm_ple_window_grow(vcpu->vcpu_id,
> > +				  control->pause_filter_count, old);
> > +}
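
(Side note for readers: __grow_ple_window() is the generic helper this
series factors out of the VMX PLE code. A minimal sketch of its intended
semantics, assuming the multiply-or-add behaviour of the VMX code it was
lifted from; the authoritative version is in the kernel tree:

static unsigned int __grow_ple_window(unsigned int val, unsigned int base,
				      unsigned int modifier, unsigned int max)
{
	u64 ret = val;

	if (modifier < 1)	/* growing disabled: reset to the base value */
		return base;

	if (modifier < base)	/* small modifier: grow multiplicatively */
		ret *= modifier;
	else			/* large modifier: grow additively */
		ret += modifier;

	return min(ret, (u64)max);	/* clamp to the configured maximum */
}
)
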
> > +
> > +static void shrink_ple_window(struct kvm_vcpu *vcpu)
> > +{
> > +	struct vcpu_svm *svm = to_svm(vcpu);
> > +	struct vmcb_control_area *control = &svm->vmcb->control;
> > +	int old = control->pause_filter_count;
> > +
> > +	control->pause_filter_count =
> > +			__shrink_ple_window(old,
> > +					    pause_filter_count,
> > +					    pause_filter_count_shrink,
> > +					    0);
>
> I've used pause_filter_count as the minimum here as well, and in all
> patches used 'unsigned int' instead of 'uint' in the code to match the
> rest of the kernel.
>
> The series is in kvm/queue, please look at the changes and tell me if
> you'd like something done differently, thanks.
Ok. Looks good to me. Thanks.
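
For the record, I understand the shrink path in kvm/queue now uses the
module parameter as the floor, i.e. roughly (a sketch based on your
description, not copied from the tree):

	/* Never shrink below the pause_filter_count module parameter. */
	control->pause_filter_count =
			__shrink_ple_window(old,
					    pause_filter_count,
					    pause_filter_count_shrink,
					    pause_filter_count);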