Message-ID: <2a221116a03f57dca8274b6bc2da7541b21d86bb.camel@intel.com>
Date: Wed, 29 May 2024 10:50:57 +0000
From: "Huang, Kai" <kai.huang@...el.com>
To: "seanjc@...gle.com" <seanjc@...gle.com>
CC: "chenhuacai@...nel.org" <chenhuacai@...nel.org>, "kvm@...r.kernel.org"
<kvm@...r.kernel.org>, "maz@...nel.org" <maz@...nel.org>,
"frankja@...ux.ibm.com" <frankja@...ux.ibm.com>, "borntraeger@...ux.ibm.com"
<borntraeger@...ux.ibm.com>, "mpe@...erman.id.au" <mpe@...erman.id.au>,
"aou@...s.berkeley.edu" <aou@...s.berkeley.edu>, "palmer@...belt.com"
<palmer@...belt.com>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "maobibo@...ngson.cn" <maobibo@...ngson.cn>,
"pbonzini@...hat.com" <pbonzini@...hat.com>, "loongarch@...ts.linux.dev"
<loongarch@...ts.linux.dev>, "paul.walmsley@...ive.com"
<paul.walmsley@...ive.com>, "kvmarm@...ts.linux.dev"
<kvmarm@...ts.linux.dev>, "imbrenda@...ux.ibm.com" <imbrenda@...ux.ibm.com>,
"kvm-riscv@...ts.infradead.org" <kvm-riscv@...ts.infradead.org>,
"zhaotianrui@...ngson.cn" <zhaotianrui@...ngson.cn>,
"linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>, "linux-mips@...r.kernel.org"
<linux-mips@...r.kernel.org>, "anup@...infault.org" <anup@...infault.org>,
"oliver.upton@...ux.dev" <oliver.upton@...ux.dev>,
"linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>
Subject: Re: [PATCH v2 3/6] KVM: x86: Fold kvm_arch_sched_in() into
kvm_arch_vcpu_load()
On Tue, 2024-05-28 at 12:16 -0700, Sean Christopherson wrote:
> On Fri, May 24, 2024, Kai Huang wrote:
> > > @@ -1548,6 +1548,9 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> > > struct vcpu_svm *svm = to_svm(vcpu);
> > > struct svm_cpu_data *sd = per_cpu_ptr(&svm_data, cpu);
> > > + if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
> > > + shrink_ple_window(vcpu);
> > > +
> >
> > [...]
> >
> > > @@ -1517,6 +1517,9 @@ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
> > > {
> > > struct vcpu_vmx *vmx = to_vmx(vcpu);
> > > + if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
> > > + shrink_ple_window(vcpu);
> > > +
> >
> > Nit: Perhaps we need a kvm_x86_ops::shrink_ple_window()? :-)
>
> Heh, that duplicate code annoys me too. The problem is the "old" window value
> comes from the VMCS/VMCB, so either we'd end up with multiple kvm_x86_ops, or
> we'd only be able to consolidate the scheduled_out + kvm_pause_in_guest() code,
> which isn't all that interesting.
Agreed, consolidating only the scheduled_out + kvm_pause_in_guest() check
isn't all that interesting.
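
(For reference, that consolidation would amount to no more than a tiny common
helper, e.g. something like the below, with VMX/SVM still keeping their own
shrink_ple_window().  Name made up, untested:

	static inline bool kvm_vcpu_should_shrink_ple_window(struct kvm_vcpu *vcpu)
	{
		return vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm);
	}
)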
>
> Aha! Actually, VMX already open codes the functionality provided by VCPU_EXREG_*,
> e.g. has vmx->ple_window_dirty. If we add VCPU_EXREG_PLE_WINDOW, then the info
> can be made available to common x86 code without having to add new hooks. And
> that would also allow moving the guts of handle_pause()/pause_interception() to
> common code, i.e. will also allow deduplicating the "grow" side of things.
Sounds feasible.  I am not sure we need VCPU_EXREG_PLE_WINDOW, though.  We
could just move the "ple_window" + "ple_window_dirty" concept into the common
vcpu:

	vcpu->ple_window;
	vcpu->ple_window_dirty;

I.e., essentially turn the current VMX version of {grow|shrink}_ple_window()
into common code.
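
For the "shrink" side, roughly something like the below (just an untested
sketch, assuming the ple_window* module params also move to common code;
names are illustrative, not the actual implementation):

	/* Common x86 code, e.g. arch/x86/kvm/x86.c */
	static void shrink_ple_window(struct kvm_vcpu *vcpu)
	{
		unsigned int old = vcpu->ple_window;
		unsigned int new;

		/* Shrink towards the base value, but never go below it. */
		if (ple_window_shrink)
			new = max(old / ple_window_shrink, ple_window);
		else
			new = ple_window;

		if (new != old) {
			vcpu->ple_window = new;
			vcpu->ple_window_dirty = true;
		}
	}

The "grow" side would be symmetric, growing by ple_window_grow and clamping
to ple_window_max.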
I am not familiar with SVM, but it seems the relevant parts are:

	control->pause_filter_count;
	vmcb_mark_dirty(svm->vmcb, VMCB_INTERCEPTS);

And they seem to be directly tied to programming the hardware, i.e., they are
automatically loaded into hardware during VMRUN, so they would need to be
updated in the SVM-specific code, in the relevant code path, when
@ple_window_dirty is true, e.g. along the lines of the sketch below.
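
Something like this on SVM's VMRUN path (again just an untested sketch,
assuming the common fields above):

	/* Propagate the common PLE window to the VMCB before VMRUN. */
	if (vcpu->ple_window_dirty) {
		vcpu->ple_window_dirty = false;
		svm->vmcb->control.pause_filter_count = vcpu->ple_window;
		vmcb_mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
	}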
Anyway, even if this is feasible and worth doing, we should do it in a
separate patchset.