Message-ID: <4858fb924edbda58b6c46bdd4ed803bda0ceebbb.camel@redhat.com>
Date: Mon, 24 Aug 2020 14:43:22 +0300
From: Maxim Levitsky <mlevitsk@...hat.com>
To: Jim Mattson <jmattson@...gle.com>
Cc: kvm list <kvm@...r.kernel.org>,
"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)"
<linux-kernel@...r.kernel.org>, "H. Peter Anvin" <hpa@...or.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Joerg Roedel <joro@...tes.org>,
Wanpeng Li <wanpengli@...cent.com>,
Borislav Petkov <bp@...en8.de>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [PATCH v2 3/7] KVM: SVM: refactor msr permission bitmap
allocation
On Thu, 2020-08-20 at 14:26 -0700, Jim Mattson wrote:
> On Thu, Aug 20, 2020 at 6:34 AM Maxim Levitsky <mlevitsk@...hat.com> wrote:
> > Replace svm_vcpu_init_msrpm with svm_vcpu_alloc_msrpm, which also allocates
> > the msr bitmap, and add svm_vcpu_free_msrpm to free it.
> >
> > This will be used later to move the nested msr permission bitmap allocation
> > to nested.c
> >
> > No functional change intended.
> >
> > Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
> > ---
> > arch/x86/kvm/svm/svm.c | 45 +++++++++++++++++++++---------------------
> > 1 file changed, 23 insertions(+), 22 deletions(-)
> >
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index d33013b9b4d7..7bb094bf6494 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -609,18 +609,29 @@ static void set_msr_interception(u32 *msrpm, unsigned msr,
> > msrpm[offset] = tmp;
> > }
> >
> > -static void svm_vcpu_init_msrpm(u32 *msrpm)
> > +static u32 *svm_vcpu_alloc_msrpm(void)
>
> I prefer the original name, since this function does more than allocation.
But it also allocates the bitmap. I don't mind using the old name, though.
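Just to illustrate one possible compromise (a rough sketch only, not what this
patch does, and the split below is hypothetical): keep the allocation and the
initialization as two separate helpers, so each name matches what the function
actually does:

static u32 *svm_vcpu_alloc_msrpm(void)
{
	struct page *pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER);

	/* Return the kernel virtual address of the bitmap, or NULL on failure. */
	return pages ? page_address(pages) : NULL;
}

static void svm_vcpu_init_msrpm(u32 *msrpm)
{
	int i;

	/* Intercept all MSRs by default... */
	memset(msrpm, 0xff, PAGE_SIZE * (1 << MSRPM_ALLOC_ORDER));

	/* ...then open up direct access for the MSRs we always pass through. */
	for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
		if (!direct_access_msrs[i].always)
			continue;
		set_msr_interception(msrpm, direct_access_msrs[i].index, 1, 1);
	}
}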
>
> > {
> > int i;
> > + u32 *msrpm;
> > + struct page *pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER);
> > +
> > + if (!pages)
> > + return NULL;
> >
> > + msrpm = page_address(pages);
> > memset(msrpm, 0xff, PAGE_SIZE * (1 << MSRPM_ALLOC_ORDER));
> >
> > for (i = 0; direct_access_msrs[i].index != MSR_INVALID; i++) {
> > if (!direct_access_msrs[i].always)
> > continue;
> > -
> > set_msr_interception(msrpm, direct_access_msrs[i].index, 1, 1);
> > }
> > + return msrpm;
> > +}
> > +
> > +static void svm_vcpu_free_msrpm(u32 *msrpm)
> > +{
> > + __free_pages(virt_to_page(msrpm), MSRPM_ALLOC_ORDER);
> > }
> >
> > static void add_msr_offset(u32 offset)
> > @@ -1172,9 +1183,7 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
> > {
> > struct vcpu_svm *svm;
> > struct page *vmcb_page;
> > - struct page *msrpm_pages;
> > struct page *hsave_page;
> > - struct page *nested_msrpm_pages;
> > int err;
> >
> > BUILD_BUG_ON(offsetof(struct vcpu_svm, vcpu) != 0);
> > @@ -1185,21 +1194,13 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
> > if (!vmcb_page)
> > goto out;
> >
> > - msrpm_pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER);
> > - if (!msrpm_pages)
> > - goto free_page1;
> > -
> > - nested_msrpm_pages = alloc_pages(GFP_KERNEL_ACCOUNT, MSRPM_ALLOC_ORDER);
> > - if (!nested_msrpm_pages)
> > - goto free_page2;
> > -
>
> Reordering the allocations does seem like a functional change to me,
> albeit one that should (hopefully) be benign. For example, if the
> MSRPM_ALLOC_ORDER allocations fail, in the new version of the code,
> the hsave_page will be cleared, but in the old version of the code, no
> page would be cleared.
Noted.
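For completeness, a sketch of how the old allocation order could be kept with
the new helper, so that a failed msrpm allocation never touches the hsave page
(purely illustrative; the label names here are hypothetical and not from this
patch):

	svm->msrpm = svm_vcpu_alloc_msrpm();
	if (!svm->msrpm)
		goto free_vmcb;

	svm->nested.msrpm = svm_vcpu_alloc_msrpm();
	if (!svm->nested.msrpm)
		goto free_msrpm;

	hsave_page = alloc_page(GFP_KERNEL_ACCOUNT);
	if (!hsave_page)
		goto free_nested_msrpm;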
>
> > hsave_page = alloc_page(GFP_KERNEL_ACCOUNT);
>
> Speaking of clearing pages, why not add __GFP_ZERO to the flags above
> and skip the clear_page() call below?
I hadn't thought about it. I don't see a reason not to use __GFP_ZERO;
this is just how the old code was.
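Something like this, I assume (only a sketch of the suggestion, not tested):

	/* Allocate the hsave page pre-zeroed... */
	hsave_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
	if (!hsave_page)
		goto free_page1;

	/* ...so the later clear_page(svm->nested.hsave) call becomes unnecessary. */
	svm->nested.hsave = page_address(hsave_page);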
>
> > if (!hsave_page)
> > - goto free_page3;
> > + goto free_page1;
> >
> > err = avic_init_vcpu(svm);
> > if (err)
> > - goto free_page4;
> > + goto free_page2;
> >
> > /* We initialize this flag to true to make sure that the is_running
> > * bit would be set the first time the vcpu is loaded.
> > @@ -1210,11 +1211,13 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
> > svm->nested.hsave = page_address(hsave_page);
> > clear_page(svm->nested.hsave);
> >
> > - svm->msrpm = page_address(msrpm_pages);
> > - svm_vcpu_init_msrpm(svm->msrpm);
> > + svm->msrpm = svm_vcpu_alloc_msrpm();
> > + if (!svm->msrpm)
> > + goto free_page2;
> >
> > - svm->nested.msrpm = page_address(nested_msrpm_pages);
> > - svm_vcpu_init_msrpm(svm->nested.msrpm);
> > + svm->nested.msrpm = svm_vcpu_alloc_msrpm();
> > + if (!svm->nested.msrpm)
> > + goto free_page3;
> >
> > svm->vmcb = page_address(vmcb_page);
> > clear_page(svm->vmcb);
> > @@ -1227,12 +1230,10 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
> >
> > return 0;
> >
> > -free_page4:
> > - __free_page(hsave_page);
> > free_page3:
> > - __free_pages(nested_msrpm_pages, MSRPM_ALLOC_ORDER);
> > + svm_vcpu_free_msrpm(svm->msrpm);
> > free_page2:
> > - __free_pages(msrpm_pages, MSRPM_ALLOC_ORDER);
> > + __free_page(hsave_page);
> > free_page1:
> > __free_page(vmcb_page);
> > out:
>
> While you're here, could you improve these labels? Coding-style.rst says:
>
> Choose label names which say what the goto does or why the goto exists. An
> example of a good name could be ``out_free_buffer:`` if the goto frees
> ``buffer``.
> Avoid using GW-BASIC names like ``err1:`` and ``err2:``, as you would have to
> renumber them if you ever add or remove exit paths, and they make correctness
> difficult to verify anyway.
I noticed that and I agree. I'll do this in a follow-up patch.
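Something along these lines, I think (the label names below are only
illustrative; what I end up using in the follow-up patch might differ):

	return 0;

out_free_msrpm:
	svm_vcpu_free_msrpm(svm->msrpm);
out_free_hsave:
	__free_page(hsave_page);
out_free_vmcb:
	__free_page(vmcb_page);
out:
	return err;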
Thanks for the review,
Best regards,
Maxim Levitsky
>