Message-ID: <CANgfPd9tBncLoVM4BnD5yq2O+=pXBN5_axBOh=bx=zjG7u8T7Q@mail.gmail.com>
Date: Tue, 6 Dec 2022 10:17:47 -0800
From: Ben Gardon <bgardon@...gle.com>
To: Vipin Sharma <vipinsh@...gle.com>
Cc: dmatlack@...gle.com, seanjc@...gle.com, pbonzini@...hat.com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [Patch v2 2/2] KVM: x86/mmu: Allocate page table pages on NUMA
node of underlying pages
On Mon, Dec 5, 2022 at 3:40 PM Vipin Sharma <vipinsh@...gle.com> wrote:
>
> On Mon, Dec 5, 2022 at 10:17 AM Ben Gardon <bgardon@...gle.com> wrote:
> >
> > On Thu, Dec 1, 2022 at 11:57 AM Vipin Sharma <vipinsh@...gle.com> wrote:
> > > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > > index 1782c4555d94..4d59c9d48277 100644
> > > --- a/virt/kvm/kvm_main.c
> > > +++ b/virt/kvm/kvm_main.c
> > > @@ -384,6 +384,11 @@ static void kvm_flush_shadow_all(struct kvm *kvm)
> > > kvm_arch_guest_memory_reclaimed(kvm);
> > > }
> > >
> > > +void * __weak kvm_arch_mmu_get_free_page(int nid, gfp_t gfp_flags)
> > > +{
> > > + return (void *)__get_free_page(gfp_flags);
> > > +}
> > > +
> >
> > Rather than making this __weak, you could use #ifdef CONFIG_NUMA to
> > just put all the code in the arch-neutral function.
> >
>
> I am not sure how that would work. Here, I am trying to keep this
> feature x86-only. This function will be used by all architectures
> except x86, where we have a different implementation in
> arch/x86/kvm/mmu/mmu.c. So even if CONFIG_NUMA is defined, we want to
> keep the same definition on the other architectures.
>
>
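Right - with __weak, the strong definition you add in
arch/x86/kvm/mmu/mmu.c overrides the generic stub at link time, so only
x86 picks up the NUMA-aware allocation and every other architecture
keeps the plain __get_free_page() path. But you can drop the weak
symbol entirely and keep a single arch-neutral function, with the NUMA
path only compiled in when it can do something useful.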
Something like:
+void *kvm_arch_mmu_get_free_page(int nid, gfp_t gfp_flags)
+{
+#ifdef CONFIG_NUMA
+	struct page *spt_page;
+
+	if (nid != NUMA_NO_NODE) {
+		spt_page = alloc_pages_node(nid, gfp_flags, 0);
+		if (spt_page)
+			return page_address(spt_page);
+	}
+#endif /* CONFIG_NUMA */
+	return (void *)__get_free_page(gfp_flags);
+}
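
The NUMA_NO_NODE check keeps existing callers on the plain
__get_free_page() path, and an allocation failure on the preferred
node falls back to it as well, so this shouldn't change behavior for
anything that doesn't opt in. Untested, of course.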
>
>
>
> > > #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
> > > static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
> > > gfp_t gfp_flags)
> > > @@ -393,7 +398,7 @@ static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
> > > if (mc->kmem_cache)
> > > return kmem_cache_alloc(mc->kmem_cache, gfp_flags);
> > > else
> > > - return (void *)__get_free_page(gfp_flags);
> > > + return kvm_arch_mmu_get_free_page(mc->node, gfp_flags);
> > > }
> > >
> > > int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min)
> > > --
> > > 2.39.0.rc0.267.gcb52ba06e7-goog
> > >
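
One more thought: now that mmu_memory_cache_alloc_obj() consumes
mc->node, every cache that can reach this path needs its node field
initialized. These structs are typically zero-initialized, so a missed
field would read as node 0 and silently pin allocations there. For
caches without a node preference, something like this (sketch,
untested):

	/*
	 * No NUMA preference for this cache; fall through to
	 * __get_free_page() in mmu_memory_cache_alloc_obj().
	 */
	mc->node = NUMA_NO_NODE;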