Message-ID: <CAHVum0f9JBE9vL4FW1_iH7mq6xjGQWG3J6TZ2Z-rRMQM8GocVg@mail.gmail.com>
Date: Thu, 9 Mar 2023 10:19:35 -0800
From: Vipin Sharma <vipinsh@...gle.com>
To: Zhi Wang <zhi.wang.linux@...il.com>
Cc: seanjc@...gle.com, pbonzini@...hat.com, bgardon@...gle.com,
dmatlack@...gle.com, jmattson@...gle.com, mizhang@...gle.com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [Patch v4 03/18] KVM: x86/mmu: Track count of pages in KVM MMU page caches globally
On Thu, Mar 9, 2023 at 7:37 AM Zhi Wang <zhi.wang.linux@...il.com> wrote:
>
> On Mon, 6 Mar 2023 14:41:12 -0800
> Vipin Sharma <vipinsh@...gle.com> wrote:
> > /*
> > @@ -6994,3 +7048,11 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
> > if (kvm->arch.nx_huge_page_recovery_thread)
> > kthread_stop(kvm->arch.nx_huge_page_recovery_thread);
> > }
> > +
> > +void *mmu_sp_memory_cache_alloc(struct kvm_mmu_memory_cache *shadow_page_cache,
> > + bool count_allocation)
>
> Is it necessary to have the control of count_allocation in every call of
> mmu_sp_memory_cache_alloc() instead of taking
> shadow_page_cache->count_shadow_page_allocation directly?
>
Yes, as you noted, this is cleaned up in patch 7.