Message-ID: <CAHVum0fhU2PAQEerG5t92R1ropoh1-ML4Yv1CzwGThRtbbvWHg@mail.gmail.com>
Date:   Thu, 9 Mar 2023 11:59:00 -0800
From:   Vipin Sharma <vipinsh@...gle.com>
To:     Zhi Wang <zhi.wang.linux@...il.com>
Cc:     seanjc@...gle.com, pbonzini@...hat.com, bgardon@...gle.com,
        dmatlack@...gle.com, jmattson@...gle.com, mizhang@...gle.com,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [Patch v4 05/18] KVM: x86/mmu: Add split_shadow_page_cache pages
 to global count of MMU cache pages

On Thu, Mar 9, 2023 at 7:58 AM Zhi Wang <zhi.wang.linux@...il.com> wrote:
>
> On Mon,  6 Mar 2023 14:41:14 -0800
> Vipin Sharma <vipinsh@...gle.com> wrote:
>
> > Add pages in split_shadow_page_cache to the global counter
> > kvm_total_unused_cached_pages. These pages will be freed by MMU shrinker
> > in future commit.
> >
> > Signed-off-by: Vipin Sharma <vipinsh@...gle.com>
> > ---
> >  arch/x86/kvm/mmu/mmu.c | 7 +++++--
> >  1 file changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index df8dcb7e5de7..0ebb8a2eaf47 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -6149,7 +6149,9 @@ static void mmu_free_vm_memory_caches(struct kvm *kvm)
> >  {
> >       kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
> >       kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
> > -     kvm_mmu_free_memory_cache(&kvm->arch.split_shadow_page_cache);
> > +     mutex_lock(&kvm->slots_lock);
> > +     mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache);
> > +     mutex_unlock(&kvm->slots_lock);
>
> Taking a lock that belongs to the calling path inside the cache topup/free
> layer seems off.
>
> My vote goes to having a lock for each cache and taking that cache's lock
> when topping up/freeing it. It is more self-contained and architecturally
> nicer.
>
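If I follow, you mean something along these lines (rough sketch only; the
wrapper struct and helper names below are made up, not from this series):

        /* Sketch: lock lives with the cache; mutex_init() at VM init elided. */
        struct mmu_sp_cache {
                struct mutex lock;
                struct kvm_mmu_memory_cache cache;
        };

        static int mmu_sp_cache_topup(struct mmu_sp_cache *c, int min)
        {
                int r;

                /* Callers no longer need to hold slots_lock themselves. */
                mutex_lock(&c->lock);
                r = kvm_mmu_topup_memory_cache(&c->cache, min);
                mutex_unlock(&c->lock);
                return r;
        }

        static void mmu_sp_cache_free(struct mmu_sp_cache *c)
        {
                mutex_lock(&c->lock);
                kvm_mmu_free_memory_cache(&c->cache);
                mutex_unlock(&c->lock);
        }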

Yeah, that is one way to do it. However, in future patches, when I add
per-NUMA-node caches, that will add a lot of locks on the same code path
before a topup. In the split-huge-page case we know which NUMA node we
need to allocate from, so we can fine-tune which lock to take, but in
the fault path we don't know which NUMA node the page will come from,
so we need to top up all of the NUMA caches. Having a single lock
simplifies the code a little bit.
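
Roughly what I have in mind for the fault path (again just a sketch; the
per-node array and mmu_cache_lock below are hypothetical, they don't
exist in this series yet):

        static int mmu_topup_shadow_page_caches(struct kvm_vcpu *vcpu)
        {
                int nid, r = 0;

                /*
                 * The fault path does not know which node the shadow page
                 * will come from, so every node's cache must be topped up.
                 * Per-cache locks would mean one lock/unlock per node in
                 * this loop; a single lock covers the whole loop.
                 */
                mutex_lock(&vcpu->kvm->arch.mmu_cache_lock);
                for_each_online_node(nid) {
                        r = kvm_mmu_topup_memory_cache(
                                &vcpu->arch.mmu_shadow_page_cache[nid],
                                PT64_ROOT_MAX_LEVEL);
                        if (r)
                                break;
                }
                mutex_unlock(&vcpu->kvm->arch.mmu_cache_lock);
                return r;
        }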

I agree with you that per-cache locks would be more self-contained. I
will wait for others to chime in on this and go from there.
