Date:   Thu, 9 Mar 2023 16:36:26 -0800
From:   Vipin Sharma <vipinsh@...gle.com>
To:     David Matlack <dmatlack@...gle.com>
Cc:     seanjc@...gle.com, pbonzini@...hat.com, bgardon@...gle.com,
        jmattson@...gle.com, mizhang@...gle.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [Patch v4 03/18] KVM: x86/mmu: Track count of pages in KVM MMU
 page caches globally

On Thu, Mar 9, 2023 at 4:22 PM David Matlack <dmatlack@...gle.com> wrote:
>
> On Mon, Mar 06, 2023 at 02:41:12PM -0800, Vipin Sharma wrote:
> >
> >  static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
> >  {
> >       kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
> > -     kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> > +     mutex_lock(&vcpu->arch.mmu_shadow_page_cache_lock);
> > +     mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
> > +     mutex_unlock(&vcpu->arch.mmu_shadow_page_cache_lock);
>
> Is this lock necessary (even when the shrinker is hooked up)?
> mmu_free_memory_caches() is only called when KVM fails to create a vCPU
> (before it has been added to vcpu_array) or during VM destruction (after
> the VM has been removed from vm_list).

My approach was that if the shrinker ran just before VM destruction
and removed pages, it would decrement the nobjs variable in the cache.
Then, when the VM is destroyed, mmu_free_sp_memory_cache() first reads
nobjs to update the global counter and then frees the cache. To be
sure that the latest value is read and there is no memory-ordering
issue, I used a mutex.
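
To make the concern concrete, this is roughly the ordering I had in
mind (just a sketch; the global counter name below is a placeholder,
not the actual one used in this series):

static void mmu_free_sp_memory_cache(struct kvm_mmu_memory_cache *cache)
{
	/*
	 * Read nobjs before freeing so the global count drops by
	 * exactly the number of pages still cached here.  The worry
	 * was that the shrinker, running concurrently on another CPU,
	 * could have decremented nobjs, and this read might not observe
	 * that update without some form of synchronization.
	 */
	percpu_counter_sub(&kvm_total_unused_mmu_pages, cache->nobjs);
	kvm_mmu_free_memory_cache(cache);
}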

I discussed this with Sean offline and he pointed out that x86 is
strongly ordered, so a mutex is not needed when freeing the memory
caches.
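
So the lock can be dropped there in v5; the hunk above would then
become something like this (a sketch only, assuming the rest of the
function stays as it is upstream):

static void mmu_free_memory_caches(struct kvm_vcpu *vcpu)
{
	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_pte_list_desc_cache);
	/*
	 * No mmu_shadow_page_cache_lock needed here: the vCPU is either
	 * not yet in vcpu_array or the VM is already off vm_list, so
	 * the shrinker cannot race with this free.
	 */
	mmu_free_sp_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_header_cache);
}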
