Message-ID: <Zx_45FUW1QddzqOU@google.com>
Date: Mon, 28 Oct 2024 13:49:40 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: David Matlack <dmatlack@...gle.com>
Cc: Vipin Sharma <vipinsh@...gle.com>, pbonzini@...hat.com, zhi.wang.linux@...il.com, 
	weijiang.yang@...el.com, mizhang@...gle.com, liangchen.linux@...il.com, 
	kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/3] KVM: x86/mmu: Use MMU shrinker to shrink KVM MMU
 memory caches

On Mon, Oct 28, 2024, David Matlack wrote:
> On Fri, Oct 25, 2024 at 10:37 AM Vipin Sharma <vipinsh@...gle.com> wrote:
> >
> > On Thu, Oct 24, 2024 at 4:25 PM Sean Christopherson <seanjc@...gle.com> wrote:
> > >
> > > On Fri, Oct 04, 2024, Vipin Sharma wrote:
> > > > +out_mmu_memory_cache_unlock:
> > > > +     mutex_unlock(&vcpu->arch.mmu_memory_cache_lock);
> > >
> > > I've been thinking about this patch on and off for the past few weeks, and every
> > > time I come back to it I can't shake the feeling that we came up with a clever
> > > solution for a problem that doesn't exist.  I can't recall a single complaint
> > > about KVM consuming an unreasonable amount of memory for page tables.  In fact,
> > > the only time I can think of where the code in question caused problems was when
> > > I unintentionally inverted the iterator and zapped the newest SPs instead of the
> > > oldest SPs.
> > >
> > > So, I'm leaning more and more toward simply removing the shrinker integration.
> >
> > One thing we can agree on is that we don't need the MMU shrinker in its
> > current form, because it removes pages that are actively being used by
> > the VM instead of shrinking its caches.
> >
> > Regarding the current series, the biggest VM we can have in GCE has 416
> > vCPUs.  With each vCPU thread able to hold 40 pages in its cache, the
> > total cost would be around 65 MiB, which doesn't seem like much given
> > that these VMs have memory in the TiB range.  Since the caches are
> > bounded, I think it is fine not to have an MMU shrinker, as its impact
> > on KVM is minuscule.
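
(As a rough sanity check on the 65 MiB figure above, assuming the cache
objects are 4 KiB pages on x86:

    416 vCPUs * 40 pages/vCPU * 4 KiB/page = 66,560 KiB ~= 65 MiB

so the back-of-the-envelope estimate holds.)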
> 
> I have no objection to removing the shrinker entirely.

Let's do that.  In the unlikely scenario someone comes along with a strong use
case for purging the vCPU caches, we can always resurrect this approach.

Vipin, can you send a v3?
