Message-ID: <CAHVum0ebkzXecZhEVC6DJyztX-aVD7mTmY6J6qgyBHM4sqT=vg@mail.gmail.com>
Date: Fri, 25 Oct 2024 10:36:32 -0700
From: Vipin Sharma <vipinsh@...gle.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: pbonzini@...hat.com, dmatlack@...gle.com, zhi.wang.linux@...il.com, 
	weijiang.yang@...el.com, mizhang@...gle.com, liangchen.linux@...il.com, 
	kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/3] KVM: x86/mmu: Use MMU shrinker to shrink KVM MMU
 memory caches

On Thu, Oct 24, 2024 at 4:25 PM Sean Christopherson <seanjc@...gle.com> wrote:
>
> On Fri, Oct 04, 2024, Vipin Sharma wrote:
> > +out_mmu_memory_cache_unlock:
> > +     mutex_unlock(&vcpu->arch.mmu_memory_cache_lock);
>
> I've been thinking about this patch on and off for the past few weeks, and every
> time I come back to it I can't shake the feeling that we came up with a clever
> solution for a problem that doesn't exist.  I can't recall a single complaint
> about KVM consuming an unreasonable amount of memory for page tables.  In fact,
> the only time I can think of where the code in question caused problems was when
> I unintentionally inverted the iterator and zapped the newest SPs instead of the
> oldest SPs.
>
> So, I'm leaning more and more toward simply removing the shrinker integration.

One thing we can agree on is that we don't need the MMU shrinker in
its current form: instead of shrinking the caches, it zaps shadow
pages that may very well be in active use by the VM.
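
For reference, shrinking the caches instead would look roughly like
the below. This is only a sketch along the lines of this series, not
the exact patch; mmu_memory_cache_lock is the per-vCPU mutex this
series adds, and the iteration/locking details are simplified:

static unsigned long mmu_shrink_scan(struct shrinker *shrinker,
				     struct shrink_control *sc)
{
	struct kvm *kvm;
	unsigned long i, freed = 0;

	mutex_lock(&kvm_lock);
	list_for_each_entry(kvm, &vm_list, vm_list) {
		struct kvm_vcpu *vcpu;

		kvm_for_each_vcpu(i, vcpu, kvm) {
			struct kvm_mmu_memory_cache *cache =
				&vcpu->arch.mmu_shadow_page_cache;

			/*
			 * Free only pages sitting unused in the vCPU's
			 * cache; shadow pages already in use by the VM
			 * are left alone.
			 */
			mutex_lock(&vcpu->arch.mmu_memory_cache_lock);
			freed += cache->nobjs;
			kvm_mmu_free_memory_cache(cache);
			mutex_unlock(&vcpu->arch.mmu_memory_cache_lock);
		}
		if (freed >= sc->nr_to_scan)
			break;
	}
	mutex_unlock(&kvm_lock);

	return freed ? freed : SHRINK_STOP;
}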

Regarding the current series: the biggest VM we can have in GCE has
416 vCPUs. With each vCPU able to hold 40 pages in its cache, the
total cost comes to around 65 MiB, which doesn't seem like much
considering these VMs have memory measured in TiB. Since the vCPU
caches are bounded, I think it is fine to not have an MMU shrinker,
as its impact on KVM would be minuscule.
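
Spelling out the arithmetic (assuming 4 KiB pages and the 40-object
per-vCPU cache capacity):

  416 vCPUs * 40 pages/vCPU * 4 KiB/page = 66,560 KiB ~= 65 MiB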
