Message-ID: <20240913214316.1945951-1-vipinsh@google.com>
Date: Fri, 13 Sep 2024 14:43:14 -0700
From: Vipin Sharma <vipinsh@...gle.com>
To: seanjc@...gle.com, pbonzini@...hat.com
Cc: dmatlack@...gle.com, zhi.wang.linux@...il.com, weijiang.yang@...el.com,
mizhang@...gle.com, liangchen.linux@...il.com, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Vipin Sharma <vipinsh@...gle.com>
Subject: [PATCH 0/2] KVM: x86/mmu: Repurpose MMU shrinker into page cache shrinker
This series is extracted out from the NUMA aware page table series[1].
MMU shrinker changes were in patches 1 to 9 in the old series.
This series changes the KVM MMU shrinker behaviour to empty the MMU page
caches which are used during page fault and MMU load operations. It also
incorporates feedback on the MMU shrinker from the NUMA aware page table
series[1].
The KVM MMU shrinker has not been very effective in alleviating pain
under memory pressure. It frees pages that are actively being used,
which degrades VM performance; the VM takes faults and brings them back
into its page tables anyway. More discussion happened at [2]. Overall,
the consensus was to repurpose it into code which frees pages from the
KVM MMU page caches.
Recently [3], there was a discussion about disabling the shrinker for
the TDP MMU. The revival of this series is the result of that discussion.
There are two major differences from the old series.
1. There is no global accounting of cache pages; the total is calculated
   dynamically in mmu_shrink_count(). This has two effects: i) the count
   will be inaccurate, but the code is much simpler, and ii) kvm_lock is
   used here, which should be fine since mmu_shrink_scan() also holds
   the lock for its operation.
2. Only mmu_shadow_page_cache and mmu_shadowed_info_cache are emptied.
   This version doesn't empty split_shadow_page_cache, as it is used
   only during the dirty logging operation and there is one per VM,
   unlike the other two which are per vCPU. I am not fully convinced
   that adding it is needed, as it would add the cost of one more mutex
   and of synchronizing it in the shrinker. Also, if a VM is being dirty
   tracked it will most likely be migrated (memory pressure might be the
   reason in the first place), so it is better not to hinder the
   migration effort and to let vCPUs free up their caches. If someone
   convinces me that the split cache is needed as well, I can send a
   separate patch to add it.
[1] https://lore.kernel.org/kvm/20230306224127.1689967-1-vipinsh@google.com/
[2] https://lore.kernel.org/lkml/Y45dldZnI6OIf+a5@google.com/
[3] https://lore.kernel.org/kvm/20240819214014.GA2313467.vipinsh@google.com/#t
v1:
- No global counting of pages in cache, as this number might not remain
  the same between calls of mmu_shrink_count() and mmu_shrink_scan().
- Count cache pages in mmu_shrink_count(). KVM can tolerate inaccuracy
here.
- Empty only mmu_shadow_page_cache and mmu_shadowed_info_cache; don't
  empty split_shadow_page_cache.
v0: Patches 1-9 from NUMA aware page table series.
https://lore.kernel.org/kvm/20230306224127.1689967-1-vipinsh@google.com/
Vipin Sharma (2):
KVM: x86/mmu: Change KVM mmu shrinker to no-op
KVM: x86/mmu: Use MMU shrinker to shrink KVM MMU memory caches
arch/x86/include/asm/kvm_host.h | 7 +-
arch/x86/kvm/mmu/mmu.c | 139 +++++++++++++-------------------
arch/x86/kvm/mmu/paging_tmpl.h | 14 ++--
include/linux/kvm_host.h | 1 +
virt/kvm/kvm_main.c | 8 +-
5 files changed, 78 insertions(+), 91 deletions(-)
base-commit: 12680d7b8ac4db2eba6237a21a93d2b0e78a52a6
--
2.46.0.662.g92d0881bb0-goog