Message-ID: <CAHVum0eXVwpwsrVC21XN1H0JvJ_QWnr3ERPYvSyRpwudVFtg8Q@mail.gmail.com>
Date: Fri, 4 Oct 2024 13:04:58 -0700
From: Vipin Sharma <vipinsh@...gle.com>
To: seanjc@...gle.com, pbonzini@...hat.com, dmatlack@...gle.com
Cc: zhi.wang.linux@...il.com, weijiang.yang@...el.com, mizhang@...gle.com,
liangchen.linux@...il.com, kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/3] KVM: x86/mmu: Use MMU shrinker to shrink KVM MMU
memory caches
On Fri, Oct 4, 2024 at 12:55 PM Vipin Sharma <vipinsh@...gle.com> wrote:
>
> Use the MMU shrinker to iterate through all the vCPUs of all the VMs and
> free pages allocated in the MMU memory caches. Protect cache allocation in
> the page fault and MMU load paths from the MMU shrinker by using a per-vCPU
> mutex. In the MMU shrinker, move the iterated VM to the end of the VMs list
> so that the pain of emptying caches is spread among other VMs too.
>
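(For anyone skimming the thread, a rough sketch of the scan side described
above, simplified to empty one VM per scan invocation. The per-vCPU mutex
name and the cache-emptying helper are placeholders for illustration, not
the exact code in the patch; the helper is sketched further down.)

/*
 * Sketch of the shrinker scan path: walk the VM list, skip vCPUs that are
 * busy filling their caches, and rotate the shrunk VM to the tail of
 * vm_list. vcpu->arch.mmu_memory_cache_lock and
 * kvm_mmu_empty_memory_cache() are assumed names.
 */
static unsigned long mmu_shrink_scan(struct shrinker *shrink,
				     struct shrink_control *sc)
{
	unsigned long i, freed = 0;
	struct kvm_vcpu *vcpu;
	struct kvm *kvm;

	mutex_lock(&kvm_lock);
	list_for_each_entry(kvm, &vm_list, vm_list) {
		kvm_for_each_vcpu(i, vcpu, kvm) {
			/* Don't fight a vCPU that is topping up its caches. */
			if (!mutex_trylock(&vcpu->arch.mmu_memory_cache_lock))
				continue;

			freed += kvm_mmu_empty_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
			freed += kvm_mmu_empty_memory_cache(&vcpu->arch.mmu_shadowed_info_cache);
			mutex_unlock(&vcpu->arch.mmu_memory_cache_lock);
		}

		/*
		 * Spread the pain: move this VM to the tail so the next scan
		 * starts emptying a different VM's caches.
		 */
		list_move_tail(&kvm->vm_list, &vm_list);
		break;
	}
	mutex_unlock(&kvm_lock);

	return freed ? freed : SHRINK_STOP;
}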
> The specific caches to empty are mmu_shadow_page_cache and
> mmu_shadowed_info_cache, as these caches store whole pages. Emptying them
> has a bigger impact for the shrinker than emptying the other caches, such
> as mmu_pte_list_desc_cache and mmu_page_header_cache.
>
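(The cache-emptying helper used in the sketch above could look roughly like
this; kvm_mmu_free_memory_cache() is the existing generic helper, while the
counting wrapper and its name are made up for illustration.)

/*
 * Drop everything cached in @mc and report how many objects were freed so
 * the shrinker can account them. For mmu_shadow_page_cache and
 * mmu_shadowed_info_cache every object is a whole page, which is why these
 * two caches are the ones worth the shrinker's attention.
 */
static unsigned long kvm_mmu_empty_memory_cache(struct kvm_mmu_memory_cache *mc)
{
	unsigned long freed = mc->nobjs;

	kvm_mmu_free_memory_cache(mc);
	return freed;
}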
> Holding the per-vCPU mutex ensures that a vCPU doesn't get surprised by
> finding its caches emptied after filling them up for page table
> allocations during page fault handling and MMU load operations. The
> per-vCPU mutex also ensures that the only race is between the MMU shrinker
> and the individual vCPUs, which should result in very little contention.
>
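(On the allocation side, the idea is roughly the following; the wrapper
function and the mutex name are made up, while mmu_topup_memory_caches() is
the existing helper that fills the caches before a fault is handled.)

/*
 * Hold the per-vCPU mutex from cache top-up until the fault handler is done
 * consuming the caches, so the shrinker's mutex_trylock() backs off instead
 * of emptying the caches underneath the vCPU.
 */
static int kvm_mmu_do_fault_example(struct kvm_vcpu *vcpu,
				    struct kvm_page_fault *fault)
{
	int r;

	mutex_lock(&vcpu->arch.mmu_memory_cache_lock);

	r = mmu_topup_memory_caches(vcpu, true);
	if (r)
		goto out_unlock;

	/* ... existing fault handling that allocates from the caches ... */

out_unlock:
	mutex_unlock(&vcpu->arch.mmu_memory_cache_lock);
	return r;
}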
> Signed-off-by: Vipin Sharma <vipinsh@...gle.com>
I also meant to add:

Suggested-by: Sean Christopherson <seanjc@...gle.com>
Suggested-by: David Matlack <dmatlack@...gle.com>

I can send a v3, or please add these when applying v2.