Message-ID: <f1ef3118-2a8e-4bf2-b3b0-60ac4947e106@redhat.com>
Date: Tue, 26 Jan 2021 21:47:50 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Ben Gardon <bgardon@...gle.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
LKML <linux-kernel@...r.kernel.org>, kvm <kvm@...r.kernel.org>,
Peter Xu <peterx@...hat.com>, Peter Shier <pshier@...gle.com>,
Peter Feiner <pfeiner@...gle.com>,
Junaid Shahid <junaids@...gle.com>,
Jim Mattson <jmattson@...gle.com>,
Yulei Zhang <yulei.kernel@...il.com>,
Wanpeng Li <kernellwp@...il.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Xiao Guangrong <xiaoguangrong.eric@...il.com>
Subject: Re: [PATCH 15/24] kvm: mmu: Wrap mmu_lock cond_resched and needbreak
On 26/01/21 19:11, Ben Gardon wrote:
> When I did a strict replacement I found ~10% worse memory population
> performance.
> Running dirty_log_perf_test -v 96 -b 3g -i 5 with the TDP MMU
> disabled, I got 119 sec to populate memory as the baseline and 134 sec
> with an earlier version of this series which just replaced the
> spinlock with an rwlock. I believe this difference is statistically
> significant, but I didn't run multiple trials.
> I didn't take notes when profiling, but I'm pretty sure the rwlock
> slowpath showed up a lot. This was a very high contention scenario, so
> it's probably not indicative of real-world performance.
> In the slow path, the rwlock is certainly slower than a spin lock.
>
> If the real impact doesn't seem too large, I'd be very happy to just
> replace the spinlock.
Ok, so let's use the union idea and add a "#define KVM_HAVE_MMU_RWLOCK"
to x86. The MMU notifier functions in virt/kvm/kvm_main.c can use the
#define to pick between write_lock and spin_lock.
For x86 I want to switch to tdp_mmu=1 by default as soon as parallel
page faults are in, so we can use the rwlock unconditionally and drop
the wrappers, except possibly for some kind of kvm_mmu_lock/unlock_root
that chooses between read_lock for the TDP MMU and write_lock for the
shadow MMU.
Thanks!
Paolo