Message-ID: <dba0ecc8-90ae-975f-7a27-3049d6951ba0@redhat.com>
Date: Fri, 25 Mar 2022 13:00:27 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Ben Gardon <bgardon@...gle.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Cc: Peter Xu <peterx@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
David Matlack <dmatlack@...gle.com>,
Jim Mattson <jmattson@...gle.com>,
David Dunn <daviddunn@...gle.com>,
Jing Zhang <jingzhangos@...gle.com>,
Junaid Shahid <junaids@...gle.com>
Subject: Re: [PATCH v2 0/9] KVM: x86/MMU: Optimize disabling dirty logging
On 3/21/22 23:43, Ben Gardon wrote:
> Currently disabling dirty logging with the TDP MMU is extremely slow.
> On a 96 vCPU / 96G VM it takes ~256 seconds to disable dirty logging
> with the TDP MMU, as opposed to ~4 seconds with the legacy MMU. This
> series optimizes TLB flushes and introduces in-place large page
> promotion, to bring the disable dirty log time down to ~3 seconds.
>
> Testing:
> Ran KVM selftests and kvm-unit-tests on an Intel Haswell. This
> series introduced no new failures.
Thanks, looks good. The one change I'd make is to place the
outcome of build_tdp_shadow_zero_bits_mask() in a global (say
tdp_shadow_zero_check) at kvm_configure_mmu() time;
tdp_max_root_level works as a conservative choice for the second
argument of build_tdp_shadow_zero_bits_mask().
No need to do anything though, I'll handle this later in 5.19 time (and
first merge my changes that factor out the constant part of
vcpu->arch.root_mmu initialization, since it is part of the same set of
ideas).
Paolo