Message-ID: <Y+1KrHYEi1D4T14P@google.com>
Date: Wed, 15 Feb 2023 13:12:12 -0800
From: David Matlack <dmatlack@...gle.com>
To: Vipin Sharma <vipinsh@...gle.com>
Cc: seanjc@...gle.com, pbonzini@...hat.com, bgardon@...gle.com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [Patch v3 2/7] KVM: x86/mmu: Atomically clear SPTE dirty state
in the clear-dirty-log flow
On Fri, Feb 10, 2023 at 05:46:21PM -0800, Vipin Sharma wrote:
> Do an atomic-AND to clear the dirty state of SPTEs. Optimize the
> clear-dirty-log flow by skipping __handle_changed_spte() and calling
> kvm_set_pfn_dirty() directly instead.
>
> Atomic-AND allows fetching the latest SPTE value, clearing only its
> dirty state, and setting the new value in a single operation. This
> optimization avoids the unnecessary checks that would otherwise be
> executed by __handle_changed_spte().
>
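For reference, a minimal userspace sketch of the atomic-AND idea using
C11 atomics. SPTE_DIRTY_BIT, mark_pfn_dirty(), and the bit position are
illustrative stand-ins rather than the actual KVM definitions; the real
change operates on TDP MMU SPTEs and calls kvm_set_pfn_dirty():

  #include <stdatomic.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Illustrative dirty-bit position, not KVM's real dirty mask. */
  #define SPTE_DIRTY_BIT (1ULL << 9)

  /* Stand-in for kvm_set_pfn_dirty() on the pfn mapped by the SPTE. */
  static void mark_pfn_dirty(uint64_t old_spte)
  {
          printf("pfn of spte %#llx marked dirty\n",
                 (unsigned long long)old_spte);
  }

  static void clear_dirty_atomic(_Atomic uint64_t *sptep)
  {
          /*
           * A single atomic AND clears the dirty bit and returns the
           * previous value, so no cmpxchg loop and no full
           * changed-SPTE handling is needed.
           */
          uint64_t old = atomic_fetch_and(sptep, ~SPTE_DIRTY_BIT);

          if (old & SPTE_DIRTY_BIT)
                  mark_pfn_dirty(old);
  }

  int main(void)
  {
          _Atomic uint64_t spte = 0xabc000ULL | SPTE_DIRTY_BIT;

          clear_dirty_atomic(&spte);      /* dirty -> pfn marked dirty */
          clear_dirty_atomic(&spte);      /* already clean -> no-op */
          return 0;
  }

The point is simply that fetch-and returns the old value, so the caller
can tell whether the dirty bit was actually set without taking the
__handle_changed_spte() path.
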
> With the removal of tdp_mmu_set_spte_no_dirty_log(), the
> "record_dirty_log" parameter in __tdp_mmu_set_spte() is now obsolete:
> it will always be set to true by its caller. This dead code will be
> cleaned up in future commits.
>
> Tested on a VM (160 vCPUs, 160 GB memory) and found that the
> performance of the clear-dirty-log stage in dirty_log_perf_test
> improved by ~40%.
>
> Before optimization:
> --------------------
> Iteration 1 clear dirty log time: 3.638543593s
> Iteration 2 clear dirty log time: 3.145032742s
> Iteration 3 clear dirty log time: 3.142340358s
> Clear dirty log over 3 iterations took 9.925916693s. (Avg 3.308638897s/iteration)
>
> After optimization:
> -------------------
> Iteration 1 clear dirty log time: 2.318988110s
> Iteration 2 clear dirty log time: 1.794470164s
> Iteration 3 clear dirty log time: 1.791668628s
> Clear dirty log over 3 iterations took 5.905126902s. (Avg 1.968375634s/iteration)
>
> Signed-off-by: Vipin Sharma <vipinsh@...gle.com>
Reviewed-by: David Matlack <dmatlack@...gle.com>