Message-ID: <ZBj/n3g/c0iqQAUj@google.com>
Date: Mon, 20 Mar 2023 17:51:43 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Vipin Sharma <vipinsh@...gle.com>
Cc: pbonzini@...hat.com, bgardon@...gle.com, dmatlack@...gle.com,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [Patch v3 4/7] KVM: x86/mmu: Optimize SPTE change for aging gfn range
On Fri, Feb 10, 2023, Vipin Sharma wrote:
> 	} else {
> +		new_spte = mark_spte_for_access_track(iter->old_spte);
> +		iter->old_spte = kvm_tdp_mmu_write_spte(iter->sptep,
> +							iter->old_spte, new_spte,
> +							iter->level);
>  		/*
>  		 * Capture the dirty status of the page, so that it doesn't get
>  		 * lost when the SPTE is marked for access tracking.
>  		 */
> -		if (is_writable_pte(new_spte))
> -			kvm_set_pfn_dirty(spte_to_pfn(new_spte));
> -
> -		new_spte = mark_spte_for_access_track(new_spte);
> +		if (is_writable_pte(iter->old_spte))
> +			kvm_set_pfn_dirty(spte_to_pfn(iter->old_spte));

Moving this block below kvm_tdp_mmu_write_spte() is an unrelated change. Much to
my chagrin, I discovered that past me gave you this code. I still think the change
is correct, but I dropped it for now, mostly because the legacy/shadow MMU has the
same pattern (marks the PFN dirty before setting the SPTE).
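For reference, the shadow MMU code I'm alluding to looks roughly like this
(paraphrasing mmu_spte_age() from memory, not a verbatim copy):

	/* The dirty status is captured _before_ the SPTE is updated... */
	if (is_writable_pte(spte))
		kvm_set_pfn_dirty(spte_to_pfn(spte));

	spte = mark_spte_for_access_track(spte);
	/* ...and only then is the new value written into the SPTE. */
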
I think this might actually be a bug fix, e.g. if the XCHG races with a fast page
fault fix and drops the Writable bit, the CPU could insert a writable entry into the
TLB without KVM invoking kvm_set_pfn_dirty(). But I'm not 100% confident that I'm
not missing something, and _if_ there's a bug then mmu_spte_age() needs the same
fix, so for now, I dropped it.
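Spelling out the race I have in mind, using the old ordering and the names from
the removed lines above (this is just my mental model, so take it with a grain
of salt):

	/*
	 * new_spte holds the value read before the racing fast #PF, i.e. the
	 * Writable bit isn't set, so the PFN is not marked dirty here.
	 */
	if (is_writable_pte(new_spte))
		kvm_set_pfn_dirty(spte_to_pfn(new_spte));

	/*
	 * <fast page fault's CMPXCHG sets the Writable bit here; the vCPU can
	 *  now insert a writable TLB entry and dirty the page>
	 */

	new_spte = mark_spte_for_access_track(new_spte);

	/*
	 * <the XCHG that installs new_spte clobbers the Writable bit; its
	 *  return value shows the SPTE was Writable, but with the old
	 *  ordering nothing looks at it, so kvm_set_pfn_dirty() never runs
	 *  for the page.  Checking is_writable_pte() on the value returned
	 *  by the XCHG, as the patch does, closes that window.>
	 */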