Message-ID: <ef75a78b-aa3a-e1b9-96b7-37425f3d9165@huawei.com>
Date: Wed, 7 Apr 2021 09:31:59 +0800
From: Keqian Zhu <zhukeqian1@...wei.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: Paolo Bonzini <pbonzini@...hat.com>,
<linux-kernel@...r.kernel.org>, <kvm@...r.kernel.org>,
Ben Gardon <bgardon@...gle.com>
Subject: Re: [PATCH] KVM: MMU: protect TDP MMU pages only down to required
level
On 2021/4/7 7:38, Sean Christopherson wrote:
> On Tue, Apr 06, 2021, Keqian Zhu wrote:
>> Hi Paolo,
>>
>> I was just about to fix this issue, and found that you have already done it ;-)
>
> Ha, and meanwhile I'm having a serious case of deja vu[1]. It even received a
> variant of the magic "Queued, thanks"[2]. Doesn't appear in either of the 5.12
> pull requests though, must have gotten lost along the way.
Good job. We should pick them up :)
>
> [1] https://lkml.kernel.org/r/20210213005015.1651772-3-seanjc@google.com
> [2] https://lkml.kernel.org/r/b5ab72f2-970f-64bd-891c-48f1c303548d@redhat.com
>
>> Please feel free to add:
>>
>> Reviewed-by: Keqian Zhu <zhukeqian1@...wei.com>
>>
>> Thanks,
>> Keqian
>>
>> On 2021/4/2 20:17, Paolo Bonzini wrote:
>>> When using manual protection of dirty pages, it is not necessary
>>> to protect nested page tables down to the 4K level; instead KVM
>>> can protect only hugepages in order to split them lazily, and
>>> delay write protection at 4K-granularity until KVM_CLEAR_DIRTY_LOG.
>>> This was overlooked in the TDP MMU, so do it there as well.
>>>
>>> Fixes: a6a0b05da9f37 ("kvm: x86/mmu: Support dirty logging for the TDP MMU")
>>> Cc: Ben Gardon <bgardon@...gle.com>
>>> Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
>>> ---
>>> arch/x86/kvm/mmu/mmu.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>>> index efb41f31e80a..0d92a269c5fa 100644
>>> --- a/arch/x86/kvm/mmu/mmu.c
>>> +++ b/arch/x86/kvm/mmu/mmu.c
>>> @@ -5538,7 +5538,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
>>>  	flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
>>>  				  start_level, KVM_MAX_HUGEPAGE_LEVEL, false);
>>>  	if (is_tdp_mmu_enabled(kvm))
>>> -		flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, PG_LEVEL_4K);
>>> +		flush |= kvm_tdp_mmu_wrprot_slot(kvm, memslot, start_level);
>>>  	write_unlock(&kvm->mmu_lock);
>>>
>>>  	/*
>>>
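For context, the userspace side of the "manual protection" flow that the
commit message refers to looks roughly like the sketch below. The ioctls,
capability and struct fields are taken from the documented KVM dirty-log
API; the helper name, its parameters and the single-function layout are
only illustrative (in real code the capability would be enabled once at
VM setup, and return values would be checked).

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical helper: fetch and clear the dirty bitmap of one memslot. */
static void harvest_and_clear_dirty_log(int vm_fd, uint32_t slot,
					uint32_t num_pages, void *bitmap)
{
	/* Opt in to manual protection (optionally | KVM_DIRTY_LOG_INITIALLY_SET). */
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2,
		.args[0] = KVM_DIRTY_LOG_MANUAL_PROTECT_ENABLE,
	};
	struct kvm_dirty_log log = {
		.slot = slot,
		.dirty_bitmap = bitmap,
	};
	struct kvm_clear_dirty_log clear = {
		.slot = slot,
		.first_page = 0,
		.num_pages = num_pages,
		.dirty_bitmap = bitmap,
	};

	ioctl(vm_fd, KVM_ENABLE_CAP, &cap);

	/*
	 * With manual protection enabled, KVM_GET_DIRTY_LOG only copies the
	 * bitmap to userspace; it neither clears it nor re-protects pages.
	 */
	ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);

	/*
	 * KVM_CLEAR_DIRTY_LOG is where the deferred 4K-granularity write
	 * protection (and lazy hugepage splitting) happens, and only for the
	 * bits that are set in the bitmap passed back here.
	 */
	ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);
}

In other words, the initial protection done when dirty logging is enabled
on a slot can stop at hugepage granularity (which is what the hunk above
makes the TDP MMU do as well), and the per-4K write protection is paid
incrementally as userspace clears ranges of the bitmap.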
> .
>