Message-ID: <a0a695cb-8fe7-b68b-8d39-8be0e32f9f4d@loongson.cn>
Date: Mon, 24 Jun 2024 09:12:32 +0800
From: maobibo <maobibo@...ngson.cn>
To: Huacai Chen <chenhuacai@...nel.org>
Cc: Tianrui Zhao <zhaotianrui@...ngson.cn>, WANG Xuerui <kernel@...0n.name>,
Sean Christopherson <seanjc@...gle.com>, kvm@...r.kernel.org,
loongarch@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 6/6] LoongArch: KVM: Mark page accessed and dirty with page ref added
On 2024/6/22 1:21 PM, Huacai Chen wrote:
> Hi, Bibo,
>
> What is the relationship between this patch and the below one?
> https://lore.kernel.org/loongarch/20240611034609.3442344-1-maobibo@loongson.cn/T/#u
It is an updated version of the patch at that link. I put all the
migration-related patches into one patch set, to keep them from getting
lost among so many mail threads :)
Regards
Bibo Mao
>
>
> Huacai
>
> On Wed, Jun 19, 2024 at 4:09 PM Bibo Mao <maobibo@...ngson.cn> wrote:
>>
>> Function kvm_map_page_fast() is the fast path of the secondary mmu page
>> fault flow; the pfn is parsed by the secondary mmu page table walker.
>> However, the corresponding page reference is not added, so it is
>> dangerous to access the page outside of mmu_lock.
>>
>> Here the page ref is added inside mmu_lock, and kvm_set_pfn_accessed()
>> and kvm_set_pfn_dirty() are called with the page ref held, so that the
>> page cannot be freed by others in the meantime.
>>
>> Also, the explicit kvm_set_pfn_accessed() is removed here, since it is
>> already called by the following kvm_release_pfn_clean().
>>
>> Signed-off-by: Bibo Mao <maobibo@...ngson.cn>
>> ---
>> arch/loongarch/kvm/mmu.c | 23 +++++++++++++----------
>> 1 file changed, 13 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/loongarch/kvm/mmu.c b/arch/loongarch/kvm/mmu.c
>> index 3b862f3a72cb..5a820a81fd97 100644
>> --- a/arch/loongarch/kvm/mmu.c
>> +++ b/arch/loongarch/kvm/mmu.c
>> @@ -557,6 +557,7 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
>> gfn_t gfn = gpa >> PAGE_SHIFT;
>> struct kvm *kvm = vcpu->kvm;
>> struct kvm_memory_slot *slot;
>> + struct page *page;
>>
>> spin_lock(&kvm->mmu_lock);
>>
>> @@ -599,19 +600,22 @@ static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool writ
>> if (changed) {
>> kvm_set_pte(ptep, new);
>> pfn = kvm_pte_pfn(new);
>> + page = kvm_pfn_to_refcounted_page(pfn);
>> + if (page)
>> + get_page(page);
>> }
>> spin_unlock(&kvm->mmu_lock);
>>
>> - /*
>> - * Fixme: pfn may be freed after mmu_lock
>> - * kvm_try_get_pfn(pfn)/kvm_release_pfn pair to prevent this?
>> - */
>> - if (kvm_pte_young(changed))
>> - kvm_set_pfn_accessed(pfn);
>> + if (changed) {
>> + if (kvm_pte_young(changed))
>> + kvm_set_pfn_accessed(pfn);
>>
>> - if (kvm_pte_dirty(changed)) {
>> - mark_page_dirty(kvm, gfn);
>> - kvm_set_pfn_dirty(pfn);
>> + if (kvm_pte_dirty(changed)) {
>> + mark_page_dirty(kvm, gfn);
>> + kvm_set_pfn_dirty(pfn);
>> + }
>> + if (page)
>> + put_page(page);
>> }
>> return ret;
>> out:
>> @@ -920,7 +924,6 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
>> kvm_set_pfn_dirty(pfn);
>> }
>>
>> - kvm_set_pfn_accessed(pfn);
>> kvm_release_pfn_clean(pfn);
>> out:
>> srcu_read_unlock(&kvm->srcu, srcu_idx);
>> --
>> 2.39.3
>>
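For readers skimming the thread, the fix is the classic take-a-reference-
under-the-lock idiom. The following condensed sketch restates the first
hunk above with declarations and error paths omitted, so it is illustrative
rather than the exact kernel code:

	spin_lock(&kvm->mmu_lock);
	/* ... fast-path walk: locate and update the secondary mmu PTE ... */
	if (changed) {
		kvm_set_pte(ptep, new);
		pfn = kvm_pte_pfn(new);
		/* Pin the page while mmu_lock still guarantees the pfn is valid. */
		page = kvm_pfn_to_refcounted_page(pfn);
		if (page)
			get_page(page);
	}
	spin_unlock(&kvm->mmu_lock);

	if (changed) {
		/* Safe: the reference taken above keeps the page alive here. */
		if (kvm_pte_young(changed))
			kvm_set_pfn_accessed(pfn);
		if (kvm_pte_dirty(changed)) {
			mark_page_dirty(kvm, gfn);
			kvm_set_pfn_dirty(pfn);
		}
		if (page)
			put_page(page);
	}

Without the get_page()/put_page() pair, the page backing the pfn could be
freed between spin_unlock() and the kvm_set_pfn_accessed()/kvm_set_pfn_dirty()
calls, which is exactly the race the old "Fixme" comment pointed at.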