Message-ID: <b3da0d70-30e4-e128-cb76-16d335829541@redhat.com>
Date: Fri, 23 Apr 2021 10:36:09 +1000
From: Gavin Shan <gshan@...hat.com>
To: Keqian Zhu <zhukeqian1@...wei.com>, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, kvm@...r.kernel.org,
kvmarm@...ts.cs.columbia.edu, Marc Zyngier <maz@...nel.org>
Subject: Re: [PATCH v4 1/2] kvm/arm64: Remove the creation time's mapping of
MMIO regions
On 4/16/21 12:03 AM, Keqian Zhu wrote:
> The MMIO regions may be unmapped for many reasons and can be remapped
> by the stage2 fault path. Mapping MMIO regions at creation time is only
> a minor optimization, and it makes these two mapping paths hard to keep
> in sync.
>
> Remove the mapping code while keeping the useful sanity check.
>
> Signed-off-by: Keqian Zhu <zhukeqian1@...wei.com>
> ---
> arch/arm64/kvm/mmu.c | 38 +++-----------------------------------
> 1 file changed, 3 insertions(+), 35 deletions(-)
>
Reviewed-by: Gavin Shan <gshan@...hat.com>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 8711894db8c2..c59af5ca01b0 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1301,7 +1301,6 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
> {
> hva_t hva = mem->userspace_addr;
> hva_t reg_end = hva + mem->memory_size;
> - bool writable = !(mem->flags & KVM_MEM_READONLY);
> int ret = 0;
>
> if (change != KVM_MR_CREATE && change != KVM_MR_MOVE &&
> @@ -1318,8 +1317,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
> mmap_read_lock(current->mm);
> /*
> * A memory region could potentially cover multiple VMAs, and any holes
> - * between them, so iterate over all of them to find out if we can map
> - * any of them right now.
> + * between them, so iterate over all of them.
> *
> * +--------------------------------------------+
> * +---------------+----------------+ +----------------+
> @@ -1330,50 +1328,20 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
> */
> do {
> struct vm_area_struct *vma = find_vma(current->mm, hva);
> - hva_t vm_start, vm_end;
>
> if (!vma || vma->vm_start >= reg_end)
> break;
>
> - /*
> - * Take the intersection of this VMA with the memory region
> - */
> - vm_start = max(hva, vma->vm_start);
> - vm_end = min(reg_end, vma->vm_end);
> -
> if (vma->vm_flags & VM_PFNMAP) {
> - gpa_t gpa = mem->guest_phys_addr +
> - (vm_start - mem->userspace_addr);
> - phys_addr_t pa;
> -
> - pa = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
> - pa += vm_start - vma->vm_start;
> -
> /* IO region dirty page logging not allowed */
> if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES) {
> ret = -EINVAL;
> - goto out;
> - }
> -
> - ret = kvm_phys_addr_ioremap(kvm, gpa, pa,
> - vm_end - vm_start,
> - writable);
> - if (ret)
> break;
> + }
> }
> - hva = vm_end;
> + hva = min(reg_end, vma->vm_end);
> } while (hva < reg_end);
>
> - if (change == KVM_MR_FLAGS_ONLY)
> - goto out;
> -
> - spin_lock(&kvm->mmu_lock);
> - if (ret)
> - unmap_stage2_range(&kvm->arch.mmu, mem->guest_phys_addr, mem->memory_size);
> - else if (!cpus_have_final_cap(ARM64_HAS_STAGE2_FWB))
> - stage2_flush_memslot(kvm, memslot);
> - spin_unlock(&kvm->mmu_lock);
> -out:
> mmap_read_unlock(current->mm);
> return ret;
> }
>
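For reference, the post-patch loop reduces to a walk over the VMAs covering the memslot's HVA range that only rejects dirty logging on PFNMAP regions. Below is a hypothetical, self-contained model of that walk (struct names, `find_vma()`, and the flag values are simplified stand-ins, not the kernel's actual definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in flag values for illustration only. */
#define VM_PFNMAP               0x1UL
#define KVM_MEM_LOG_DIRTY_PAGES 0x2UL

struct vma { unsigned long vm_start, vm_end, vm_flags; };

/* Simplified stand-in for the kernel's find_vma(): first VMA whose
 * vm_end is above addr, or NULL. Assumes the array is sorted. */
static struct vma *find_vma(struct vma *vmas, size_t n, unsigned long addr)
{
	for (size_t i = 0; i < n; i++)
		if (vmas[i].vm_end > addr)
			return &vmas[i];
	return NULL;
}

/* Models the post-patch kvm_arch_prepare_memory_region() walk:
 * returns 0 on success, -1 (standing in for -EINVAL) if a PFNMAP VMA
 * overlaps the region while dirty page logging is requested. */
static int check_region(struct vma *vmas, size_t n,
			unsigned long hva, unsigned long reg_end,
			unsigned long memslot_flags)
{
	do {
		struct vma *vma = find_vma(vmas, n, hva);

		/* No VMA, or a hole up to reg_end: nothing left to check. */
		if (!vma || vma->vm_start >= reg_end)
			break;

		/* IO region dirty page logging not allowed. */
		if ((vma->vm_flags & VM_PFNMAP) &&
		    (memslot_flags & KVM_MEM_LOG_DIRTY_PAGES))
			return -1;

		/* Advance past this VMA (hva = min(reg_end, vma->vm_end)). */
		hva = reg_end < vma->vm_end ? reg_end : vma->vm_end;
	} while (hva < reg_end);

	return 0;
}
```

The actual mapping of any PFNMAP range is now left entirely to the stage-2 fault path, so the prepare hook only validates flags.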