Message-ID: <YvOiZ2jp2Fv0Ex0J@phenom.ffwll.local>
Date: Wed, 10 Aug 2022 14:19:51 +0200
From: Daniel Vetter <daniel@...ll.ch>
To: Yonghua Huang <yonghua.huang@...el.com>
Cc: gregkh@...uxfoundation.org, linux-kernel@...r.kernel.org,
stable@...r.kernel.org, reinette.chatre@...el.com,
zhi.a.wang@...el.com, yu1.wang@...el.com, fei1.Li@...el.com,
Linux MM <linux-mm@...ck.org>,
DRI Development <dri-devel@...ts.freedesktop.org>
Subject: Re: [PATCH] virt: acrn: obtain pa from VMA with PFNMAP flag
On Mon, Feb 28, 2022 at 05:22:12AM +0300, Yonghua Huang wrote:
> acrn_vm_ram_map() can't pin user pages that carry the VM_PFNMAP flag
> by calling get_user_pages_fast(): such physical pages are typically
> mapped by a kernel driver, which sets VM_PFNMAP on the VMA.
>
> This patch fixes the logic that sets up the EPT mapping for a
> PFN-mapped RAM region by checking the VMA's VM_PFNMAP flag before
> adding the EPT mapping for it.
>
> Fixes: 88f537d5e8dd ("virt: acrn: Introduce EPT mapping management")
> Signed-off-by: Yonghua Huang <yonghua.huang@...el.com>
> Signed-off-by: Fei Li <fei1.li@...el.com>
> ---
> drivers/virt/acrn/mm.c | 24 ++++++++++++++++++++++++
> 1 file changed, 24 insertions(+)
>
> diff --git a/drivers/virt/acrn/mm.c b/drivers/virt/acrn/mm.c
> index c4f2e15c8a2b..3b1b1e7a844b 100644
> --- a/drivers/virt/acrn/mm.c
> +++ b/drivers/virt/acrn/mm.c
> @@ -162,10 +162,34 @@ int acrn_vm_ram_map(struct acrn_vm *vm, struct acrn_vm_memmap *memmap)
> void *remap_vaddr;
> int ret, pinned;
> u64 user_vm_pa;
> + unsigned long pfn;
> + struct vm_area_struct *vma;
>
> if (!vm || !memmap)
> return -EINVAL;
>
> + mmap_read_lock(current->mm);
> + vma = vma_lookup(current->mm, memmap->vma_base);
> + if (vma && ((vma->vm_flags & VM_PFNMAP) != 0)) {
> + if ((memmap->vma_base + memmap->len) > vma->vm_end) {
> + mmap_read_unlock(current->mm);
> + return -EINVAL;
> + }
> +
> + ret = follow_pfn(vma, memmap->vma_base, &pfn);
This races: don't use follow_pfn(), and most definitely don't add new
users of it. In some cases follow_pte() works instead, but even then the
pte/pfn is only valid for as long as you hold the pte spinlock.
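Any safe use has to follow roughly this pattern (untested sketch only,
the helper name is made up; assumes the current follow_pte() signature
from <linux/mm.h>):

	/*
	 * Illustrative only: the pfn is usable solely while the pte
	 * spinlock returned by follow_pte() is held.
	 */
	static int pfnmap_peek_example(struct vm_area_struct *vma,
				       unsigned long addr)
	{
		spinlock_t *ptl;
		pte_t *ptep;
		unsigned long pfn;
		int ret;

		ret = follow_pte(vma->vm_mm, addr, &ptep, &ptl);
		if (ret)
			return ret;

		pfn = pte_pfn(*ptep);
		/* use pfn here, while the pte spinlock is held ... */
		pte_unmap_unlock(ptep, ptl);
		/* ... once it is dropped, the pfn may be stale */
		return 0;
	}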
> + mmap_read_unlock(current->mm);
Definitely after here there are zero guarantees left for this pfn; it
could point at anything.
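For example (hypothetical interleaving):

	CPU 0 (acrn_vm_ram_map)            CPU 1 (userspace/driver)

	follow_pfn(vma, vma_base, &pfn)
	mmap_read_unlock(current->mm)
	                                   munmap() zaps the VMA, the
	                                   backing pages get reused
	acrn_mm_region_add(...,
			   PFN_PHYS(pfn), ...)

and the EPT now maps memory the guest no longer owns.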
Please fix. I tried pretty hard to get rid of follow_pfn(), but some of
the remaining users are just too hard to fix (e.g. kvm needs a pretty
huge rewrite to get it all sorted).
Cheers, Daniel
> + if (ret < 0) {
> + dev_dbg(acrn_dev.this_device,
> + "Failed to lookup PFN at VMA:%pK.\n", (void *)memmap->vma_base);
> + return ret;
> + }
> +
> + return acrn_mm_region_add(vm, memmap->user_vm_pa,
> + PFN_PHYS(pfn), memmap->len,
> + ACRN_MEM_TYPE_WB, memmap->attr);
> + }
> + mmap_read_unlock(current->mm);
> +
> /* Get the page number of the map region */
> nr_pages = memmap->len >> PAGE_SHIFT;
> pages = vzalloc(nr_pages * sizeof(struct page *));
>
> base-commit: 73878e5eb1bd3c9656685ca60bc3a49d17311e0c
> --
> 2.25.1
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch