Message-ID: <d49fad17-f515-d4f2-cef2-4108c8375747@amd.com>
Date: Mon, 22 Mar 2021 08:47:50 +0100
From: Christian König <christian.koenig@....com>
To: Thomas Hellström (Intel)
<thomas_os@...pmail.org>, dri-devel@...ts.freedesktop.org
Cc: David Airlie <airlied@...ux.ie>, Daniel Vetter <daniel@...ll.ch>,
Andrew Morton <akpm@...ux-foundation.org>,
Jason Gunthorpe <jgg@...dia.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 2/2] mm,drm/ttm: Use VM_PFNMAP for TTM vmas
On 21.03.21 at 19:45, Thomas Hellström (Intel) wrote:
> To block fast gup we need to make sure TTM ptes are always special.
> With MIXEDMAP, on architectures that don't support pte_special, we
> insert normal ptes, but OTOH on those architectures fast gup is not
> supported.
> At the same time, the function documentation for vm_normal_page() suggests
> that ptes pointing to system memory pages of MIXEDMAP vmas are always
> normal, but that doesn't seem consistent with what's implemented in
> vmf_insert_mixed(). I'm thus not entirely sure this patch is actually
> needed.
>
> But to make sure, and to also avoid normal (non-fast) gup, make all
> TTM vmas PFNMAP. With PFNMAP we can't allow COW mappings
> anymore, so make is_cow_mapping() available and use it to reject
> COW mappings at mmap time.
I would separate the disallowing of COW mappings from the PFNMAP change. I'm
pretty sure that COW mappings never worked on TTM BOs in the first place.
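
As an illustration of what disallowing COW means in practice (a minimal
userspace sketch, not part of the patch; the BO fd, offset and size below are
placeholders for whatever the driver's usual mmap-offset query returns), any
MAP_PRIVATE mapping of a TTM BO is now refused with -EINVAL, while MAP_SHARED
keeps working:

#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>

/* Sketch only: 'fd' is an open DRM file, 'bo_offset'/'bo_size' are
 * placeholders for the driver-provided fake mmap offset and BO size. */
static void try_map(int fd, off_t bo_offset, size_t bo_size, int flags,
		    const char *name)
{
	void *p = mmap(NULL, bo_size, PROT_READ | PROT_WRITE, flags,
		       fd, bo_offset);

	if (p == MAP_FAILED)
		printf("%s mapping rejected: errno=%d\n", name, errno);
	else
		munmap(p, bo_size);
}

/*
 * try_map(fd, off, sz, MAP_SHARED,  "shared");   still succeeds
 * try_map(fd, off, sz, MAP_PRIVATE, "private");  now fails with EINVAL,
 *	since MAP_PRIVATE file mappings keep VM_MAYWRITE but never set
 *	VM_SHARED, which is exactly the combination is_cow_mapping() tests.
 */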
But either way this patch is Reviewed-by: Christian König
<christian.koenig@....com>.
Thanks,
Christian.
>
> There was previously a comment in the code that WC mappings together
> with x86 PAT + PFNMAP were bad for performance. However, from looking at
> vmf_insert_mixed() it looks like in the current code PFNMAP and MIXEDMAP
> are handled the same for architectures that support pte_special. This
> means there should not be a performance difference anymore, but this
> needs to be verified.
>
> Cc: Christian Koenig <christian.koenig@....com>
> Cc: David Airlie <airlied@...ux.ie>
> Cc: Daniel Vetter <daniel@...ll.ch>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Jason Gunthorpe <jgg@...dia.com>
> Cc: linux-mm@...ck.org
> Cc: dri-devel@...ts.freedesktop.org
> Cc: linux-kernel@...r.kernel.org
> Signed-off-by: Thomas Hellström (Intel) <thomas_os@...pmail.org>
> ---
> drivers/gpu/drm/ttm/ttm_bo_vm.c | 22 ++++++++--------------
> include/linux/mm.h | 5 +++++
> mm/internal.h | 5 -----
> 3 files changed, 13 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> index 1c34983480e5..708c6fb9be81 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> @@ -372,12 +372,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
> * at arbitrary times while the data is mmap'ed.
> * See vmf_insert_mixed_prot() for a discussion.
> */
> - if (vma->vm_flags & VM_MIXEDMAP)
> - ret = vmf_insert_mixed_prot(vma, address,
> - __pfn_to_pfn_t(pfn, PFN_DEV),
> - prot);
> - else
> - ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
> + ret = vmf_insert_pfn_prot(vma, address, pfn, prot);
>
> /* Never error on prefaulted PTEs */
> if (unlikely((ret & VM_FAULT_ERROR))) {
> @@ -555,18 +550,14 @@ static void ttm_bo_mmap_vma_setup(struct ttm_buffer_object *bo, struct vm_area_s
> * Note: We're transferring the bo reference to
> * vma->vm_private_data here.
> */
> -
> vma->vm_private_data = bo;
>
> /*
> - * We'd like to use VM_PFNMAP on shared mappings, where
> - * (vma->vm_flags & VM_SHARED) != 0, for performance reasons,
> - * but for some reason VM_PFNMAP + x86 PAT + write-combine is very
> - * bad for performance. Until that has been sorted out, use
> - * VM_MIXEDMAP on all mappings. See freedesktop.org bug #75719
> + * PFNMAP forces us to block COW mappings in mmap(),
> + * and with MIXEDMAP we would incorrectly allow fast gup
> + * on TTM memory on architectures that don't have pte_special.
> */
> - vma->vm_flags |= VM_MIXEDMAP;
> - vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
> + vma->vm_flags |= VM_PFNMAP | VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
> }
>
> int ttm_bo_mmap(struct file *filp, struct vm_area_struct *vma,
> @@ -579,6 +570,9 @@ int ttm_bo_mmap(struct file *filp, struct vm_area_struct *vma,
> if (unlikely(vma->vm_pgoff < DRM_FILE_PAGE_OFFSET_START))
> return -EINVAL;
>
> + if (unlikely(is_cow_mapping(vma->vm_flags)))
> + return -EINVAL;
> +
> bo = ttm_bo_vm_lookup(bdev, vma->vm_pgoff, vma_pages(vma));
> if (unlikely(!bo))
> return -EINVAL;
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 77e64e3eac80..c6ebf7f9ddbb 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -686,6 +686,11 @@ static inline bool vma_is_accessible(struct vm_area_struct *vma)
> return vma->vm_flags & VM_ACCESS_FLAGS;
> }
>
> +static inline bool is_cow_mapping(vm_flags_t flags)
> +{
> + return (flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;
> +}
> +
> #ifdef CONFIG_SHMEM
> /*
> * The vma_is_shmem is not inline because it is used only by slow
> diff --git a/mm/internal.h b/mm/internal.h
> index 9902648f2206..1432feec62df 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -296,11 +296,6 @@ static inline unsigned int buddy_order(struct page *page)
> */
> #define buddy_order_unsafe(page) READ_ONCE(page_private(page))
>
> -static inline bool is_cow_mapping(vm_flags_t flags)
> -{
> - return (flags & (VM_SHARED | VM_MAYWRITE)) == VM_MAYWRITE;
> -}
> -
> /*
> * These three helpers classifies VMAs for virtual memory accounting.
> */