Message-ID: <30862242-293b-f42f-d8ce-2c31a52e3697@redhat.com>
Date: Thu, 8 Apr 2021 11:25:49 +0800
From: Jason Wang <jasowang@...hat.com>
To: Xie Yongji <xieyongji@...edance.com>, mst@...hat.com,
stefanha@...hat.com, sgarzare@...hat.com, parav@...dia.com,
hch@...radead.org, christian.brauner@...onical.com,
rdunlap@...radead.org, willy@...radead.org,
viro@...iv.linux.org.uk, axboe@...nel.dk, bcrl@...ck.org,
corbet@....net, mika.penttila@...tfour.com,
dan.carpenter@...cle.com
Cc: virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
kvm@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v6 08/10] vduse: Implement an MMU-based IOMMU driver

On 2021/3/31 4:05 PM, Xie Yongji wrote:
> This implements an MMU-based IOMMU driver to support mapping
> kernel DMA buffers into userspace. The basic idea behind it is
> to treat the MMU (VA->PA) as an IOMMU (IOVA->PA). The driver sets
> up MMU mappings instead of IOMMU mappings for the DMA transfer, so
> that the userspace process is able to use its virtual addresses to
> access the DMA buffers in the kernel.
>
> To avoid security issues, a bounce-buffering mechanism is
> introduced to prevent userspace from accessing the original buffers
> directly.
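
To make the "treat the MMU as an IOMMU" idea a bit more concrete: every
IOVA page in the bounce range is backed by a bounce page that userspace
actually maps, while the kernel remembers the original physical address so
it can copy data in and out around the DMA transfer. Roughly the per-page
state one would expect behind the hunks below (the field names are taken
from the quoted code, the rest is just my sketch, not the patch itself):

/* Sketch only: one entry per IOVA page in the bounce range. */
struct vduse_bounce_map {
        struct page *bounce_page;       /* page the userspace MMU mapping points at */
        u64 orig_phys;                  /* physical address of the real DMA buffer,
                                         * or INVALID_PHYS_ADDR when not mapped */
};
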
>
> Signed-off-by: Xie Yongji <xieyongji@...edance.com>

Acked-by: Jason Wang <jasowang@...hat.com>

With some nits:

> ---
> drivers/vdpa/vdpa_user/iova_domain.c | 521 +++++++++++++++++++++++++++++++++++
> drivers/vdpa/vdpa_user/iova_domain.h | 70 +++++
> 2 files changed, 591 insertions(+)
> create mode 100644 drivers/vdpa/vdpa_user/iova_domain.c
> create mode 100644 drivers/vdpa/vdpa_user/iova_domain.h
[...]
> +static void vduse_domain_bounce(struct vduse_iova_domain *domain,
> +                                dma_addr_t iova, size_t size,
> +                                enum dma_data_direction dir)
> +{
> +        struct vduse_bounce_map *map;
> +        unsigned int offset;
> +        void *addr;
> +        size_t sz;
> +
> +        while (size) {
> +                map = &domain->bounce_maps[iova >> PAGE_SHIFT];
> +                offset = offset_in_page(iova);
> +                sz = min_t(size_t, PAGE_SIZE - offset, size);
> +
> +                if (WARN_ON(!map->bounce_page ||
> +                            map->orig_phys == INVALID_PHYS_ADDR))
> +                        return;
> +
> +                addr = page_address(map->bounce_page) + offset;
> +                do_bounce(map->orig_phys + offset, addr, sz, dir);
> +                size -= sz;
> +                iova += sz;
> +        }
> +}
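
(Not in the quoted hunk, but for readers following along: I assume do_bounce()
simply memcpy()s between the original buffer and the bounce page according to
the DMA direction. Something along these lines, ignoring highmem for the sake
of the sketch -- this is my guess at the helper, not the code in the patch:)

static void do_bounce(phys_addr_t orig, void *addr, size_t size,
                      enum dma_data_direction dir)
{
        /* Sketch: assumes the original buffer lives in lowmem. */
        void *orig_addr = phys_to_virt(orig);

        if (dir == DMA_TO_DEVICE)
                memcpy(addr, orig_addr, size);  /* original -> bounce */
        else
                memcpy(orig_addr, addr, size);  /* bounce -> original */
}
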
> +
> +static struct page *
> +vduse_domain_get_mapping_page(struct vduse_iova_domain *domain, u64 iova)
It's better to rename this to "vduse_domain_get_coherent_page"?
> +{
> +        u64 start = iova & PAGE_MASK;
> +        u64 last = start + PAGE_SIZE - 1;
> +        struct vhost_iotlb_map *map;
> +        struct page *page = NULL;
> +
> +        spin_lock(&domain->iotlb_lock);
> +        map = vhost_iotlb_itree_first(domain->iotlb, start, last);
> +        if (!map)
> +                goto out;
> +
> +        page = pfn_to_page((map->addr + iova - map->start) >> PAGE_SHIFT);
> +        get_page(page);
> +out:
> +        spin_unlock(&domain->iotlb_lock);
> +
> +        return page;
> +}
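
(Again just for context, not a comment on the code: I'd expect this lookup to
be consumed from the userspace mmap() fault handler, roughly like the sketch
below -- the handler name and the vm_private_data layout are my assumptions:)

static vm_fault_t vduse_domain_mmap_fault(struct vm_fault *vmf)
{
        struct vduse_iova_domain *domain = vmf->vma->vm_private_data;
        u64 iova = (u64)vmf->pgoff << PAGE_SHIFT;
        struct page *page;

        /* Translate the faulting IOVA into a refcounted kernel page. */
        page = vduse_domain_get_mapping_page(domain, iova);
        if (!page)
                return VM_FAULT_SIGBUS;

        /* The core MM maps the page into the VMA and consumes the reference. */
        vmf->page = page;
        return 0;
}
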
> +
[...]
> +
> +static dma_addr_t
> +vduse_domain_alloc_iova(struct iova_domain *iovad,
> +                        unsigned long size, unsigned long limit)
> +{
> +        unsigned long shift = iova_shift(iovad);
> +        unsigned long iova_len = iova_align(iovad, size) >> shift;
> +        unsigned long iova_pfn;
> +
> +        if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
> +                iova_len = roundup_pow_of_two(iova_len);
Let's add a comment here like the one in dma-iommu.c (see below)?

(In the future, it looks to me it would be better to move this logic into
alloc_iova_fast().)
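
Something along the lines of what iommu_dma_alloc_iova() in
drivers/iommu/dma-iommu.c has (quoting roughly from memory, the exact
wording may differ):

        /*
         * Freeing non-power-of-two-size allocations back into the IOVA caches
         * will come back to bite us badly, so we have to waste a bit of space
         * rounding up anything cacheable to make sure it fits.
         */
        if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
                iova_len = roundup_pow_of_two(iova_len);
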
Thanks