Message-ID: <6e33b687-8862-d208-a707-77a95c61525e@gmail.com>
Date: Sat, 8 Oct 2022 14:08:25 +0300
From: Xenia Ragiadakou <burzalodowa@...il.com>
To: Oleksandr Tyshchenko <olekstysh@...il.com>,
xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>,
Stefano Stabellini <sstabellini@...nel.org>,
Juergen Gross <jgross@...e.com>
Subject: Re: [PATCH] xen/virtio: Handle cases when page offset > PAGE_SIZE
properly
On 10/7/22 16:27, Oleksandr Tyshchenko wrote:
Hi Oleksandr,
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>
>
> The offset in the page passed to xen_grant_dma_map_page()
> can be > PAGE_SIZE even if the guest uses the same page granularity
> as Xen (4KB).
>
> Before this patch, in such a case we ended up providing
> grants for the whole region in xen_grant_dma_map_page(), which
> was really unnecessary. What is more, we ended up not releasing all
> grants which represented that region in xen_grant_dma_unmap_page().
>
> This patch updates the code to be able to deal with such cases.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>
> ---
> Cc: Juergen Gross <jgross@...e.com>
> Cc: Xenia Ragiadakou <burzalodowa@...il.com>
>
> Depends on:
> https://lore.kernel.org/xen-devel/20221005174823.1800761-1-olekstysh@gmail.com/
>
> Should go in only after that series.
> ---
> drivers/xen/grant-dma-ops.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
> index c66f56d24013..1385f0e686fe 100644
> --- a/drivers/xen/grant-dma-ops.c
> +++ b/drivers/xen/grant-dma-ops.c
> @@ -168,7 +168,9 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
> unsigned long attrs)
> {
> struct xen_grant_dma_data *data;
> - unsigned int i, n_pages = PFN_UP(offset + size);
> + unsigned long dma_offset = offset_in_page(offset),
> + gfn_offset = PFN_DOWN(offset);
> + unsigned int i, n_pages = PFN_UP(dma_offset + size);
IIUC, the above with a later patch will become:
dma_offset = xen_offset_in_page(offset)
gfn_offset = XEN_PFN_DOWN(offset)
n_pages = XEN_PFN_UP(dma_offset + size)
> grant_ref_t grant;
> dma_addr_t dma_handle;
>
> @@ -187,10 +189,10 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
>
> for (i = 0; i < n_pages; i++) {
> gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
> - xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
> + xen_page_to_gfn(page) + i + gfn_offset, dir == DMA_TO_DEVICE);
Here, why isn't the pfn calculated before being passed to pfn_to_gfn()?
I mean something like pfn_to_gfn(page_to_xen_pfn(page) + gfn_offset + i)
> }
>
> - dma_handle = grant_to_dma(grant) + offset;
> + dma_handle = grant_to_dma(grant) + dma_offset;
>
> return dma_handle;
> }
--
Xenia