Message-ID: <20191113092907.569f6b8e@carbon>
Date: Wed, 13 Nov 2019 09:29:07 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Lorenzo Bianconi <lorenzo@...nel.org>
Cc: netdev@...r.kernel.org, lorenzo.bianconi@...hat.com,
davem@...emloft.net, thomas.petazzoni@...tlin.com,
ilias.apalodimas@...aro.org, matteo.croce@...hat.com,
brouer@...hat.com
Subject: Re: [PATCH net-next 2/3] net: page_pool: add the possibility to
sync DMA memory for non-coherent devices
On Sun, 10 Nov 2019 14:09:09 +0200
Lorenzo Bianconi <lorenzo@...nel.org> wrote:
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 2cbcdbdec254..defbfd90ab46 100644
[...]
> @@ -150,8 +153,8 @@ static inline void page_pool_destroy(struct page_pool *pool)
> }
>
> /* Never call this directly, use helpers below */
> -void __page_pool_put_page(struct page_pool *pool,
> - struct page *page, bool allow_direct);
> +void __page_pool_put_page(struct page_pool *pool, struct page *page,
> + unsigned int dma_sync_size, bool allow_direct);
>
> static inline void page_pool_put_page(struct page_pool *pool,
> struct page *page, bool allow_direct)
> @@ -160,14 +163,14 @@ static inline void page_pool_put_page(struct page_pool *pool,
> * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
> */
> #ifdef CONFIG_PAGE_POOL
> - __page_pool_put_page(pool, page, allow_direct);
> + __page_pool_put_page(pool, page, 0, allow_direct);
> #endif
> }
> /* Very limited use-cases allow recycle direct */
> static inline void page_pool_recycle_direct(struct page_pool *pool,
> struct page *page)
> {
> - __page_pool_put_page(pool, page, true);
> + __page_pool_put_page(pool, page, 0, true);
> }

We need to use a different "default" value than zero for 'dma_sync_size'
in the above calls.  I suggest either 0xFFFFFFFF or -1 (which as
unsigned is 0xFFFFFFFF).

The point is that when the caller doesn't know the length (that the CPU
has had access to), page_pool needs to sync with pool->p.max_len.

If a larger default value is chosen here, your code below takes care of
it via min(dma_sync_size, pool->p.max_len).
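
For illustration, a quick sketch (untested, just to show the idea) of
what the inline helpers would look like with -1 as the default:

/* Sketch only: -1 as unsigned int is 0xFFFFFFFF, so the
 * min(dma_sync_size, pool->p.max_len) in the sync path falls back
 * to syncing pool->p.max_len when the caller doesn't know the length.
 */
static inline void page_pool_put_page(struct page_pool *pool,
				      struct page *page, bool allow_direct)
{
#ifdef CONFIG_PAGE_POOL
	__page_pool_put_page(pool, page, -1, allow_direct);
#endif
}

static inline void page_pool_recycle_direct(struct page_pool *pool,
					    struct page *page)
{
	__page_pool_put_page(pool, page, -1, true);
}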
> /* API user MUST have disconnected alloc-side (not allowed to call
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 5bc65587f1c4..af9514c2d15b 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -112,6 +112,17 @@ static struct page *__page_pool_get_cached(struct page_pool *pool)
> return page;
> }
>
> +/* Used for non-coherent devices */
> +static void page_pool_dma_sync_for_device(struct page_pool *pool,
> + struct page *page,
> + unsigned int dma_sync_size)
> +{
> + dma_sync_size = min(dma_sync_size, pool->p.max_len);
> + dma_sync_single_range_for_device(pool->p.dev, page->dma_addr,
> + pool->p.offset, dma_sync_size,
> + pool->p.dma_dir);
> +}
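
A small standalone example (not kernel code, just to illustrate the
unsigned arithmetic) of why -1 works as the "sync everything" default:

#include <stdio.h>

int main(void)
{
	unsigned int dma_sync_size = -1;  /* converts to 0xFFFFFFFF */
	unsigned int max_len = 1536;      /* hypothetical pool->p.max_len */

	/* the min() picks max_len, since 0xFFFFFFFF is larger */
	unsigned int len = dma_sync_size < max_len ? dma_sync_size : max_len;

	printf("sync length: %u\n", len); /* prints 1536 */
	return 0;
}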
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer