Message-ID: <20241210194745.7a0a319e@kernel.org>
Date: Tue, 10 Dec 2024 19:47:45 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Mina Almasry <almasrymina@...gle.com>
Cc: netdev@...r.kernel.org, Pavel Begunkov <asml.silence@...il.com>, Kaiyuan
Zhang <kaiyuanz@...gle.com>, Willem de Bruijn <willemb@...gle.com>,
Samiullah Khawaja <skhawaja@...gle.com>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, "David S. Miller" <davem@...emloft.net>, Eric
Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>, Simon
Horman <horms@...nel.org>, Jonathan Corbet <corbet@....net>, Jesper
Dangaard Brouer <hawk@...nel.org>, Ilias Apalodimas
<ilias.apalodimas@...aro.org>
Subject: Re: [PATCH net-next v3 4/5] page_pool: disable sync for cpu for
dmabuf memory provider
On Mon, 9 Dec 2024 17:23:07 +0000 Mina Almasry wrote:
> -static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
> -                                              const struct page *page,
> -                                              u32 offset, u32 dma_sync_size)
> +static inline void
> +page_pool_dma_sync_netmem_for_cpu(const struct page_pool *pool,
> +                                  const netmem_ref netmem, u32 offset,
> +                                  u32 dma_sync_size)
>  {
> +        if (pool->mp_priv)
Let's add a dedicated bit to skip sync. The io-uring support feels
quite close. Let's not force those guys to have to rejig this.
Rough sketch of what I mean right after the end of this hunk.
> +                return;
> +
>          dma_sync_single_range_for_cpu(pool->p.dev,
> -                                      page_pool_get_dma_addr(page),
> +                                      page_pool_get_dma_addr_netmem(netmem),
>                                        offset + pool->p.offset, dma_sync_size,
>                                        page_pool_get_dma_dir(pool));
>  }
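
Something along these lines, just to make it concrete (untested sketch;
the dma_sync_for_cpu name and exactly where it gets set / cleared are
made up here, the provider would flip it when it takes over the syncing):

        /* in struct page_pool, next to the existing sync-for-device flag;
         * name is illustrative, defaults to true, memory providers that
         * handle their own CPU syncs clear it at init
         */
        bool dma_sync_for_cpu:1;

static inline void
page_pool_dma_sync_netmem_for_cpu(const struct page_pool *pool,
                                  const netmem_ref netmem, u32 offset,
                                  u32 dma_sync_size)
{
        /* skip the sync when the provider owns it */
        if (!pool->dma_sync_for_cpu)
                return;

        dma_sync_single_range_for_cpu(pool->p.dev,
                                      page_pool_get_dma_addr_netmem(netmem),
                                      offset + pool->p.offset, dma_sync_size,
                                      page_pool_get_dma_dir(pool));
}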
>
> +static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
> +                                              struct page *page, u32 offset,
> +                                              u32 dma_sync_size)
> +{
> +        page_pool_dma_sync_netmem_for_cpu(pool, page_to_netmem(page), offset,
> +                                          dma_sync_size);
I have the feeling Olek won't thank us for this extra condition and
bit clearing. If a driver calls page_pool_dma_sync_for_cpu() we don't
have to check the new bit / mp_priv. Let's copy & paste the
dma_sync_single_range_for_cpu() call directly here.
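IOW keep the page variant self-contained, roughly like below (untested,
same dma_sync_single_range_for_cpu() call as in the hunk above, just
open coded instead of going via the netmem helper):

static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
                                              struct page *page, u32 offset,
                                              u32 dma_sync_size)
{
        /* plain pages can't come from a memory provider, so no need to
         * check the skip-sync bit / mp_priv here
         */
        dma_sync_single_range_for_cpu(pool->p.dev,
                                      page_pool_get_dma_addr(page),
                                      offset + pool->p.offset, dma_sync_size,
                                      page_pool_get_dma_dir(pool));
}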