Open Source and information security mailing list archives
Message-ID: <1b8e2681-ccd6-81e0-b696-8b6c26e31f26@huawei.com>
Date: Mon, 21 Aug 2023 20:18:55 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Jakub Kicinski <kuba@...nel.org>, Ilias Apalodimas <ilias.apalodimas@...aro.org>
CC: Mina Almasry <almasrymina@...gle.com>, <davem@...emloft.net>, <pabeni@...hat.com>,
	<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
	Lorenzo Bianconi <lorenzo@...nel.org>, Alexander Duyck <alexander.duyck@...il.com>,
	Liang Chen <liangchen.linux@...il.com>, Alexander Lobakin <aleksander.lobakin@...el.com>,
	Saeed Mahameed <saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>,
	Eric Dumazet <edumazet@...gle.com>, Jesper Dangaard Brouer <hawk@...nel.org>
Subject: Re: [PATCH net-next v7 1/6] page_pool: frag API support for 32-bit arch with 64-bit DMA

On 2023/8/19 5:51, Jakub Kicinski wrote:
> On Fri, 18 Aug 2023 09:12:09 +0300 Ilias Apalodimas wrote:
>>> Right, IIUC we don't have enough space to fit dma_addr_t and the
>>> refcount, but if we store the dma addr on a shifted u32 instead
>>> of using dma_addr_t explicitly - the refcount should fit?
>>
>> struct page looks like this:
>>
>>	unsigned long dma_addr;
>>	union {
>>		unsigned long dma_addr_upper;
>>		atomic_long_t pp_frag_count;
>>	};
> 
> I could be completely misunderstanding the problem.
> Let me show you the diff of what I was thinking more or less.
> 
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 5e74ce4a28cd..58ffa8dc745f 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -126,11 +126,6 @@ struct page {
>  				unsigned long _pp_mapping_pad;
>  				unsigned long dma_addr;
>  				union {
> -					/**
> -					 * dma_addr_upper: might require a 64-bit
> -					 * value on 32-bit architectures.
> -					 */
> -					unsigned long dma_addr_upper;
>  					/**
>  					 * For frag page support, not supported in
>  					 * 32-bit architectures with 64-bit DMA.
> diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
> index 94231533a369..6f87a0fa2178 100644
> --- a/include/net/page_pool/helpers.h
> +++ b/include/net/page_pool/helpers.h
> @@ -212,16 +212,24 @@ static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
>  	dma_addr_t ret = page->dma_addr;
>  
>  	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
> -		ret |= (dma_addr_t)page->dma_addr_upper << 16 << 16;
> +		ret <<= PAGE_SHIFT;
>  
>  	return ret;
>  }
>  
> -static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
> +static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
>  {
> +	bool failed = false;
> +
>  	page->dma_addr = addr;
> -	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
> -		page->dma_addr_upper = upper_32_bits(addr);
> +	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT) {
> +		page->dma_addr >>= PAGE_SHIFT;
> +		/* We assume page alignment to shave off bottom bits,
> +		 * if this "compression" doesn't work we need to drop.
> +		 */
> +		failed = addr != page->dma_addr << PAGE_SHIFT;
> +	}
> +	return failed;
>  }
>  
>  static inline bool page_pool_put(struct page_pool *pool)
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 77cb75e63aca..9ea42e242a89 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -211,10 +211,6 @@ static int page_pool_init(struct page_pool *pool,
>  		 */
>  	}
>  
> -	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT &&
> -	    pool->p.flags & PP_FLAG_PAGE_FRAG)
> -		return -EINVAL;
> -
>  #ifdef CONFIG_PAGE_POOL_STATS
>  	pool->recycle_stats = alloc_percpu(struct page_pool_recycle_stats);
>  	if (!pool->recycle_stats)
> @@ -359,12 +355,19 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
>  	if (dma_mapping_error(pool->p.dev, dma))
>  		return false;
>  
> -	page_pool_set_dma_addr(page, dma);
> +	if (page_pool_set_dma_addr(page, dma))
> +		goto unmap_failed;

What does the driver do when the above fails?
Does the driver still need to implement a fallback for a 32-bit arch with a DMA address wider than 32 + 12 bits? If yes, it does not seem very helpful from the driver's point of view, as the driver might still need to call the page allocator API directly when the above fails.

>  
>  	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
>  		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
>  
>  	return true;
> +
> +unmap_failed:
> +	dma_unmap_page_attrs(pool->p.dev, dma,
> +			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
> +			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
> +	return false;
>  }
>  
>  static void page_pool_set_pp_info(struct page_pool *pool,
> .
> 
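For readers following along: the trick in the quoted diff is that DMA addresses handed to page_pool are page-aligned, so the low PAGE_SHIFT bits are zero and the address can be stored right-shifted, fitting up to 32 + 12 = 44 address bits in a 32-bit `unsigned long`. Below is a minimal user-space sketch of that compression and its failure check. The `mock_page`/`mock_set_dma_addr` names are hypothetical stand-ins (not the kernel API), and `uint32_t` models `unsigned long` on a 32-bit arch:

```c
#include <stdint.h>

#define MOCK_PAGE_SHIFT 12 /* assumes 4 KiB pages, as on most arches */

typedef uint64_t dma_addr_t; /* 64-bit DMA address on a 32-bit arch */

/* Hypothetical stand-in for struct page: on 32-bit, page->dma_addr is
 * a 32-bit unsigned long, so a full dma_addr_t does not fit as-is. */
struct mock_page {
	uint32_t dma_addr;
};

/* Store the address right-shifted by PAGE_SHIFT. Returns nonzero when
 * the "compression" fails, i.e. the address is not page-aligned or has
 * more than 32 + PAGE_SHIFT significant bits. */
static int mock_set_dma_addr(struct mock_page *page, dma_addr_t addr)
{
	page->dma_addr = (uint32_t)(addr >> MOCK_PAGE_SHIFT);
	return addr != ((dma_addr_t)page->dma_addr << MOCK_PAGE_SHIFT);
}

/* Recover the original address by shifting back up. */
static dma_addr_t mock_get_dma_addr(const struct mock_page *page)
{
	return (dma_addr_t)page->dma_addr << MOCK_PAGE_SHIFT;
}
```

A 44-bit page-aligned address round-trips, while a 45-bit one does not, which is exactly the case where `page_pool_dma_map()` in the diff has to unmap and bail out.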