Message-ID: <CAC_iWjKp_NKofQQTSgA810+bOt84Hgbm3YV=X=JWH9t=DHuzqQ@mail.gmail.com>
Date: Mon, 21 Aug 2023 11:38:32 +0300
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Mina Almasry <almasrymina@...gle.com>, Yunsheng Lin <linyunsheng@...wei.com>,
	davem@...emloft.net, pabeni@...hat.com, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org, Lorenzo Bianconi <lorenzo@...nel.org>,
	Alexander Duyck <alexander.duyck@...il.com>,
	Liang Chen <liangchen.linux@...il.com>,
	Alexander Lobakin <aleksander.lobakin@...el.com>,
	Saeed Mahameed <saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>,
	Eric Dumazet <edumazet@...gle.com>, Jesper Dangaard Brouer <hawk@...nel.org>
Subject: Re: [PATCH net-next v7 1/6] page_pool: frag API support for 32-bit arch with 64-bit DMA

Resending for the mailing list, apologies for the noise.

On Sat, 19 Aug 2023 at 00:51, Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Fri, 18 Aug 2023 09:12:09 +0300 Ilias Apalodimas wrote:
> > > Right, IIUC we don't have enough space to fit dma_addr_t and the
> > > refcount, but if we store the dma addr on a shifted u32 instead
> > > of using dma_addr_t explicitly - the refcount should fit?
> >
> > struct page looks like this:
> >
> > unsigned long dma_addr;
> > union {
> >         unsigned long dma_addr_upper;
> >         atomic_long_t pp_frag_count;
> > };
> >
> > I could be completely misunderstanding the problem.

You aren't!

> Let me show you the diff of what I was thinking more or less.
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 5e74ce4a28cd..58ffa8dc745f 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -126,11 +126,6 @@ struct page {
>  			unsigned long _pp_mapping_pad;
>  			unsigned long dma_addr;
>  			union {
> -				/**
> -				 * dma_addr_upper: might require a 64-bit
> -				 * value on 32-bit architectures.
> -				 */
> -				unsigned long dma_addr_upper;
>  				/**
>  				 * For frag page support, not supported in
>  				 * 32-bit architectures with 64-bit DMA.
> diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
> index 94231533a369..6f87a0fa2178 100644
> --- a/include/net/page_pool/helpers.h
> +++ b/include/net/page_pool/helpers.h
> @@ -212,16 +212,24 @@ static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
>  	dma_addr_t ret = page->dma_addr;
>
>  	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
> -		ret |= (dma_addr_t)page->dma_addr_upper << 16 << 16;
> +		ret <<= PAGE_SHIFT;
>
>  	return ret;
>  }
>
> -static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
> +static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
>  {
> +	bool failed = false;
> +
>  	page->dma_addr = addr;
> -	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
> -		page->dma_addr_upper = upper_32_bits(addr);
> +	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT) {
> +		page->dma_addr >>= PAGE_SHIFT;
> +		/* We assume page alignment to shave off bottom bits,
> +		 * if this "compression" doesn't work we need to drop.
> +		 */
> +		failed = addr != page->dma_addr << PAGE_SHIFT;
> +	}
> +	return failed;
>  }
>
>  static inline bool page_pool_put(struct page_pool *pool)
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 77cb75e63aca..9ea42e242a89 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -211,10 +211,6 @@ static int page_pool_init(struct page_pool *pool,
>  		 */
>  	}
>
> -	if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT &&
> -	    pool->p.flags & PP_FLAG_PAGE_FRAG)
> -		return -EINVAL;
> -
>  #ifdef CONFIG_PAGE_POOL_STATS
>  	pool->recycle_stats = alloc_percpu(struct page_pool_recycle_stats);
>  	if (!pool->recycle_stats)
> @@ -359,12 +355,19 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
>  	if (dma_mapping_error(pool->p.dev, dma))
>  		return false;
>
> -	page_pool_set_dma_addr(page, dma);
> +	if (page_pool_set_dma_addr(page, dma))
> +		goto unmap_failed;
>
>  	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
>  		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
>
>  	return true;
> +
> +unmap_failed:
> +	dma_unmap_page_attrs(pool->p.dev, dma,
> +			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
> +			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
> +	return false;
>  }

That seems reasonable, and it would work for pages larger than 4k as well. But is 16TB enough? I am more familiar with the embedded world than with large servers, which do tend to scale that high.

Regards
/Ilias

>
>  static void page_pool_set_pp_info(struct page_pool *pool,