Open Source and information security mailing list archives
Date: Wed, 20 Nov 2019 20:00:38 +0200
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org,
	davem@...emloft.net, lorenzo.bianconi@...hat.com, mcroce@...hat.com,
	jonathan.lemon@...il.com
Subject: Re: [PATCH v5 net-next 2/3] net: page_pool: add the possibility to sync DMA memory for device

> [...]
> > @@ -281,8 +309,8 @@ static bool __page_pool_recycle_direct(struct page *page,
> >  	return true;
> >  }
> >  
> > -void __page_pool_put_page(struct page_pool *pool,
> > -			  struct page *page, bool allow_direct)
> > +void __page_pool_put_page(struct page_pool *pool, struct page *page,
> > +			  unsigned int dma_sync_size, bool allow_direct)
> >  {
> >  	/* This allocator is optimized for the XDP mode that uses
> >  	 * one-frame-per-page, but have fallbacks that act like the
> > @@ -293,6 +321,10 @@ void __page_pool_put_page(struct page_pool *pool,
> >  	if (likely(page_ref_count(page) == 1)) {
> >  		/* Read barrier done in page_ref_count / READ_ONCE */
> >  
> > +		if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> > +			page_pool_dma_sync_for_device(pool, page,
> > +						      dma_sync_size);
> > +
> >  		if (allow_direct && in_serving_softirq())
> >  			if (__page_pool_recycle_direct(page, pool))
> >  				return;

> I am slightly concerned this touches the fast-path code, but at least on
> Intel I don't think it is measurable. And for the ARM64 board it
> was a huge win... thus I'll accept this.

Acked-by: Ilias Apalodimas <ilias.apalodimas@...aro.org>