Message-ID: <db813035-fb38-4fc3-b91e-d1416959db13@gmail.com>
Date: Sat, 15 Mar 2025 17:46:21 +0800
From: Yunsheng Lin <yunshenglin0825@...il.com>
To: Toke Høiland-Jørgensen <toke@...hat.com>,
 "David S. Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
 Jesper Dangaard Brouer <hawk@...nel.org>, Saeed Mahameed
 <saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>,
 Tariq Toukan <tariqt@...dia.com>, Andrew Lunn <andrew+netdev@...n.ch>,
 Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
 Ilias Apalodimas <ilias.apalodimas@...aro.org>,
 Simon Horman <horms@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>,
 Mina Almasry <almasrymina@...gle.com>, Yonglong Liu
 <liuyonglong@...wei.com>, Yunsheng Lin <linyunsheng@...wei.com>,
 Pavel Begunkov <asml.silence@...il.com>, Matthew Wilcox
 <willy@...radead.org>, Robin Murphy <robin.murphy@....com>,
 IOMMU <iommu@...ts.linux.dev>
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org, linux-rdma@...r.kernel.org,
 linux-mm@...ck.org, Qiuling Ren <qren@...hat.com>,
 Yuying Ma <yuma@...hat.com>
Subject: Re: [PATCH net-next 3/3] page_pool: Track DMA-mapped pages and unmap
 them when destroying the pool

On 3/14/2025 6:10 PM, Toke Høiland-Jørgensen wrote:

...

> 
> To avoid having to walk the entire xarray on unmap to find the page
> reference, we stash the ID assigned by xa_alloc() into the page
> structure itself, using the upper bits of the pp_magic field. This
> requires a couple of defines to avoid conflicting with the
> POINTER_POISON_DELTA define, but this is all evaluated at compile-time,
> so does not affect run-time performance. The bitmap calculations in this
> patch gives the following number of bits for different architectures:
> 
> - 24 bits on 32-bit architectures
> - 21 bits on PPC64 (because of the definition of ILLEGAL_POINTER_VALUE)
> - 32 bits on other 64-bit architectures

From commit c07aea3ef4d4 ("mm: add a signature in struct page"):
"The page->signature field is aliased to page->lru.next and
page->compound_head, but it can't be set by mistake because the
signature value is a bad pointer, and can't trigger a false positive
in PageTail() because the last bit is 0."

And commit 8a5e5e02fc83 ("include/linux/poison.h: fix LIST_POISON{1,2} 
offset"):
"Poison pointer values should be small enough to find a room in
non-mmap'able/hardly-mmap'able space."

So the question seems to be:
1. Does stashing the ID cause page->pp_magic to land in the mmap'able/
    easier-mmap'able space? If yes, how can we make sure this will not
    cause any security problem?
2. Does masking page->pp_magic cause a valid pointer in page->lru.next
    or page->compound_head to be treated as a valid PP_SIGNATURE, which
    might cause page_pool to recycle a page not allocated via page_pool?
    (See the sketch below.)

> 
> Since all the tracking is performed on DMA map/unmap, no additional code
> is needed in the fast path, meaning the performance overhead of this
> tracking is negligible. A micro-benchmark shows that the total overhead
> of using xarray for this purpose is about 400 ns (39 cycles(tsc) 395.218
> ns; sum for both map and unmap[1]). Since this cost is only paid on DMA
> map and unmap, it seems like an acceptable cost to fix the late unmap

For most use cases where PP_FLAG_DMA_MAP is set and the IOMMU is off, the
DMA map and unmap operations are almost negligible as mentioned below, so
the cost amounts to roughly a 200% degradation of those operations, which
doesn't seem like an acceptable cost.

> issue. Further optimisation can narrow the cases where this cost is
> paid (for instance by eliding the tracking when DMA map/unmap is a
> no-op).

The above was discussed in [1] and brought up again in [2], so cc'ing
Robin for clarification on whether he still views the above as a misuse
of the DMA API.

1. https://lore.kernel.org/all/9a4d1357-f30d-420d-a575-7ae305ca6dda@huawei.com/

2. https://lore.kernel.org/all/caf31b5e-0e8f-4844-b7ba-ef59ed13b74e@arm.com/

> 
> The extra memory needed to track the pages is neatly encapsulated inside
> xarray, which uses the 'struct xa_node' structure to track items. This
> structure is 576 bytes long, with slots for 64 items, meaning that a
> full node occurs only 9 bytes of overhead per slot it tracks (in
> practice, it probably won't be this efficient, but in any case it should

Is there any debug infrastructure to know if it is not this efficient?
In the worst case there may be a 576-byte overhead for a single tracked
page.

> be an acceptable overhead).
>
> [0] https://lore.kernel.org/all/CAHS8izPg7B5DwKfSuzz-iOop_YRbk3Sd6Y4rX7KBG9DcVJcyWg@mail.gmail.com/
> [1] https://lore.kernel.org/r/ae07144c-9295-4c9d-a400-153bb689fe9e@huawei.com
> 
> Reported-by: Yonglong Liu <liuyonglong@...wei.com>
> Closes: https://lore.kernel.org/r/8743264a-9700-4227-a556-5f931c720211@huawei.com
> Fixes: ff7d6b27f894 ("page_pool: refurbish version of page_pool code")
> Suggested-by: Mina Almasry <almasrymina@...gle.com>
> Reviewed-by: Mina Almasry <almasrymina@...gle.com>
> Reviewed-by: Jesper Dangaard Brouer <hawk@...nel.org>
> Tested-by: Jesper Dangaard Brouer <hawk@...nel.org>
> Tested-by: Qiuling Ren <qren@...hat.com>
> Tested-by: Yuying Ma <yuma@...hat.com>
> Signed-off-by: Toke Høiland-Jørgensen <toke@...hat.com>

...

> @@ -1084,8 +1112,32 @@ static void page_pool_empty_alloc_cache_once(struct page_pool *pool)
>   
>   static void page_pool_scrub(struct page_pool *pool)
>   {
> +	unsigned long id;
> +	void *ptr;
> +
>   	page_pool_empty_alloc_cache_once(pool);
> -	pool->destroy_cnt++;
> +	if (!pool->destroy_cnt++ && pool->dma_map) {
> +		if (pool->dma_sync) {
> +			/* paired with READ_ONCE in
> +			 * page_pool_dma_sync_for_device() and
> +			 * __page_pool_dma_sync_for_cpu()
> +			 */
> +			WRITE_ONCE(pool->dma_sync, false);
> +
> +			/* Make sure all concurrent returns that may see the old
> +			 * value of dma_sync (and thus perform a sync) have
> +			 * finished before doing the unmapping below. Skip the
> +			 * wait if the device doesn't actually need syncing, or
> +			 * if there are no outstanding mapped pages.
> +			 */
> +			if (dma_dev_need_sync(pool->p.dev) &&
> +			    !xa_empty(&pool->dma_mapped))
> +				synchronize_net();

I guess the above synchronize_net() assumes that the above DMA sync
API is always called in softirq context, as it seems there is no RCU
read lock added in this patch to pair with it.

Can't page_pool_put_page() be called in non-softirq context when
allow_direct is false and in_softirq() returns false? See the sketch
below.
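
For reference, a rough sketch of the read-side pairing I would expect
to be needed if the return path can run outside softirq context; the
snippet below is an assumption about what pairing with
synchronize_net() would require, not code from this patch:

	/* synchronize_net() waits for RCU read-side critical sections
	 * (a softirq/BH section counts as one after the RCU flavor
	 * consolidation), so a return path running in plain process
	 * context would need an explicit read-side marker around the
	 * dma_sync check:
	 */
	rcu_read_lock();
	if (READ_ONCE(pool->dma_sync))
		page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
	rcu_read_unlock();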

> +		}
> +
> +		xa_for_each(&pool->dma_mapped, id, ptr)
> +			__page_pool_release_page_dma(pool, page_to_netmem(ptr));
> +	}
>   
>   	/* No more consumers should exist, but producers could still
>   	 * be in-flight.
> 

