Date: Mon, 28 Aug 2023 07:50:33 -0700
From: Alexander Duyck <alexander.duyck@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Yunsheng Lin <linyunsheng@...wei.com>, Ilias Apalodimas <ilias.apalodimas@...aro.org>, 
	Mina Almasry <almasrymina@...gle.com>, davem@...emloft.net, pabeni@...hat.com, 
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org, 
	Lorenzo Bianconi <lorenzo@...nel.org>, Liang Chen <liangchen.linux@...il.com>, 
	Alexander Lobakin <aleksander.lobakin@...el.com>, Saeed Mahameed <saeedm@...dia.com>, 
	Leon Romanovsky <leon@...nel.org>, Eric Dumazet <edumazet@...gle.com>, 
	Jesper Dangaard Brouer <hawk@...nel.org>
Subject: Re: [PATCH net-next v7 1/6] page_pool: frag API support for 32-bit
 arch with 64-bit DMA

On Fri, Aug 25, 2023 at 5:08 PM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Fri, 25 Aug 2023 17:40:43 +0800 Yunsheng Lin wrote:
> > > One additional thing we could consider would be to simply look at
> > > having page_pool enforce a DMA mask for the device to address any
> > > cases where we might not be able to fit the address, such as the
> > > unlikely event that somebody is running a 32b system with over 16
> > > terabytes of RAM. With that the DMA subsystem would handle it for
> > > us and we wouldn't have to worry so much about it.
> >
> > It seems there is an API to acquire the DMA mask used by the device:
> > https://elixir.free-electrons.com/linux/v6.4-rc6/source/include/linux/dma-mapping.h#L434
> >
> > Is it possible to use that to check whether the DMA mask used by the
> > device is within the 32 + PAGE_SHIFT limit? If yes, we use Jakub's
> > proposal to reduce the DMA address bits; if no, we fail the page_pool
> > creation?
>
> IMO you're making this unnecessarily complicated. We can set the masks
> in page pool core or just handle the allocation failure like my patch
> does and worry about the very unlikely case when someone reports actual
> problems.

Actually we could keep it pretty simple. We just have to create a
#define using DMA_BIT_MASK for the largest DMA address the page pool
can store. We could name it something like PP_DMA_BIT_MASK. The
drivers would then just pass that as their mask when they call
dma_set_mask_and_coherent. In that case the DMA API would switch to
bounce buffers automatically in cases where the page DMA address
would otherwise be out of bounds.

The other tweak would be to check dma_get_required_mask and add a
warning and/or fail page_pool creation on systems where the page pool
could not handle the result once it is ANDed with the device's DMA
mask.
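
Roughly like this, as an extra check at pool creation time (again just
a sketch with a made-up helper name; in a real version this would sit
in page_pool_init() and take the device from the pool params):

#include <linux/device.h>
#include <linux/dma-mapping.h>

static int page_pool_dma_check(struct device *dev)
{
	u64 pp_mask = DMA_BIT_MASK(32 + PAGE_SHIFT);

	/* Addresses the platform can hand out are bounded by the device
	 * mask, so only complain if even that intersection has bits the
	 * page pool cannot store.
	 */
	if ((dma_get_required_mask(dev) & dma_get_mask(dev)) & ~pp_mask) {
		dev_warn(dev, "DMA addresses may exceed what page_pool can store\n");
		return -EINVAL;
	}

	return 0;
}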

With those two changes the setup should be rock solid against any risk
of the DMA address being out of bounds, with minimal performance
impact, as we would have verified all possibilities before we even get
into the hot path.
