Message-ID: <a68cedfb-cd9e-4b93-a99e-ae30b9c837eb@intel.com>
Date: Fri, 25 Oct 2024 17:02:44 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Petr Machata <petrm@...dia.com>, Amit Cohen <amcohen@...dia.com>
CC: <netdev@...r.kernel.org>, Andrew Lunn <andrew+netdev@...n.ch>,
 "David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
 "Jakub Kicinski" <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
 Simon Horman <horms@...nel.org>, Danielle Ratson <danieller@...dia.com>,
 Ido Schimmel <idosch@...dia.com>, <mlxsw@...dia.com>, Jiri Pirko <jiri@...nulli.us>
Subject: Re: [PATCH net 3/5] mlxsw: pci: Sync Rx buffers for device

From: Petr Machata <petrm@...dia.com>
Date: Fri, 25 Oct 2024 16:26:27 +0200

> From: Amit Cohen <amcohen@...dia.com>
> 
> Non-coherent architectures, like ARM, may require invalidating caches
> before the device can use the DMA-mapped memory, which means that
> before posting pages to the device, drivers should sync the memory
> for device.
> 
> Sync for the device can be configured as the page pool's
> responsibility. Set the relevant flag and define max_len for the sync.
> 
> Cc: Jiri Pirko <jiri@...nulli.us>
> Fixes: b5b60bb491b2 ("mlxsw: pci: Use page pool for Rx buffers allocation")
> Signed-off-by: Amit Cohen <amcohen@...dia.com>
> Reviewed-by: Ido Schimmel <idosch@...dia.com>
> Signed-off-by: Petr Machata <petrm@...dia.com>
> ---
>  drivers/net/ethernet/mellanox/mlxsw/pci.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
> index 2320a5f323b4..d6f37456fb31 100644
> --- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
> +++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
> @@ -996,12 +996,13 @@ static int mlxsw_pci_cq_page_pool_init(struct mlxsw_pci_queue *q,
>  	if (cq_type != MLXSW_PCI_CQ_RDQ)
>  		return 0;
>  
> -	pp_params.flags = PP_FLAG_DMA_MAP;
> +	pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
>  	pp_params.pool_size = MLXSW_PCI_WQE_COUNT * mlxsw_pci->num_sg_entries;
>  	pp_params.nid = dev_to_node(&mlxsw_pci->pdev->dev);
>  	pp_params.dev = &mlxsw_pci->pdev->dev;
>  	pp_params.napi = &q->u.cq.napi;
>  	pp_params.dma_dir = DMA_FROM_DEVICE;
> +	pp_params.max_len = PAGE_SIZE;

max_len is the maximum HW-writable area of a buffer; headroom and
tailroom must be excluded. In your case:

	pp_params.max_len = PAGE_SIZE - MLXSW_PCI_RX_BUF_SW_OVERHEAD;
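
For context, if I'm reading pci.c right, that overhead macro already
accounts for both ends of the buffer -- the headroom reserved in front
of the frame and the skb_shared_info tailroom at the end -- so the
remainder is exactly the HW-writable area. From memory (please
double-check against your tree):

	/* SW-only areas of an Rx buffer, not writable by HW. */
	#define MLXSW_PCI_SKB_HEADROOM	(NET_SKB_PAD + NET_IP_ALIGN)
	#define MLXSW_PCI_RX_BUF_SW_OVERHEAD		\
			(MLXSW_PCI_SKB_HEADROOM +	\
			 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))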

>  
>  	page_pool = page_pool_create(&pp_params);
>  	if (IS_ERR(page_pool))

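To spell out why max_len matters here: with PP_FLAG_DMA_SYNC_DEV set,
the pool only syncs [offset, offset + max_len) before handing a page
back to the device. Roughly this, paraphrased from
net/core/page_pool.c (names may differ slightly in your tree):

	static void page_pool_dma_sync_for_device(const struct page_pool *pool,
						  struct page *page,
						  u32 dma_sync_size)
	{
		dma_addr_t dma_addr = page_pool_get_dma_addr(page);

		/* Never sync past the HW-writable area the driver declared. */
		dma_sync_size = min(dma_sync_size, pool->p.max_len);
		dma_sync_single_range_for_device(pool->p.dev, dma_addr,
						 pool->p.offset, dma_sync_size,
						 pool->p.dma_dir);
	}

Syncing the SW-only headroom and tailroom as well is just wasted work
on the non-coherent systems this patch targets.
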
Thanks,
Olek
