Message-ID:
 <BL1PR12MB59225E9CAE5C28E915187515CB492@BL1PR12MB5922.namprd12.prod.outlook.com>
Date: Sun, 27 Oct 2024 06:51:00 +0000
From: Amit Cohen <amcohen@...dia.com>
To: Alexander Lobakin <aleksander.lobakin@...el.com>, Petr Machata
	<petrm@...dia.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>, Andrew Lunn
	<andrew+netdev@...n.ch>, "David S. Miller" <davem@...emloft.net>, Eric
 Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni
	<pabeni@...hat.com>, Simon Horman <horms@...nel.org>, Danielle Ratson
	<danieller@...dia.com>, Ido Schimmel <idosch@...dia.com>, mlxsw
	<mlxsw@...dia.com>, Jiri Pirko <jiri@...nulli.us>
Subject: RE: [PATCH net 3/5] mlxsw: pci: Sync Rx buffers for device



> -----Original Message-----
> From: Alexander Lobakin <aleksander.lobakin@...el.com>
> Sent: Friday, 25 October 2024 18:03
> To: Petr Machata <petrm@...dia.com>; Amit Cohen <amcohen@...dia.com>
> Cc: netdev@...r.kernel.org; Andrew Lunn <andrew+netdev@...n.ch>; David S. Miller <davem@...emloft.net>; Eric Dumazet
> <edumazet@...gle.com>; Jakub Kicinski <kuba@...nel.org>; Paolo Abeni <pabeni@...hat.com>; Simon Horman <horms@...nel.org>;
> Danielle Ratson <danieller@...dia.com>; Ido Schimmel <idosch@...dia.com>; mlxsw <mlxsw@...dia.com>; Jiri Pirko <jiri@...nulli.us>
> Subject: Re: [PATCH net 3/5] mlxsw: pci: Sync Rx buffers for device
> 
> From: Petr Machata <petrm@...dia.com>
> Date: Fri, 25 Oct 2024 16:26:27 +0200
> 
> > From: Amit Cohen <amcohen@...dia.com>
> >
> > Non-coherent architectures, like ARM, may require invalidating caches
> > before the device can use the DMA mapped memory, which means that before
> > posting pages to device, drivers should sync the memory for device.
> >
> > Sync for device can be configured as page pool responsibility. Set the
> > relevant flag and define max_len for sync.
> >
> > Cc: Jiri Pirko <jiri@...nulli.us>
> > Fixes: b5b60bb491b2 ("mlxsw: pci: Use page pool for Rx buffers allocation")
> > Signed-off-by: Amit Cohen <amcohen@...dia.com>
> > Reviewed-by: Ido Schimmel <idosch@...dia.com>
> > Signed-off-by: Petr Machata <petrm@...dia.com>
> > ---
> >  drivers/net/ethernet/mellanox/mlxsw/pci.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
> > index 2320a5f323b4..d6f37456fb31 100644
> > --- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
> > +++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
> > @@ -996,12 +996,13 @@ static int mlxsw_pci_cq_page_pool_init(struct mlxsw_pci_queue *q,
> >  	if (cq_type != MLXSW_PCI_CQ_RDQ)
> >  		return 0;
> >
> > -	pp_params.flags = PP_FLAG_DMA_MAP;
> > +	pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
> >  	pp_params.pool_size = MLXSW_PCI_WQE_COUNT * mlxsw_pci->num_sg_entries;
> >  	pp_params.nid = dev_to_node(&mlxsw_pci->pdev->dev);
> >  	pp_params.dev = &mlxsw_pci->pdev->dev;
> >  	pp_params.napi = &q->u.cq.napi;
> >  	pp_params.dma_dir = DMA_FROM_DEVICE;
> > +	pp_params.max_len = PAGE_SIZE;
> 
> max_len is the maximum HW-writable area of a buffer. Headroom and
> tailroom must be excluded. In your case
> 
> 	pp_params.max_len = PAGE_SIZE - MLXSW_PCI_RX_BUF_SW_OVERHEAD;
> 

The mlxsw driver uses fragmented buffers, and the page pool allocates the buffers for all scatter/gather entries.
For each packet, the HW-writable area of the *first* entry's buffer is 'PAGE_SIZE - MLXSW_PCI_RX_BUF_SW_OVERHEAD', but for the other entries we map the full PAGE_SIZE to HW.
That's why we configure the page pool to sync PAGE_SIZE and use offset=0.
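
To illustrate the layout described above, here is a minimal sketch of the per-entry HW-writable length; the helper name and its signature are hypothetical, only PAGE_SIZE and MLXSW_PCI_RX_BUF_SW_OVERHEAD come from the driver:

	/*
	 * Illustrative only -- not a helper that exists in mlxsw. Only the
	 * first scatter/gather entry reserves headroom/tailroom for SW; every
	 * other entry exposes the whole page to the device.
	 */
	static size_t mlxsw_pci_rx_hw_len_sketch(int sg_entry_index)
	{
		if (sg_entry_index == 0)
			return PAGE_SIZE - MLXSW_PCI_RX_BUF_SW_OVERHEAD;
		return PAGE_SIZE;
	}

Since a recycled page may back any of these entries and the page pool syncs at most max_len bytes starting at offset for every page it hands out, max_len has to cover the worst case, which here is the full PAGE_SIZE with offset=0.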

> >
> >  	page_pool = page_pool_create(&pp_params);
> >  	if (IS_ERR(page_pool))
> 
> Thanks,
> Olek
