Message-ID:
 <BL1PR12MB592279200E7D9D923F989E31CB492@BL1PR12MB5922.namprd12.prod.outlook.com>
Date: Sun, 27 Oct 2024 07:29:48 +0000
From: Amit Cohen <amcohen@...dia.com>
To: Alexander Lobakin <aleksander.lobakin@...el.com>, Petr Machata
	<petrm@...dia.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>, Andrew Lunn
	<andrew+netdev@...n.ch>, "David S. Miller" <davem@...emloft.net>, Eric
 Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni
	<pabeni@...hat.com>, Simon Horman <horms@...nel.org>, Danielle Ratson
	<danieller@...dia.com>, Ido Schimmel <idosch@...dia.com>, mlxsw
	<mlxsw@...dia.com>, Jiri Pirko <jiri@...nulli.us>
Subject: RE: [PATCH net 2/5] mlxsw: pci: Sync Rx buffers for CPU



> -----Original Message-----
> From: Alexander Lobakin <aleksander.lobakin@...el.com>
> Sent: Friday, 25 October 2024 18:00
> To: Petr Machata <petrm@...dia.com>; Amit Cohen <amcohen@...dia.com>
> Cc: netdev@...r.kernel.org; Andrew Lunn <andrew+netdev@...n.ch>; David S. Miller <davem@...emloft.net>; Eric Dumazet
> <edumazet@...gle.com>; Jakub Kicinski <kuba@...nel.org>; Paolo Abeni <pabeni@...hat.com>; Simon Horman <horms@...nel.org>;
> Danielle Ratson <danieller@...dia.com>; Ido Schimmel <idosch@...dia.com>; mlxsw <mlxsw@...dia.com>; Jiri Pirko <jiri@...nulli.us>
> Subject: Re: [PATCH net 2/5] mlxsw: pci: Sync Rx buffers for CPU
> 
> From: Petr Machata <petrm@...dia.com>
> Date: Fri, 25 Oct 2024 16:26:26 +0200
> 
> > From: Amit Cohen <amcohen@...dia.com>
> >
> > When an Rx packet is received, drivers should sync the pages for the CPU
> > to ensure the CPU reads the data written by the device and not stale
> > data from its cache.
> 
> [...]
> 
> > -static struct sk_buff *mlxsw_pci_rdq_build_skb(struct page *pages[],
> > +static struct sk_buff *mlxsw_pci_rdq_build_skb(struct mlxsw_pci_queue *q,
> > +					       struct page *pages[],
> >  					       u16 byte_count)
> >  {
> > +	struct mlxsw_pci_queue *cq = q->u.rdq.cq;
> >  	unsigned int linear_data_size;
> > +	struct page_pool *page_pool;
> >  	struct sk_buff *skb;
> >  	int page_index = 0;
> >  	bool linear_only;
> >  	void *data;
> >
> > +	linear_only = byte_count + MLXSW_PCI_RX_BUF_SW_OVERHEAD <= PAGE_SIZE;
> > +	linear_data_size = linear_only ? byte_count :
> > +					 PAGE_SIZE -
> > +					 MLXSW_PCI_RX_BUF_SW_OVERHEAD;
> 
> Maybe reformat the line while at it?
> 
> 	linear_data_size = linear_only ? byte_count :
> 			   PAGE_SIZE - MLXSW_PCI_RX_BUF_SW_OVERHEAD;
> 
> > +
> > +	page_pool = cq->u.cq.page_pool;
> > +	page_pool_dma_sync_for_cpu(page_pool, pages[page_index],
> > +				   MLXSW_PCI_SKB_HEADROOM, linear_data_size);
> 
> page_pool_dma_sync_for_cpu() already skips the headroom:
> 
> 	dma_sync_single_range_for_cpu(pool->p.dev,
> 				      offset + pool->p.offset, ...
> 
> Since your pool->p.offset is MLXSW_PCI_SKB_HEADROOM, I believe you need
> to pass 0 here.

Our pool->p.offset is zero.
We use the page pool to allocate buffers for scatter/gather entries.
Only the first entry reserves headroom for software usage, so we pass the headroom to page_pool_dma_sync_for_cpu() only for the first buffer of the packet.
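To illustrate (rough sketch only, not the exact patch hunk; num_sg_entries and data_size_of_entry() are made-up names for the example): since pool->p.offset is zero, the offset we pass is the only offset applied inside the helper, so the per-entry syncs end up looking roughly like this:

	/* Sketch: sync each scatter/gather entry for CPU. Only the first
	 * page reserves MLXSW_PCI_SKB_HEADROOM for software use, so only
	 * its sync starts past the headroom; pool->p.offset is 0, so the
	 * helper adds no extra offset on top of what we pass here.
	 */
	for (i = 0; i < num_sg_entries; i++) {
		u32 offset = i == 0 ? MLXSW_PCI_SKB_HEADROOM : 0;

		page_pool_dma_sync_for_cpu(cq->u.cq.page_pool, pages[i],
					   offset, data_size_of_entry(i));
	}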

> 
> > +
> >  	data = page_address(pages[page_index]);
> >  	net_prefetch(data);
> 
> Thanks,
> Olek
