Date: Fri, 19 May 2023 05:19:53 +0000
From: Sunil Kovvuri Goutham <sgoutham@...vell.com>
To: Yunsheng Lin <linyunsheng@...wei.com>,
        Ratheesh Kannoth <rkannoth@...vell.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: "davem@...emloft.net" <davem@...emloft.net>,
        "edumazet@...gle.com" <edumazet@...gle.com>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "pabeni@...hat.com" <pabeni@...hat.com>,
        Subbaraya Sundeep Bhatta <sbhatta@...vell.com>,
        Geethasowjanya Akula <gakula@...vell.com>,
        Srujana Challa <schalla@...vell.com>,
        Hariprasad Kelam <hkelam@...vell.com>
Subject: RE: [EXT] Re: [PATCH net-next v2] octeontx2-pf: Add support for page pool



> -----Original Message-----
> From: Yunsheng Lin <linyunsheng@...wei.com>
> Sent: Friday, May 19, 2023 8:07 AM
> To: Ratheesh Kannoth <rkannoth@...vell.com>; netdev@...r.kernel.org;
> linux-kernel@...r.kernel.org
> Cc: Sunil Kovvuri Goutham <sgoutham@...vell.com>; davem@...emloft.net;
> edumazet@...gle.com; kuba@...nel.org; pabeni@...hat.com; Subbaraya
> Sundeep Bhatta <sbhatta@...vell.com>; Geethasowjanya Akula
> <gakula@...vell.com>; Srujana Challa <schalla@...vell.com>; Hariprasad
> Kelam <hkelam@...vell.com>
> Subject: [EXT] Re: [PATCH net-next v2] octeontx2-pf: Add support for page pool
> 
> External Email
> 
> ----------------------------------------------------------------------
> On 2023/5/19 9:52, Ratheesh Kannoth wrote:
> >> ----------------------------------------------------------------------
> >> On 2023/5/18 13:51, Ratheesh Kannoth wrote:
> >>> A page pool for each rx queue enhances rx-side performance by
> >>> reclaiming buffers back to each queue-specific pool. DMA mapping is
> >>> done only for the first allocation of a buffer.
> >>> As subsequent buffer allocations avoid DMA mapping, this results in
> >>> a performance improvement.
> >>>
> >>> Image        |  Performance with Linux kernel Packet Generator
> >>
> >> Is there any more detailed info for the performance data?
> >> Does 'kernel Packet Generator' mean using the pktgen module in
> >> net/core/pktgen.c? pktgen seems to be mostly for tx, so is there any
> >> obvious reason why the page pool optimization for rx has brought
> >> about a ten-fold improvement?
> > We used the packet generator on the TX machine.  The performance data is for
> > the RX DUT.  I will remove the packet generator text from the commit message
> > as it gives ambiguous information.
> >
> > DUT Rx      <-------------------------     TX (Linux machine with packet generator)
> > (page pool support)
> 
> Thanks for clarifying.
> DUT is for 'Device Under Test'?
> what does DUT do after it receive a packet? XDP DROP?
> 
> >
> >>
> >>> ------------ | -----------------------------------------------
> >>> Vanilla      |   3 Mpps
> >>>              |
> >>> With this    |   42 Mpps
> >>> change       |
> >>> -------------------------------------------------------------
> >>>
> >>
> >> ...
> >>
> >>>  static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
> >>>  			     dma_addr_t *dma)
> >>>  {
> >>>  	u8 *buf;
> >>>
> >>> +	if (pool->page_pool)
> >>> +		return otx2_alloc_pool_buf(pfvf, pool, dma);
> >>> +
> >>>  	buf = napi_alloc_frag_align(pool->rbsize, OTX2_ALIGN);
> >>>  	if (unlikely(!buf))
> >>>  		return -ENOMEM;
> >>
> >> It seems the above is dead code when using 'select PAGE_POOL', as
> >> PAGE_POOL config is always selected by the driver?
> > _otx2_alloc_rbuf() is common code for RX and TX.  For RX, pool->page_pool
> > != NULL, so the allocation is from the page pool.
> >
> 
> Am I missing something here? 'buf' is dma-mapped with DMA_FROM_DEVICE,
> can it be used for TX?
> 
> Also, what does 'r' in _otx2_alloc_rbuf() mean?
> 
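
Regarding the earlier question on where the gain comes from: as the commit
message says, a buffer is DMA-mapped only when it first enters the per-queue
pool, and recycled buffers come back already mapped, so the steady-state
refill path does no map/unmap work at all. Below is a rough, generic sketch
of that pattern (illustration only, not the octeontx2 driver code; the names
are made up):

#include <net/page_pool.h>

/* Rough illustration only -- not the actual driver code. */
static struct page_pool *rxq_create_page_pool(struct device *dev, int node,
					      unsigned int pool_size)
{
	struct page_pool_params pp_params = {
		.flags     = PP_FLAG_DMA_MAP,	/* map a page once, on first allocation */
		.order     = 0,
		.pool_size = pool_size,		/* one pool per RX queue */
		.nid       = node,
		.dev       = dev,
		.dma_dir   = DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
}

static int rxq_alloc_rbuf(struct page_pool *pool, dma_addr_t *dma)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (unlikely(!page))
		return -ENOMEM;

	/* A recycled page keeps its DMA mapping, so no dma_map_page() here. */
	*dma = page_pool_get_dma_addr(page);
	return 0;
}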

The HW takes care of cache coherency between the device and the CPU, hence
DMA_ATTR_SKIP_CPU_SYNC was used. The direction of the DMA mapping doesn't matter
here, so instead of duplicating the same API, 'otx2_alloc_rbuf' was used for both
Rx and Tx. 'r' stands for receive.
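
To illustrate that point, a rough sketch (not the actual driver code; the
helper name below is made up): with DMA_ATTR_SKIP_CPU_SYNC the DMA API skips
CPU cache maintenance entirely, coherency is left to the hardware, and the
same helper can hand out buffers for both Rx and Tx.

#include <linux/dma-mapping.h>

/* Rough illustration only. No CPU cache sync is done for this mapping;
 * the caller should still check the result with dma_mapping_error(). */
static dma_addr_t map_coherent_buf(struct device *dev, void *buf, size_t len)
{
	return dma_map_single_attrs(dev, buf, len, DMA_FROM_DEVICE,
				    DMA_ATTR_SKIP_CPU_SYNC);
}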

Thanks,
Sunil.
