Message-ID: <MWHPR1801MB191809E0C74271333E96C343D37C9@MWHPR1801MB1918.namprd18.prod.outlook.com>
Date: Fri, 19 May 2023 01:52:18 +0000
From: Ratheesh Kannoth <rkannoth@...vell.com>
To: Yunsheng Lin <linyunsheng@...wei.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: Sunil Kovvuri Goutham <sgoutham@...vell.com>,
	"davem@...emloft.net" <davem@...emloft.net>,
	"edumazet@...gle.com" <edumazet@...gle.com>,
	"kuba@...nel.org" <kuba@...nel.org>,
	"pabeni@...hat.com" <pabeni@...hat.com>,
	Subbaraya Sundeep Bhatta <sbhatta@...vell.com>,
	Geethasowjanya Akula <gakula@...vell.com>,
	Srujana Challa <schalla@...vell.com>,
	Hariprasad Kelam <hkelam@...vell.com>
Subject: RE: Re: [PATCH net-next v2] octeontx2-pf: Add support for page pool


> -----Original Message-----
> From: Yunsheng Lin <linyunsheng@...wei.com>
> Sent: Friday, May 19, 2023 7:12 AM
> To: Ratheesh Kannoth <rkannoth@...vell.com>; netdev@...r.kernel.org;
> linux-kernel@...r.kernel.org
> Cc: Sunil Kovvuri Goutham <sgoutham@...vell.com>;
> davem@...emloft.net; edumazet@...gle.com; kuba@...nel.org;
> pabeni@...hat.com; Subbaraya Sundeep Bhatta <sbhatta@...vell.com>;
> Geethasowjanya Akula <gakula@...vell.com>; Srujana Challa
> <schalla@...vell.com>; Hariprasad Kelam <hkelam@...vell.com>
> Subject: [EXT] Re: [PATCH net-next v2] octeontx2-pf: Add support for page
> pool
> 
> On 2023/5/18 13:51, Ratheesh Kannoth wrote:
> > A page pool per rx queue enhances rx-side performance by recycling
> > buffers back to that queue's own pool. DMA mapping is done only for
> > the first allocation of a buffer; since subsequent allocations avoid
> > the DMA mapping, performance improves.
> >
> > Image        |  Performance with Linux kernel Packet Generator
> 
> Is there any more detailed info for the performance data?
> Does 'kernel Packet Generator' mean the pktgen module in
> net/core/pktgen.c? pktgen seems to be geared more toward tx; is there an
> obvious reason why the page pool optimization for rx brought about a
> ten-fold improvement?
We used the packet generator on the TX machine; the performance data is for the RX DUT. I will remove
the packet generator text from the commit message, as it gives ambiguous information.

DUT RX (page pool support)  <-------------------------  TX (Linux machine with packet generator)
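To expand on the commit message's point that DMA mapping is done only for the first
allocation: the driver creates one page pool per RX queue, presumably with PP_FLAG_DMA_MAP
set so the pool maps a page once and recycled pages keep their mapping. A minimal sketch of
such a pool setup (field values are illustrative, not the exact hunk from this patch):

#include <net/page_pool.h>

	struct page_pool_params pp_params = {
		.flags     = PP_FLAG_DMA_MAP,  /* pool DMA-maps pages on first alloc */
		.order     = 0,                /* order-0 pages */
		.pool_size = rx_ring_depth,    /* hypothetical: this RX queue's depth */
		.nid       = NUMA_NO_NODE,
		.dev       = pfvf->dev,
		.dma_dir   = DMA_FROM_DEVICE,  /* RX direction */
	};

	pool->page_pool = page_pool_create(&pp_params);
	if (IS_ERR(pool->page_pool))
		return PTR_ERR(pool->page_pool);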

> 
> > ------------ | -----------------------------------------------
> > Vanilla      |   3 Mpps
> >              |
> > with this    |   42 Mpps
> > change       |
> > -------------------------------------------------------------
> >
> 
> ...
> 
> >  static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
> >  			     dma_addr_t *dma)
> >  {
> >  	u8 *buf;
> >
> > +	if (pool->page_pool)
> > +		return otx2_alloc_pool_buf(pfvf, pool, dma);
> > +
> >  	buf = napi_alloc_frag_align(pool->rbsize, OTX2_ALIGN);
> >  	if (unlikely(!buf))
> >  		return -ENOMEM;
> 
> It seems the above is dead code when using 'select PAGE_POOL', as
> PAGE_POOL config is always selected by the driver?
__otx2_alloc_rbuf() is common code for the RX and TX paths. For RX, pool->page_pool != NULL, so the
allocation comes from the page pool; for TX, page_pool is NULL and the napi_alloc_frag_align() path is
still taken, so it is not dead code.
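For reference, the RX page-pool allocation path could look like the sketch below, assuming it
is built on page_pool_alloc_frag() (the function name and signature come from the quoted diff;
the body is illustrative, not the verbatim patch code):

static int otx2_alloc_pool_buf(struct otx2_nic *pfvf, struct otx2_pool *pool,
			       dma_addr_t *dma)
{
	unsigned int offset = 0;
	struct page *page;

	page = page_pool_alloc_frag(pool->page_pool, &offset,
				    pool->rbsize, GFP_ATOMIC);
	if (unlikely(!page))
		return -ENOMEM;

	/* The pool DMA-mapped the page at its first allocation; recycled
	 * pages keep that mapping, so no dma_map_*() call is needed here.
	 */
	*dma = page_pool_get_dma_addr(page) + offset;
	return 0;
}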


> > @@ -1205,10 +1226,28 @@ void otx2_sq_free_sqbs(struct otx2_nic *pfvf)
> >  	}
> >  }
> >
> 
> ...
> 
> > @@ -1659,7 +1715,6 @@ int otx2_nix_config_bp(struct otx2_nic *pfvf,
> bool enable)
> >  	req->bpid_per_chan = 0;
> >  #endif
> >
> > -
> 
> Nit: unrelated change here.
Sorry, this was caused by a vim script; I will remove it.

> >  	return otx2_sync_mbox_msg(&pfvf->mbox);
> >  }
