Message-ID: <ZbuFX2TQUQBovDy2@lore-desk>
Date: Thu, 1 Feb 2024 12:49:51 +0100
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
Cc: Yunsheng Lin <linyunsheng@...wei.com>, netdev@...r.kernel.org,
	davem@...emloft.net, kuba@...nel.org, edumazet@...gle.com,
	pabeni@...hat.com, bpf@...r.kernel.org, toke@...hat.com,
	willemdebruijn.kernel@...il.com, jasowang@...hat.com,
	sdf@...gle.com, hawk@...nel.org, ilias.apalodimas@...aro.org
Subject: Re: [PATCH v6 net-next 1/5] net: add generic per-cpu page_pool
 allocator

> > On 2024/1/29 21:04, Lorenzo Bianconi wrote:
> > >> On 2024/1/28 22:20, Lorenzo Bianconi wrote:
> > >>
> > >>>  #ifdef CONFIG_LOCKDEP
> > >>>  /*
> > >>>   * register_netdevice() inits txq->_xmit_lock and sets lockdep class
> > >>> @@ -11686,6 +11690,27 @@ static void __init net_dev_struct_check(void)
> > >>>   *
> > >>>   */
> > >>>  
> > >>> +#define SD_PAGE_POOL_RING_SIZE	256
> > >>
> > >> I might have missed the reason we chose 256 here. Do we need to use a
> > >> different value for different page sizes? For a 64K page size it means
> > >> we might need to reserve 16MB of memory for each CPU.
> > > 
> > > Honestly I have not spent time on it; most of the current page_pool users set
> > > pool_size to 256. Anyway, do you mean something like:
> > > 
> > > diff --git a/net/core/dev.c b/net/core/dev.c
> > > index f70fb6cad2b2..3934a3fc5c45 100644
> > > --- a/net/core/dev.c
> > > +++ b/net/core/dev.c
> > > @@ -11806,12 +11806,11 @@ static void __init net_dev_struct_check(void)
> > >   *
> > >   */
> > >  
> > > -#define SD_PAGE_POOL_RING_SIZE	256
> > >  static int net_page_pool_alloc(int cpuid)
> > >  {
> > >  #if IS_ENABLED(CONFIG_PAGE_POOL)
> > 
> > Isn't it better to have a config like CONFIG_PER_CPU_PAGE_POOL to enable
> > this feature? That config could then be selected by whatever needs the
> > feature.
> 
> Since it will be used (at least) for generic XDP, I think this will be
> enabled 99% of the time when we have BPF enabled, right?
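
As an illustration, such an option could look something like the Kconfig
sketch below; the symbol name and where it would be selected from are just
assumptions, nothing from this series:

config PER_CPU_PAGE_POOL
	bool
	depends on PAGE_POOL
	help
	  Maintain a per-cpu page_pool so that code without its own pool
	  (e.g. generic XDP) can still recycle pages.

A user of the feature would then pull it in, e.g. with a hypothetical
"select PER_CPU_PAGE_POOL" under the XDP-related Kconfig entry.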
> 
> > 
> > >  	struct page_pool_params page_pool_params = {
> > > -		.pool_size = SD_PAGE_POOL_RING_SIZE,
> > > +		.pool_size = PAGE_SIZE < SZ_64K ? 256 : 16,
> > 
> > What about other page sizes, like 16KB?
> > How about something like below:
> > PAGE_SIZE << get_order(PER_CPU_PAGE_POOL_MAX_SIZE)
> 
> Since pool_size is the number of elements in the ptr_ring associated with the
> pool, assuming we want to consume at most PER_CPU_PAGE_POOL_MAX_SIZE for each
> CPU, something like:
> 
> PER_CPU_PAGE_POOL_MAX_SIZE / PAGE_SIZE
> 
> Regards,
> Lorenzo
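
To make the sizing arithmetic concrete, here is a quick standalone sketch
(the PER_CPU_PAGE_POOL_MAX_SIZE value of 2 MiB is an assumed budget, not a
value from this series):

#include <stdio.h>

/* Assumed per-cpu memory budget for the pool's ptr_ring. */
#define PER_CPU_PAGE_POOL_MAX_SIZE	(2UL << 20)

int main(void)
{
	unsigned long page_sizes[] = { 4096UL, 16384UL, 65536UL };

	for (int i = 0; i < 3; i++) {
		unsigned long ps = page_sizes[i];
		unsigned long pool_size = PER_CPU_PAGE_POOL_MAX_SIZE / ps;

		/* pool_size single-page entries bound the per-cpu worst
		 * case at PER_CPU_PAGE_POOL_MAX_SIZE bytes. */
		printf("page size %6lu -> pool_size %4lu (max %4lu KiB/cpu)\n",
		       ps, pool_size, pool_size * ps / 1024);
	}
	return 0;
}

With a fixed budget the ring scales itself: 512 entries with 4K pages, 32
with 64K pages, instead of a flat 256 that would pin 16MB per CPU on 64K
systems.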

Discussing with Jesper and Toke, we agreed the page_pool infrastructure will
need a way to release memory when the system is under memory pressure, so we
can defer this item to a follow-up series. What do you think?
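
One possible shape for that (only a sketch, we have not settled on anything,
and the two helpers below are hypothetical) would be a shrinker over the
per-cpu pools, shown here with the pre-6.7 register_shrinker() API:

static unsigned long
percpu_pp_count(struct shrinker *shrinker, struct shrink_control *sc)
{
	/* Hypothetical helper: pages currently cached by the pools. */
	return percpu_pp_cached_pages();
}

static unsigned long
percpu_pp_scan(struct shrinker *shrinker, struct shrink_control *sc)
{
	/* Hypothetical helper: give up to nr_to_scan pages back to
	 * the page allocator. */
	return percpu_pp_release_pages(sc->nr_to_scan);
}

static struct shrinker percpu_pp_shrinker = {
	.count_objects	= percpu_pp_count,
	.scan_objects	= percpu_pp_scan,
	.seeks		= DEFAULT_SEEKS,
};

/* at init time: register_shrinker(&percpu_pp_shrinker, "percpu-page_pool"); */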

Regards,
Lorenzo

> 
> > 
> > >  		.nid = NUMA_NO_NODE,
> > >  	};
> > >  	struct page_pool *pp_ptr;
> > 


