Message-ID: <ZGQJKRfuf4+av/MD@lore-desk>
Date: Wed, 17 May 2023 00:52:25 +0200
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Cc: Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
	Yunsheng Lin <linyunsheng@...wei.com>, netdev@...r.kernel.org,
	bpf@...r.kernel.org, davem@...emloft.net, edumazet@...gle.com,
	kuba@...nel.org, pabeni@...hat.com, ast@...nel.org,
	daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com
Subject: Re: [RFC net-next] net: veth: reduce page_pool memory footprint
 using half page per-buffer

> On Mon, May 15, 2023 at 01:24:20PM +0200, Lorenzo Bianconi wrote:
> > > On 2023/5/12 21:08, Lorenzo Bianconi wrote:
> > > > In order to reduce the page_pool memory footprint, rely on the
> > > > page_pool_dev_alloc_frag routine and reduce the buffer size
> > > > (VETH_PAGE_POOL_FRAG_SIZE) to PAGE_SIZE / 2, so that one page is consumed
> > > 
> > > Is there any performance improvement besides the memory saving? As it
> > > should reduce TLB misses, I wonder whether the reduction in TLB misses
> > > can offset the cost of the extra fragment reference count handling
> > > needed for frag support?
> > 
> > reducing the requested headroom to 192 (from 256), we get a nice improvement in
> > the 1500B frame case, while throughput is mostly the same for paged skbs
> > (e.g. MTU 8000B).
> 
> Can you define 'nice improvement'? ;)
> Show us numbers or improvement in %.

I am testing this RFC patch in the scenario reported below:

iperf tcp tx --> veth0 --> veth1 (xdp_pass) --> iperf tcp rx
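
(xdp_pass on veth1 is just a minimal pass-through program used to force the
veth XDP path; a sketch of that kind of program, not necessarily the exact
object used here:)

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_pass(struct xdp_md *ctx)
{
	/* do nothing, just hand the frame back to the stack */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";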

- 6.4.0-rc1 net-next:
  MTU 1500B: ~ 7.07 Gbps
  MTU 8000B: ~ 14.7 Gbps

- 6.4.0-rc1 net-next + page_pool frag support in veth:
  MTU 1500B: ~ 8.57 Gbps
  MTU 8000B: ~ 14.5 Gbps
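
For reference, the allocation change under test boils down to something like
the sketch below. Only VETH_PAGE_POOL_FRAG_SIZE, VETH_XDP_PACKET_HEADROOM and
page_pool_dev_alloc_frag() come from the RFC; the helper name and the
surrounding code are approximated, this is not the actual diff:

/* sketch: allocate half-page fragments from the page_pool instead of full
 * pages, with the headroom reduced so the linear part still fits in a
 * single fragment
 */
#define VETH_PAGE_POOL_FRAG_SIZE	(PAGE_SIZE / 2)	/* 2048B with 4K pages */
#define VETH_XDP_PACKET_HEADROOM	192	/* was 256 (XDP_PACKET_HEADROOM) */

static void *veth_alloc_half_page_frag(struct page_pool *pool,
				       unsigned int *offset)
{
	struct page *page;

	/* two 1500B buffers now share a single page; page_pool handles the
	 * per-fragment reference counting internally
	 */
	page = page_pool_dev_alloc_frag(pool, offset,
					VETH_PAGE_POOL_FRAG_SIZE);
	if (!page)
		return NULL;

	return page_address(page) + *offset;
}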

side note: there seems to be a throughput regression between 6.2.15 and
6.4.0-rc1 net-next in the scenario above (even without the latest veth
page_pool patches), but I have not looked into it yet.

- 6.2.15:
  MTU 1500B: ~ 7.91 Gbps
  MTU 8000B: ~ 14.1 Gbps

- 6.4.0-rc1 net-next w/o commits [0],[1],[2]:
  MTU 1500B: ~ 6.38 Gbps
  MTU 8000B: ~ 13.2 Gbps

Regards,
Lorenzo

[0] 0ebab78cbcbf  net: veth: add page_pool for page recycling
[1] 4fc418053ec7  net: veth: add page_pool stats
[2] 9d142ed484a3  net: veth: rely on napi_build_skb in veth_convert_skb_to_xdp_buff

> 
> > 
> > > 
> > > > by two 1500B frames. Reduce VETH_XDP_PACKET_HEADROOM from 256
> > > > (XDP_PACKET_HEADROOM) to 192 so that max_head_size fits in
> > > > VETH_PAGE_POOL_FRAG_SIZE. Please note that, with the default
> > > > CONFIG_MAX_SKB_FRAGS=17, the maximum supported MTU is now reduced to 36350B.
> > > 
> > > Maybe we don't need to limit the frag size to VETH_PAGE_POOL_FRAG_SIZE,
> > > and could instead use a different frag size depending on the MTU or packet size?
> > > 
> > > Perhaps page_pool_dev_alloc_frag() could also be improved to return a
> > > non-frag page if the requested frag size is larger than a specified threshold.
> > > I will try to implement it if the above idea makes sense.
> > > 
> > 
> > since there is no significant difference between the full-page and fragmented-page
> > implementations when the MTU is over the page boundary, is it worth doing so?
> > (at least for the veth use-case).
> > 
> > Regards,
> > Lorenzo
> > 
> 
> 
