Date:   Mon, 29 Aug 2022 13:44:35 +0200
From:   Richard Gobert <richardbgobert@...il.com>
To:     davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
        pabeni@...hat.com, corbet@....net, yoshfuji@...ux-ipv6.org,
        dsahern@...nel.org, alex.aring@...il.com,
        stefan@...enfreihafen.org, pablo@...filter.org,
        kadlec@...filter.org, fw@...len.de, kafai@...com,
        netdev@...r.kernel.org, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-wpan@...r.kernel.org,
        netfilter-devel@...r.kernel.org, coreteam@...filter.org
Subject: [PATCH 0/4] net-next: frags: add adaptive per-peer timeout under load

This patch series introduces an optimization of fragment queues under
load.

The goal is to improve upon the current approach of a static timeout
(ipfrag_time, 30 seconds by default) by implementing Eric's
suggestion of reducing timeouts under load [1], with additional
consideration of peer-specific load.

The timeout is reduced dynamically per peer, based on both global and
peer-specific load. low_thresh is reintroduced and now acts as a knob
for adjusting per-peer memory limits.
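
To make the idea concrete, here is a small, standalone user-space
sketch (not code from this series): it scales the fragment timeout
down from the configured default toward a floor as reassembly memory
use approaches its budget, taking the worse of the global and
per-peer load. The struct fields, the MIN_TIMEOUT_SECS floor, the
linear scaling and the per-peer budget are all illustrative
assumptions, not the series' actual heuristic.

/*
 * Standalone sketch (NOT code from this series): reduce the fragment
 * timeout as reassembly memory use approaches its budget, considering
 * both global and per-peer load. All names and the formula here are
 * illustrative assumptions only.
 */
#include <stdio.h>

#define IPFRAG_TIME_SECS        30      /* default ipfrag_time */
#define MIN_TIMEOUT_SECS        1       /* assumed floor under full load */

struct frag_load {
        long long mem_used;             /* bytes held in frag queues */
        long long limit;                /* memory budget for this scope */
};

/*
 * Timeout in seconds, reduced linearly from IPFRAG_TIME_SECS toward
 * MIN_TIMEOUT_SECS as the more loaded of the two scopes approaches
 * its budget.
 */
static int effective_timeout(const struct frag_load *global,
                             const struct frag_load *peer)
{
        long long g = global->mem_used * 100 / global->limit;
        long long p = peer->mem_used * 100 / peer->limit;
        long long load = g > p ? g : p;         /* percent of budget used */

        if (load >= 100)
                return MIN_TIMEOUT_SECS;

        return MIN_TIMEOUT_SECS +
               (int)((IPFRAG_TIME_SECS - MIN_TIMEOUT_SECS) * (100 - load) / 100);
}

int main(void)
{
        /* Global budget loosely mirroring ipfrag_high_thresh below. */
        struct frag_load global = { .mem_used = 90 << 20, .limit = 100 << 20 };
        /* Hypothetical per-peer budget derived from low_thresh. */
        struct frag_load peer   = { .mem_used =  5 << 20, .limit =  10 << 20 };

        printf("effective timeout: %d s\n", effective_timeout(&global, &peer));
        return 0;
}

In the series itself, the per-peer accounting is added in patch 3 and
the dynamic timeout in patch 4; the formula above is only meant to
show the direction, not the exact heuristic.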

A comparison of netperf results before and after applying the patch:

Before:
    [vm1 ~]# ./super_netperf.sh 10 -H 172.16.43.3 -l 60 -t UDP_STREAM
    103.23

After:
    [vm1 ~]# ./super_netperf.sh 10 -H 172.16.43.3 -l 60 -t UDP_STREAM
    576.17

A second benchmark covers a more specific use case: one
high-bandwidth, memory-hogging peer (vm1) and an "average" client
(vm2) communicating with the same server:

Before:
    [vm1 ~]# ./super_netperf.sh 10 -H 172.16.43.3 -l 60 -t UDP_STREAM
    42.57
    [vm2 ~]# ./super_netperf.sh 1 -H 172.16.43.3 -l 60 -t UDP_STREAM
    50.93

After:
    [vm1 ~]# ./super_netperf.sh 10 -H 172.16.43.3 -l 60 -t UDP_STREAM
    420.65
    [vm2 ~]# ./super_netperf.sh 1 -H 172.16.43.3 -l 60 -t UDP_STREAM
    624.79


These benchmarks were done using the following configuration:

[vm3 ~]# grep . /proc/sys/net/ipv4/ipfrag_*
/proc/sys/net/ipv4/ipfrag_high_thresh:104857600
/proc/sys/net/ipv4/ipfrag_low_thresh:78643200
/proc/sys/net/ipv4/ipfrag_max_dist:64
/proc/sys/net/ipv4/ipfrag_secret_interval:0
/proc/sys/net/ipv4/ipfrag_time:30

Regards,
Richard

[1] https://www.mail-archive.com/netdev@vger.kernel.org/msg242228.html

Richard Gobert (4):
  net-next: frags: move inetpeer from ip4 to inet
  net-next: ip6: fetch inetpeer in ip6frag_init
  net-next: frags: add inetpeer frag_mem tracking
  net-next: frags: dynamic timeout under load

 Documentation/networking/ip-sysctl.rst  |  3 +
 include/net/inet_frag.h                 | 13 ++---
 include/net/inetpeer.h                  |  1 +
 include/net/ipv6_frag.h                 |  3 +
 net/ieee802154/6lowpan/reassembly.c     |  2 +-
 net/ipv4/inet_fragment.c                | 77 ++++++++++++++++++++++---
 net/ipv4/inetpeer.c                     |  1 +
 net/ipv4/ip_fragment.c                  | 25 ++------
 net/ipv6/netfilter/nf_conntrack_reasm.c |  2 +-
 net/ipv6/reassembly.c                   |  2 +-
 10 files changed, 89 insertions(+), 40 deletions(-)

-- 
2.36.1
