Date:   Thu, 31 Mar 2022 14:59:09 +0000
From:   Vincent Pelletier <plr.vincent@...il.com>
To:     Pablo Neira Ayuso <pablo@...filter.org>
Cc:     netfilter-devel@...r.kernel.org, davem@...emloft.net,
        netdev@...r.kernel.org, kuba@...nel.org,
        Florian Westphal <fw@...len.de>
Subject: Re: [PATCH net 2/5] netfilter: conntrack: sanitize table size
 default settings

Hello,

On Fri,  3 Sep 2021 18:30:17 +0200, Pablo Neira Ayuso <pablo@...filter.org> wrote:
> From: Florian Westphal <fw@...len.de>
> 
> conntrack has two distinct table size settings:
> nf_conntrack_max and nf_conntrack_buckets.
> 
> The former limits how many conntrack objects are allowed to exist
> in each namespace.
> 
> The second sets the size of the hashtable.
> 
> As all entries are inserted twice (once for original direction, once for
> reply), there should be at least twice as many buckets in the table than
> the maximum number of conntrack objects that can exist at the same time.
> 
> Change the default multiplier to 1 and increase the chosen bucket sizes.
> This results in the same nf_conntrack_max settings as before but reduces
> the average bucket list length.
[...]
>  		nf_conntrack_htable_size
>  			= (((nr_pages << PAGE_SHIFT) / 16384)
>  			   / sizeof(struct hlist_head));
> -		if (nr_pages > (4 * (1024 * 1024 * 1024 / PAGE_SIZE)))
> -			nf_conntrack_htable_size = 65536;
> +		if (BITS_PER_LONG >= 64 &&
> +		    nr_pages > (4 * (1024 * 1024 * 1024 / PAGE_SIZE)))
> +			nf_conntrack_htable_size = 262144;
>  		else if (nr_pages > (1024 * 1024 * 1024 / PAGE_SIZE))
> -			nf_conntrack_htable_size = 16384;
[...]
> +			nf_conntrack_htable_size = 65536;

With this formula, there seems to be a discontinuity between the
proportional and fixed regimes:
64-bit: 4GB/16k/8 = 32k, which gets bumped to 256k
32-bit: 1GB/16k/4 = 16k, which gets bumped to 64k

Is this intentional?

The background for my interest in this formula comes from OpenWRT:
low-RAM devices intended to handle a lot of connections, which led
OpenWRT to use sysctl to increase the maximum number of entries in this
hash table beyond what this formula produces.
Unfortunately, the result is that not-so-low-RAM devices running
OpenWRT get the same limit as low-RAM devices, so I am trying to tweak
the divisor in the first expression and get rid of the sysctl call.
But then I fail to see how I should adapt the expressions in
these "if" blocks.

If these branches enforced lower bounds (say, something like
nf_conntrack_htable_size = max(nf_conntrack_htable_size, 256k)), I
would understand, but I find this discontinuity surprising.

Am I missing something?

For reference, this change is
  commit d532bcd0b2699d84d71a0c71d37157ac6eb3be25
in Linus' tree.

Regards,
-- 
Vincent Pelletier
GPG fingerprint 983A E8B7 3B91 1598 7A92 3845 CAC9 3691 4257 B0C1
