Message-ID: <5beb631cf0dcc03d5afad3a29671677bdbc7b931.camel@linux.intel.com>
Date: Mon, 01 Apr 2019 08:40:47 -0700
From: Alexander Duyck <alexander.h.duyck@...ux.intel.com>
To: Dmitry Safonov <dima@...sta.com>, linux-kernel@...r.kernel.org
Cc: Alexey Kuznetsov <kuznet@....inr.ac.ru>,
David Ahern <dsahern@...il.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
Ido Schimmel <idosch@...lanox.com>, netdev@...r.kernel.org
Subject: Re: [RFC 1/4] net/ipv4/fib: Remove run-time check in tnode_alloc()
On Tue, 2019-03-26 at 15:30 +0000, Dmitry Safonov wrote:
> TNODE_KMALLOC_MAX is not used anywhere, while the TNODE_VMALLOC_MAX check
> in tnode_alloc() only adds cmp/jmp instructions to tnode allocation.
> During rebalancing of the trie the function can be called thousands of
> times, and the runtime check costs a cache line and a branch-predictor
> entry. Furthermore, the check is always false on 64-bit platforms, since
> an IPv4 address is only 4 bytes and bits is limited by KEYLENGTH (32).
>
> Move the check under unlikely() and change the comparison to
> BITS_PER_LONG, optimizing tnode allocation during rebalancing (and
> removing the check completely on platforms with BITS_PER_LONG > KEYLENGTH).
>
> Signed-off-by: Dmitry Safonov <dima@...sta.com>
> ---
> net/ipv4/fib_trie.c | 8 +-------
> 1 file changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
> index a573e37e0615..ad7d56c421cb 100644
> --- a/net/ipv4/fib_trie.c
> +++ b/net/ipv4/fib_trie.c
> @@ -312,11 +312,6 @@ static inline void alias_free_mem_rcu(struct fib_alias *fa)
> call_rcu(&fa->rcu, __alias_free_mem);
> }
>
> -#define TNODE_KMALLOC_MAX \
> - ilog2((PAGE_SIZE - TNODE_SIZE(0)) / sizeof(struct key_vector *))
> -#define TNODE_VMALLOC_MAX \
> - ilog2((SIZE_MAX - TNODE_SIZE(0)) / sizeof(struct key_vector *))
> -
> static void __node_free_rcu(struct rcu_head *head)
> {
> struct tnode *n = container_of(head, struct tnode, rcu);
> @@ -333,8 +328,7 @@ static struct tnode *tnode_alloc(int bits)
> {
> size_t size;
>
> - /* verify bits is within bounds */
> - if (bits > TNODE_VMALLOC_MAX)
> + if ((BITS_PER_LONG <= KEYLENGTH) && unlikely(bits >= BITS_PER_LONG))
> return NULL;
>
> /* determine size and verify it is non-zero and didn't overflow */
I think it would be better if we kept TNODE_VMALLOC_MAX instead of
replacing it with BITS_PER_LONG. That way it stays clear that we are
limited by the size of the node on 32b systems, and by the KEYLENGTH on
64b systems. The basic idea is to preserve the logic for why we are
doing it this way instead of burying it behind built-in constants that
are merely close enough to work.
So for example, I believe TNODE_VMALLOC_MAX is 31 on a 32b system. The
main reason is that we have to subtract TNODE_SIZE from the upper limit
for size. Replacing TNODE_VMALLOC_MAX with BITS_PER_LONG abstracts that
away and makes it more likely that somebody will mishandle it later.
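
Just to illustrate what I mean, something along these lines (a rough,
untested sketch rather than an actual patch, and the constant-folding
condition would want double-checking; TNODE_SIZE, KEYLENGTH and struct
key_vector are the existing definitions in net/ipv4/fib_trie.c):

	/* keep the documented bound, but let the compiler drop the
	 * check whenever the vmalloc limit already covers KEYLENGTH
	 */
	#define TNODE_VMALLOC_MAX \
		ilog2((SIZE_MAX - TNODE_SIZE(0)) / sizeof(struct key_vector *))

	static struct tnode *tnode_alloc(int bits)
	{
		size_t size;

		/* verify bits is within bounds */
		if (TNODE_VMALLOC_MAX < KEYLENGTH &&
		    unlikely(bits > TNODE_VMALLOC_MAX))
			return NULL;

		/* determine size and verify it is non-zero and didn't
		 * overflow -- rest of the function unchanged
		 */
		size = TNODE_SIZE(1ul << bits);
		...
	}

That would keep the unlikely() hint and the compile-time elimination on
64b, while the macro name still documents why the limit is what it is.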
- Alex