Message-ID: <8c70c1f8-68df-a9cb-9bba-f26edaebd4a6@gmail.com>
Date: Wed, 7 Sep 2022 16:45:17 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Kuniyuki Iwashima <kuniyu@...zon.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>
Cc: Kuniyuki Iwashima <kuni1840@...il.com>, netdev@...r.kernel.org
Subject: Re: [PATCH v5 net-next 6/6] tcp: Introduce optional per-netns ehash.
On 9/6/22 17:55, Kuniyuki Iwashima wrote:
> The more sockets we have in the hash table, the longer we spend looking
> up a socket. When a number of small workloads run on the same host,
> they penalise each other and cause performance degradation.
>
>
> +
> +struct inet_hashinfo *inet_pernet_hashinfo_alloc(struct inet_hashinfo *hashinfo,
> + unsigned int ehash_entries)
> +{
> + struct inet_hashinfo *new_hashinfo;
> + int i;
> +
> + new_hashinfo = kmalloc(sizeof(*new_hashinfo), GFP_KERNEL);
You could probably use kmemdup(hashinfo, sizeof(*hashinfo), GFP_KERNEL) here.
> + if (!new_hashinfo)
> + goto err;
> +
> + new_hashinfo->ehash = kvmalloc_array(ehash_entries,
> + sizeof(struct inet_ehash_bucket),
> + GFP_KERNEL_ACCOUNT);
> + if (!new_hashinfo->ehash)
> + goto free_hashinfo;
> +
> + new_hashinfo->ehash_mask = ehash_entries - 1;
> +
> + if (inet_ehash_locks_alloc(new_hashinfo))
> + goto free_ehash;
> +
> + for (i = 0; i < ehash_entries; i++)
> + INIT_HLIST_NULLS_HEAD(&new_hashinfo->ehash[i].chain, i);
> +
> + new_hashinfo->bind_bucket_cachep = hashinfo->bind_bucket_cachep;
> + new_hashinfo->bhash = hashinfo->bhash;
> + new_hashinfo->bind2_bucket_cachep = hashinfo->bind2_bucket_cachep;
> + new_hashinfo->bhash2 = hashinfo->bhash2;
> + new_hashinfo->bhash_size = hashinfo->bhash_size;
> +
> + new_hashinfo->lhash2_mask = hashinfo->lhash2_mask;
> + new_hashinfo->lhash2 = hashinfo->lhash2;
This would avoid copying all these @hashinfo fields.