Message-ID: <4985C9F9.1020103@cosmosbay.com>
Date:	Sun, 01 Feb 2009 17:12:41 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Evgeniy Polyakov <zbr@...emap.net>
CC:	Stephen Hemminger <shemminger@...tta.com>,
	Herbert Xu <herbert@...dor.apana.org.au>, berrange@...hat.com,
	et-mgmt-tools@...hat.com, davem@...emloft.net,
	netdev@...r.kernel.org
Subject: Re: virt-manager broken by bind(0) in net-next.

Evgeniy Polyakov wrote:
> Hi Eric.
> 
> On Sat, Jan 31, 2009 at 11:17:15AM +0100, Eric Dumazet (dada1@...mosbay.com) wrote:
>> We only need to know if the *fix* solves Stephen's problem.
>>
>> For an example of the performance effects of careful variable placement and a
>> per-cpu counter strategy, you might consult:
>>
>> http://lkml.indiana.edu/hypermail/linux/kernel/0812.1/01624.html
> 
> Impressive, but to be 100% fair it is not only because of the cache line
> issues :)
> 
>> Now, with these patches applied, try to see the effect of your new bsockets
>> field on a network workload doing a lot of socket bind()/unbind() calls...
>>
>> With current kernels, you probably won't notice because the inode/dcache
>> cache lines are already hot, but that might change eventually...
> 
> David applied the patch that fixed the problem, so we can return to the
> cache line issues. What do you think about the last version, where the
> bsockets field was placed at the very end of the structure with the
> cacheline_aligned_on_smp attribute?
> 

Yes, at a minimum, move it away from the first cache line.

And use atomic_t so that we don't have to worry about accumulated errors
on SMP for this variable. We will see later whether a percpu counter
is wanted or not.
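
If a per-cpu counter does turn out to be wanted later, the conversion could
look roughly like the sketch below. This is hypothetical and untested, written
against the 2.6.29-era <linux/percpu_counter.h> API; the struct and helper
names are invented for illustration and only mirror the fields touched by the
patch that follows.

#include <linux/percpu_counter.h>

struct hashinfo_sketch {
	/* ... read-mostly fields, as in struct inet_hashinfo ... */

	/* per-cpu batched counter instead of a shared atomic_t */
	struct percpu_counter	bsockets;
};

static int sketch_hashinfo_init(struct hashinfo_sketch *h)
{
	/* allocates the per-cpu slots, so it can fail, unlike atomic_set() */
	return percpu_counter_init(&h->bsockets, 0);
}

static void sketch_bind_hash(struct hashinfo_sketch *h)
{
	/* mostly CPU-local update, no shared cache line dirtied per bind() */
	percpu_counter_inc(&h->bsockets);
}

static void sketch_put_port(struct hashinfo_sketch *h)
{
	percpu_counter_dec(&h->bsockets);
}

static int sketch_ports_exhausted(struct hashinfo_sketch *h, int low, int high)
{
	/* an approximate read is good enough for the (high - low) + 1 check */
	return percpu_counter_read_positive(&h->bsockets) > (high - low) + 1;
}

The trade-offs are the ones deferred above: reads become approximate, and
initialization can fail and needs a matching percpu_counter_destroy(), which
is why the plain atomic_t is kept for now.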

Thank you

[PATCH] net: move bsockets outside of read only beginning of struct inet_hashinfo

And switch bsockets to atomic_t since it might be changed in parallel.

Signed-off-by: Eric Dumazet <dada1@...mosbay.com>
---
 include/net/inet_hashtables.h   |    3 ++-
 net/ipv4/inet_connection_sock.c |    2 +-
 net/ipv4/inet_hashtables.c      |    5 +++--
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h
index 8d98dc7..a44e224 100644
--- a/include/net/inet_hashtables.h
+++ b/include/net/inet_hashtables.h
@@ -134,7 +134,7 @@ struct inet_hashinfo {
 	struct inet_bind_hashbucket	*bhash;
 
 	unsigned int			bhash_size;
-	int				bsockets;
+	/* 4 bytes hole on 64 bit */
 
 	struct kmem_cache		*bind_bucket_cachep;
 
@@ -151,6 +151,7 @@ struct inet_hashinfo {
 	struct inet_listen_hashbucket	listening_hash[INET_LHTABLE_SIZE]
 					____cacheline_aligned_in_smp;
 
+	atomic_t			bsockets;
 };
 
 static inline struct inet_ehash_bucket *inet_ehash_bucket(
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 9bc6a18..22cd19e 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -119,7 +119,7 @@ again:
 					    (tb->num_owners < smallest_size || smallest_size == -1)) {
 						smallest_size = tb->num_owners;
 						smallest_rover = rover;
-						if (hashinfo->bsockets > (high - low) + 1) {
+						if (atomic_read(&hashinfo->bsockets) > (high - low) + 1) {
 							spin_unlock(&head->lock);
 							snum = smallest_rover;
 							goto have_snum;
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index d7b6178..625cc5f 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -62,7 +62,7 @@ void inet_bind_hash(struct sock *sk, struct inet_bind_bucket *tb,
 {
 	struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
 
-	hashinfo->bsockets++;
+	atomic_inc(&hashinfo->bsockets);
 
 	inet_sk(sk)->num = snum;
 	sk_add_bind_node(sk, &tb->owners);
@@ -81,7 +81,7 @@ static void __inet_put_port(struct sock *sk)
 	struct inet_bind_hashbucket *head = &hashinfo->bhash[bhash];
 	struct inet_bind_bucket *tb;
 
-	hashinfo->bsockets--;
+	atomic_dec(&hashinfo->bsockets);
 
 	spin_lock(&head->lock);
 	tb = inet_csk(sk)->icsk_bind_hash;
@@ -532,6 +532,7 @@ void inet_hashinfo_init(struct inet_hashinfo *h)
 {
 	int i;
 
+	atomic_set(&h->bsockets, 0);
 	for (i = 0; i < INET_LHTABLE_SIZE; i++) {
 		spin_lock_init(&h->listening_hash[i].lock);
 		INIT_HLIST_NULLS_HEAD(&h->listening_hash[i].head,
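
As a way to exercise this path, the "lots of socket bind()/unbind() calls"
workload mentioned earlier in the thread can be approximated from userspace
with a trivial loop such as the one below. This is an illustrative benchmark
sketch only, not part of the patch: each iteration binds a fresh TCP socket to
an ephemeral port (port 0, as in the bug report) and closes it, which runs
through inet_bind_hash() and __inet_put_port().

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr;
	int i, fd;

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = 0;			/* let the kernel pick a port */

	for (i = 0; i < 1000000; i++) {
		fd = socket(AF_INET, SOCK_STREAM, 0);
		if (fd < 0) {
			perror("socket");
			return 1;
		}
		if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
			perror("bind");
			close(fd);
			return 1;
		}
		close(fd);			/* unbind: releases the port */
	}
	return 0;
}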
