Message-ID: <4978EE03.9040207@cosmosbay.com>
Date:	Thu, 22 Jan 2009 23:06:59 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Vitaly Mayatskikh <v.mayatskih@...il.com>
CC:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org
Subject: Re: speed regression in udp_lib_lport_inuse()

Vitaly Mayatskikh wrote:
> Hello!
> 
> I found that your latest patches w.r.t. UDP port randomization really
> solve the "finding the shortest chain kills randomness" problem, but they
> significantly slow the system down when almost every port is in use: the
> kernel spends too much time trying to find a free port number.
> 
> Try to compile and run this reproducer (after raising the open files
> limit).
> 
> #include <stdio.h>
> #include <stdlib.h>
> #include <errno.h>
> #include <string.h>
> #include <sys/types.h>
> #include <sys/socket.h>
> #include <netinet/in.h>
> #include <pthread.h>
> #include <assert.h>
> #include <unistd.h>
> 
> #define PORTS 65536
> #define NP 64
> #define THREADS	/* comment out to use fork() instead of threads */
> 
> void* foo(void* arg)
> {
> 	int s, err, i, port;
> 	socklen_t len;
> 	struct sockaddr_in sa;
> 	/* p[port] = fd of the socket currently bound to that port, 0 if none */
> 	unsigned int p[PORTS] = { 0 };
> 
> 	for (i = 0; i < PORTS * 100; ++i) {
> 		s = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
> 		assert(s >= 0);
> 		memset(&sa, 0, sizeof(sa));
> 		sa.sin_addr.s_addr = htonl(INADDR_ANY);
> 		sa.sin_family = AF_INET;
> 		sa.sin_port = 0;
> 		err = bind(s, (const struct sockaddr*)&sa, sizeof(sa));

Bug here: if bind() returns -1 (all ports are in use), the error is never
checked, and getsockname() is then called on an unbound socket.
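A minimal fix, keeping the rest of the loop unchanged, could be:

		err = bind(s, (const struct sockaddr*)&sa, sizeof(sa));
		if (err == -1) {	/* no free port left */
			close(s);
			continue;
		}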

> 
> 		len = sizeof(sa);
> 		getsockname(s, (struct sockaddr*)&sa, &len);
> 		port = ntohs(sa.sin_port);
> 		p[port] = s;
> // free some ports so the next bind() has a chance to succeed
> 		if (port + 1 < PORTS && p[port + 1]) {
> 			close(p[port + 1]);
> 			p[port + 1] = 0;
> 		}
> 		if (port > 0 && p[port - 1]) {
> 			close(p[port - 1]);
> 			p[port - 1] = 0;
> 		}
> 	}
> 	return NULL;
> }
> 
> int main()
> {
> 	int i, err;
> #ifdef THREADS
> 	pthread_t t[NP];
> 
> 	for (i = 0; i < NP; ++i)
> 	{
> 		err = pthread_create(&t[i], NULL, foo, NULL);
> 		assert(err == 0);
> 	}
> 	for (i = 0; i < NP; ++i)
> 	{
> 		err = pthread_join(t[i], NULL);
> 		assert(err == 0);
> 	}
> #else
> 	for (i = 0; i < NP; ++i) {
> 		err = fork();
> 		if (err == 0) {
> 			foo(NULL);
> 			exit(0);	/* keep children out of the fork loop */
> 		}
> 	}
> #endif
> }
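> (Something like "gcc -pthread -o reproducer reproducer.c" builds it; raise
> the fd limit first, e.g. "ulimit -n 100000", before running.)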
> 
> I ran glxgears and got these numbers:
> 
> $ glxgears 
> 3297 frames in 5.0 seconds = 659.283 FPS
> 3680 frames in 5.0 seconds = 735.847 FPS
> 3840 frames in 5.0 seconds = 767.891 FPS
> 3574 frames in 5.0 seconds = 714.704 FPS
> -> here I ran reproducer
> 2507 frames in 5.1 seconds = 493.173 FPS
> 56 frames in 7.7 seconds =  7.316 FPS
> 14 frames in 5.1 seconds =  2.752 FPS
> 1 frames in 6.8 seconds =  0.146 FPS
> 9 frames in 7.6 seconds =  1.188 FPS
> 1 frames in 9.3 seconds =  0.108 FPS
> 12 frames in 5.5 seconds =  2.187 FPS
> 30 frames in 9.0 seconds =  3.338 FPS
> 25 frames in 5.1 seconds =  4.888 FPS
> <- here I killed reproducer
> 1034 frames in 5.0 seconds = 206.764 FPS
> 3728 frames in 5.0 seconds = 745.541 FPS
> 3668 frames in 5.0 seconds = 733.496 FPS
> 
> The last stable kernel survives this more or less smoothly.
> 
> Thanks!

Hello Vitaly, thanks for this excellent report.

Yes, the current code is really not good when almost all ports are in use:

We now have to scan up to 28232 [1] chains of about 220 sockets each: with
UDP_HTABLE_SIZE = 128 slots, 28232 bound ports mean 28232 / 128 ~ 220 sockets
per chain, so a single bind(0) can examine roughly 28232 * 220 ~ 6.2 million
list entries. That's very long (but at least the thread is preemptible).

In the past (before the patches), only one thread was allowed to run in the
kernel while scanning the UDP port table (we had a single global lock,
udp_hash_lock, protecting the whole table). That thread was faster because it
was not slowed down by other threads. (But the rwlock we used was responsible
for starving writers when many UDP frames were received.)



One way to solve the problem could be the following:

1) Raise UDP_HTABLE_SIZE from 128 to 1024 to cut average chain lengths by a
factor of 8 (from about 220 to about 28 sockets in the scenario above).

2) In the bind(0) algorithm, use RCU to find a possibly usable port. All cpus
can run in parallel, without dirtying locks. Then lock the found chain and
recheck that the port is still available before using it (rough sketch after
the footnote below).

[1] Replace 28232 with whatever your /proc/sys/net/ipv4/ip_local_port_range
gives; with the default range: 61000 - 32768 = 28232.
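Uncompiled sketch of idea 2), reusing names from the current code. The
udp_lib_lport_inuse() calls are simplified (the real function takes more
arguments), randomization and port-range wrapping are omitted, and the actual
hash insertion is elided:

	/* lockless scan to pick a candidate port, then a locked recheck,
	 * since another cpu may grab the same port in between */
	for (i = 0; i < remaining; i++) {
		snum = first + i;
		hslot = &udptable->hash[udp_hashfn(net, snum)];

		rcu_read_lock();
		inuse = udp_lib_lport_inuse(net, snum, hslot, ...);
		rcu_read_unlock();
		if (inuse)
			continue;

		spin_lock_bh(&hslot->lock);
		if (!udp_lib_lport_inuse(net, snum, hslot, ...)) {
			/* still free: claim it and insert sk into the
			 * chain while we hold the chain lock */
			inet_sk(sk)->num = snum;
			...
			spin_unlock_bh(&hslot->lock);
			goto found;
		}
		spin_unlock_bh(&hslot->lock);
	}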

I will try to code a patch before this weekend.

Thanks

Note: I tried using a mutex to force only one thread at a time into the
bind(0) code, but got no real speedup. It should still help on an SMP machine,
since only one cpu will then be busy in bind(0). Patch below; the
mutex_acquired flag is needed because the fail/fail_unlock labels are also
reached on the bind(non-zero port) path, where the mutex is not taken.


diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index cf5ab05..a572407 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -155,6 +155,8 @@ int udp_lib_get_port(struct sock *sk, unsigned short snum,
 	struct udp_hslot *hslot;
 	struct udp_table *udptable = sk->sk_prot->h.udp_table;
 	int    error = 1;
+	static DEFINE_MUTEX(bind0_mutex);
+	int mutex_acquired = 0;
 	struct net *net = sock_net(sk);
 
 	if (!snum) {
@@ -162,6 +164,8 @@ int udp_lib_get_port(struct sock *sk, unsigned short snum,
 		unsigned rand;
 		unsigned short first;
 
+		mutex_lock(&bind0_mutex);
+		mutex_acquired = 1;
 		inet_get_local_port_range(&low, &high);
 		remaining = (high - low) + 1;
 
@@ -196,6 +200,8 @@ int udp_lib_get_port(struct sock *sk, unsigned short snum,
 fail_unlock:
 	spin_unlock_bh(&hslot->lock);
 fail:
+	if (mutex_acquired)
+		mutex_unlock(&bind0_mutex);
 	return error;
 }
 
