Date:	Mon, 30 Mar 2009 11:57:57 -0700
From:	Rick Jones <rick.jones2@...com>
To:	Eric Dumazet <dada1@...mosbay.com>
CC:	Patrick McHardy <kaber@...sh.net>,
	Stephen Hemminger <shemminger@...tta.com>,
	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	netfilter-devel@...r.kernel.org
Subject: Re: [PATCH] netfilter: finer grained nf_conn locking

Eric Dumazet wrote:
> Hi Patrick
> 
> Apparently we could not finish the removal of tcp_lock for 2.6.30 :(
> 
> Stephen suggested using a 4-byte hole in struct nf_conntrack,
> but that works only on 64-bit arches where sizeof(spinlock_t) <= 4.
> 
> We could do a hybrid thing: use nf_conn.ct_general.lock on 64-bit
> arches when sizeof(spinlock_t) <= 4.
> 
> Other cases would use a carefully sized array of spinlocks...
> 
> Thank you
> 
> [PATCH] netfilter: finer grained nf_conn locking
> 
> Introduce fine-grained lock infrastructure for nf_conn.
> Where possible, we use a 32-bit hole on 64-bit arches.
> Otherwise we use a global array of hashed spinlocks, so we don't
> change the size of "struct nf_conn".
> 
> Get rid of the central tcp_lock rwlock used in TCP conntracking,
> using this infrastructure for better performance on SMP.
> 
> "tbench 8" results on my 8 core machine (32bit kernel, with
> conntracking on) : 2319 MB/s instead of 2284 MB/s
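
For readers following along: the hashed-spinlock fallback described above can be sketched in user space roughly as below. This is a minimal illustration, not the actual patch; the array size, the multiplicative hash constant, and the helper names are assumptions, and pthread_mutex_t stands in for the kernel's spinlock_t.

```c
#include <pthread.h>
#include <stdint.h>

/* Illustrative sketch of a global array of hashed spinlocks.
 * NF_CONN_LOCKS is an assumed size; it must be a power of two
 * so the mask below selects a valid index. */
#define NF_CONN_LOCKS 512

static pthread_mutex_t nf_conn_locks[NF_CONN_LOCKS];

/* Hash the conntrack pointer down to a lock index
 * (multiplicative hash, similar in spirit to hash_ptr()). */
static unsigned int nf_conn_lock_hash(const void *ct)
{
	uint64_t v = (uintptr_t)ct >> 4;   /* drop alignment bits */
	return (unsigned int)((v * 0x9e3779b97f4a7c15ULL) >> 32)
	       & (NF_CONN_LOCKS - 1);
}

/* The conntrack code would take the per-object lock like this;
 * objects that hash to the same slot simply share a lock. */
static void nf_conn_entry_lock(const void *ct)
{
	pthread_mutex_lock(&nf_conn_locks[nf_conn_lock_hash(ct)]);
}

static void nf_conn_entry_unlock(const void *ct)
{
	pthread_mutex_unlock(&nf_conn_locks[nf_conn_lock_hash(ct)]);
}
```

The point of the power-of-two array is that it bounds memory use to a fixed number of locks regardless of how many conntrack entries exist, at the cost of occasional false sharing when two entries hash to the same slot.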

Is this an implicit request for me to try to resurrect the 32-core setup?

rick jones
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
