Message-ID: <472A2251.4000701@cosmosbay.com>
Date:	Thu, 01 Nov 2007 20:00:33 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Rick Jones <rick.jones2@...com>
CC:	Stephen Hemminger <shemminger@...ux-foundation.org>,
	"David S. Miller" <davem@...emloft.net>,
	Linux Netdev List <netdev@...r.kernel.org>,
	Andi Kleen <ak@...e.de>,
	Arnaldo Carvalho de Melo <acme@...hat.com>
Subject: Re: [PATCH] INET : removes per bucket rwlock in tcp/dccp ehash table

Rick Jones wrote:
> Eric Dumazet wrote:
>> Stephen Hemminger wrote:
>>
>>> On Thu, 01 Nov 2007 11:16:20 +0100
>>> Eric Dumazet <dada1@...mosbay.com> wrote:
>>>
>>>> As done two years ago on the IP route cache table (commit
>>>> 22c047ccbc68fa8f3fa57f0e8f906479a062c426), we can avoid using one
>>>> lock per hash bucket for the huge TCP/DCCP hash tables.
> 
> The TCP hashes are looked at with higher frequency than the route cache,
> yes?

It depends on the workload, but in general I would say the reverse.

> 
>>>> On a typical x86_64 platform, this saves about 2MB or 4MB of RAM,
>>>> with little performance difference. (We hit a different cache line
>>>> for the rwlock, but then the bucket cache line has a better sharing
>>>> factor among CPUs, since we dirty it less often.)
>>>>
>>>> Using a 'small' table of hashed rwlocks should be more than enough 
>>>> to provide correct SMP concurrency between different buckets, 
>>>> without using too much memory. Sizing of this table depends on 
>>>> NR_CPUS and various CONFIG settings.
> 
> Something tells me that finding a 64-core system with a suitable
> workload to try this on could be a good thing.  Wish I had one at my disposal.

If you find one, please give it to me when you've finished playing^Wworking
with it :)
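
For anyone curious, here is a minimal userspace sketch of the hashed lock
table idea quoted above. It uses pthread rwlocks in place of the kernel's
rwlock_t, and the names and sizing (EHASH_LOCKS, bucket_lock, ...) are made
up for the example; the actual patch derives the table size from NR_CPUS and
various CONFIG options.

/*
 * Userspace illustration of a hashed lock table: one small, power-of-two
 * array of rwlocks shared by all hash buckets, with a bucket index folded
 * into that array.  All names here are hypothetical.
 */
#include <pthread.h>
#include <stddef.h>

#define EHASH_LOCKS 256                 /* must be a power of two */

static pthread_rwlock_t ehash_locks[EHASH_LOCKS];

/* Map a bucket index to one of the shared locks. */
static inline pthread_rwlock_t *bucket_lock(size_t bucket)
{
        return &ehash_locks[bucket & (EHASH_LOCKS - 1)];
}

static void ehash_locks_init(void)
{
        for (size_t i = 0; i < EHASH_LOCKS; i++)
                pthread_rwlock_init(&ehash_locks[i], NULL);
}

/* Lookup path: take the bucket's lock for reading. */
static void example_lookup(size_t bucket)
{
        pthread_rwlock_rdlock(bucket_lock(bucket));
        /* ... walk the bucket's chain ... */
        pthread_rwlock_unlock(bucket_lock(bucket));
}

/* Insert/delete path: take the same lock for writing. */
static void example_insert(size_t bucket)
{
        pthread_rwlock_wrlock(bucket_lock(bucket));
        /* ... modify the bucket's chain ... */
        pthread_rwlock_unlock(bucket_lock(bucket));
}

int main(void)
{
        ehash_locks_init();
        example_lookup(12345);
        example_insert(12345);
        return 0;
}

Two buckets contend only when they happen to map to the same entry of the
small array, so concurrency stays close to per-bucket locking while the
memory cost drops from one rwlock per bucket to EHASH_LOCKS rwlocks total.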


