Open Source and information security mailing list archives
Date: Sun, 04 Nov 2007 18:58:47 +0100
From: Jarek Poplawski <jarkao2@...pl>
To: Eric Dumazet <dada1@...mosbay.com>
CC: David Miller <davem@...emloft.net>, ak@...e.de, netdev@...r.kernel.org, acme@...hat.com
Subject: Re: [PATCH] INET : removes per bucket rwlock in tcp/dccp ehash table

Eric Dumazet wrote, On 11/04/2007 12:31 PM:

> David Miller a écrit :
>> From: Andi Kleen <ak@...e.de>
>> Date: Sun, 4 Nov 2007 00:18:14 +0100
>>
>>> On Thursday 01 November 2007 11:16:20 Eric Dumazet wrote:
...
>>> Also the EHASH_LOCK_SZ == 0 special case is a little strange. Why did
>>> you add that?
>>
>> He explained this in another reply, because ifdefs are ugly.

But I hope he was only joking, wasn't he? Let's make it clear: ifdefs are in K&R, so they are very nice! Just like all of C! (K, &, and R as well.) You know, I can even imagine there are people who keep K&R by their beds instead of some other book, so they could be serious about such things. (But don't worry, it's not me - happily, I'm not serious!)

This patch looks OK now, but a bit of grumbling shouldn't harm:

...
> [PATCH] INET : removes per bucket rwlock in tcp/dccp ehash table
>
> As done two years ago on IP route cache table (commit
> 22c047ccbc68fa8f3fa57f0e8f906479a062c426), we can avoid using one lock per
> hash bucket for the huge TCP/DCCP hash tables.
>
> On a typical x86_64 platform, this saves about 2MB or 4MB of ram, for litle

- litle
+ little

...
> +static inline int inet_ehash_locks_alloc(struct inet_hashinfo *hashinfo)
> +{
> +	unsigned int i, size = 256;
> +#if defined(CONFIG_PROVE_LOCKING)
> +	unsigned int nr_pcpus = 2;
> +#else
> +	unsigned int nr_pcpus = num_possible_cpus();
> +#endif
> +	if (nr_pcpus >= 4)
> +		size = 512;
> +	if (nr_pcpus >= 8)
> +		size = 1024;
> +	if (nr_pcpus >= 16)
> +		size = 2048;
> +	if (nr_pcpus >= 32)
> +		size = 4096;

It seems maybe in the future this could look a bit nicer with some log-type shifting.
> +	if (sizeof(rwlock_t) != 0) {
> +#ifdef CONFIG_NUMA
> +		if (size * sizeof(rwlock_t) > PAGE_SIZE)
> +			hashinfo->ehash_locks = vmalloc(size * sizeof(rwlock_t));
> +		else
> +#endif
> +			hashinfo->ehash_locks = kmalloc(size * sizeof(rwlock_t),
> +							GFP_KERNEL);
> +		if (!hashinfo->ehash_locks)
> +			return ENOMEM;

Probably doesn't matter now, but maybe more common?:

			return -ENOMEM;

> +		for (i = 0; i < size; i++)
> +			rwlock_init(&hashinfo->ehash_locks[i]);

This looks better now, but it is still doubtful to me: even if it is safe with the current rwlock implementation, can't we imagine some new debugging or statistical code being added which would be called from rwlock_init() without using the rwlock_t structure? IMHO, if read_lock() etc. are called in such a case, rwlock_init() should be done as well.

Regards,
Jarek P.

-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html