Date: Mon, 3 Mar 2008 13:32:54 -0800 (PST)
From: Christoph Lameter <clameter@....com>
To: Pekka Enberg <penberg@...helsinki.fi>
Cc: Nick Piggin <npiggin@...e.de>, netdev@...r.kernel.org,
    Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
    yanmin_zhang@...ux.intel.com, David Miller <davem@...emloft.net>,
    Eric Dumazet <dada1@...mosbay.com>
Subject: Re: [rfc][patch 1/3] slub: fix small HWCACHE_ALIGN alignment

On Mon, 3 Mar 2008, Pekka Enberg wrote:

> Well, not my definition either but SLAB has guaranteed that for small
> objects in the past, so I think Nick has a point here. However, with
> all this back and forth, I've lost track why this matters. I suppose
> it causes regression on some workload?

Well, the guarantee can only be exploited if you check the cacheline size and the object size from the code that creates the slab cache. Basically, you would have to guesstimate what the slab allocator is doing, so the guarantee is essentially meaningless. And if the object is larger than a cacheline, this will never work.

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html