Message-ID: <alpine.DEB.2.00.1010120745360.31832@router.home>
Date: Tue, 12 Oct 2010 07:50:27 -0500 (CDT)
From: Christoph Lameter <cl@...ux.com>
To: Pekka Enberg <penberg@...helsinki.fi>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Eric Dumazet <eric.dumazet@...il.com>, David Miller <davem@...emloft.net>, netdev <netdev@...r.kernel.org>, Michael Chan <mchan@...adcom.com>, Eilon Greenstein <eilong@...adcom.com>, Christoph Hellwig <hch@....de>, David Rientjes <rientjes@...gle.com>, LKML <linux-kernel@...r.kernel.org>, Nick Piggin <npiggin@...nel.dk>
Subject: Re: [PATCH net-next] net: allocate skbs on local node

On Tue, 12 Oct 2010, Pekka Enberg wrote:

> There's little point in discussing the removal of SLAB as long as there are
> performance regressions for real workloads from people who are willing to
> share results and test patches. I'm optimistic that we'll be able to try
> removing SLAB some time next year unless something interesting pops up...

Hmmm. Given these effects I think we should be more cautious about the
unification work. Maybe the "unified allocator" should replace SLAB instead,
and SLUB can stay unchanged?

The unification patches go back to the one-lock-per-node SLAB scheme because
the queue maintenance overhead otherwise causes large regressions in hackbench
due to the many atomic ops. The per-node lock, however, seems to be what is
causing problems here in the network stack.

Take the unified allocator as a SLAB cleanup instead? Then at least we would
have a large common code base and differentiate only through the locking
mechanism.
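To make the tradeoff in the last two paragraphs concrete, here is a toy
user-space sketch. It is not the kernel's SLAB/SLUB code; all names,
structures, and simplifications (no refill path, no ABA protection) are
invented purely to show where the per-node lock and the per-CPU atomic
ops end up on the allocation fast path.

/*
 * Toy sketch of the two fast-path styles discussed above.  Build with
 * something like: gcc -std=c11 -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct object {
	struct object *next;
};

/*
 * Style 1: one lock per node.  Every allocation and free on the node
 * serializes on node_lock; workloads that hammer a single node from
 * many CPUs (like the skb allocations in this thread) contend here.
 */
static pthread_spinlock_t node_lock;
static struct object *node_freelist;

static struct object *alloc_node_locked(void)
{
	struct object *obj;

	pthread_spin_lock(&node_lock);
	obj = node_freelist;
	if (obj)
		node_freelist = obj->next;
	pthread_spin_unlock(&node_lock);
	return obj;
}

/*
 * Style 2: a per-CPU queue maintained with atomic ops (one such list
 * per CPU in a real allocator; a single one is enough for the sketch).
 * The hot path avoids the shared lock, but every pop (and every remote
 * free) pays for queue maintenance with a cmpxchg -- the atomic-op
 * overhead that shows up in hackbench.  Real code must also deal with
 * the ABA problem, which this sketch ignores.
 */
static _Atomic(struct object *) cpu_freelist;

static struct object *alloc_percpu_lockless(void)
{
	struct object *obj, *next;

	do {
		obj = atomic_load_explicit(&cpu_freelist, memory_order_acquire);
		if (!obj)
			return NULL;	/* would refill from the node here */
		next = obj->next;
	} while (!atomic_compare_exchange_weak(&cpu_freelist, &obj, next));
	return obj;
}

int main(void)
{
	struct object a, b;

	pthread_spin_init(&node_lock, PTHREAD_PROCESS_PRIVATE);
	a.next = NULL;
	node_freelist = &a;
	b.next = NULL;
	atomic_store(&cpu_freelist, &b);

	printf("locked pop:   %p\n", (void *)alloc_node_locked());
	printf("lockless pop: %p\n", (void *)alloc_percpu_lockless());
	return 0;
}

The two directions in the thread amount to choosing which of these costs
to pay on the hot path: lock contention concentrated per node, or atomic
ops spread across the per-CPU queues.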