lists.openwall.net — Open Source and information security mailing list archives
Date: Mon, 05 Dec 2016 13:36:24 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: "David S . Miller" <davem@...emloft.net>, netdev <netdev@...r.kernel.org>, Yuchung Cheng <ycheng@...gle.com>, Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH v2 net-next 7/8] net: reorganize struct sock for better data locality

Hi Eric,

On Sat, 2016-12-03 at 11:14 -0800, Eric Dumazet wrote:
> Group fields used in TX path, and keep some cache lines mostly read
> to permit sharing among cpus.
>
> Gained two 4 bytes holes on 64bit arches.
>
> Added a place holder for tcp tsq_flags, next to sk_wmem_alloc
> to speed up tcp_wfree() in the following patch.
>
> I have not added ____cacheline_aligned_in_smp, this might be done later.
> I prefer doing this once inet and tcp/udp sockets reorg is also done.
>
> Tested with both TCP and UDP.
>
> UDP receiver performance under flood increased by ~20 % :
> Accessing sk_filter/sk_wq/sk_napi_id no longer stalls because sk_drops
> was moved away from a critical cache line, now mostly read and shared.

I cherry-picked this patch only for a UDP benchmark. Under flood with many
concurrent flows, I see this 20% improvement and a noticeable decrease in
system load. Nice work, thanks Eric!

Tested-by: Paolo Abeni <pabeni@...hat.com>