Message-ID: <20130627192218.GA5936@sbohrermbp13-local.rgmadvisors.com>
Date:	Thu, 27 Jun 2013 14:22:18 -0500
From:	Shawn Bohrer <sbohrer@...advisors.com>
To:	netdev@...r.kernel.org
Subject: Understanding lock contention in __udp4_lib_mcast_deliver

I'm looking for opportunities to improve the multicast receive
performance of our application, and I decided to spend some time
on what I expected to be a small/simple improvement.  Profiling
with perf, I see spin_lock contention in __udp4_lib_mcast_deliver:

0.68%  swapper  [kernel.kallsyms]               [k] _raw_spin_lock
       |
       --- _raw_spin_lock
          |
          |--24.13%-- perf_adjust_freq_unthr_context.part.21
          |
          |--22.40%-- scheduler_tick
          |
          |--14.96%-- __udp4_lib_mcast_deliver

The lock appears to be the udp_hslot lock protecting sockets that have
been hashed based on the local port.  My application contains multiple
processes and each process listens on multiple multicast sockets.  We
open one socket per multicast addr:port per machine.  All of the local
UDP ports are unique.  The UDP hash table has 65536 entries and it
appears the hash function is simply the port number plus a
contribution from the network namespace (I believe the namespace part
should be constant since we're not using network namespaces.
Perhaps I should disable CONFIG_NET_NS).
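
For concreteness, each of our sockets is opened roughly like this (a
minimal sketch with error handling omitted; the group/port values
differ per socket):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>

    static int open_mcast_sock(const char *group, uint16_t port)
    {
            struct sockaddr_in addr;
            struct ip_mreq mreq;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(port);    /* unique local port */
            bind(fd, (struct sockaddr *)&addr, sizeof(addr));

            mreq.imr_multiaddr.s_addr = inet_addr(group);
            mreq.imr_interface.s_addr = htonl(INADDR_ANY);
            setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                       &mreq, sizeof(mreq));
            return fd;
    }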

$ dmesg | grep "UDP hash"
[    0.606472] UDP hash table entries: 65536 (order: 9, 2097152 bytes)
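
The hash itself appears to be roughly the following (paraphrasing
include/net/udp.h from the source I'm reading; exact types may
differ between versions), where mask is (hash entries - 1):

    static inline u32 udp_hashfn(struct net *net, u32 num, u32 mask)
    {
            /* num is the local port; net_hash_mix() is the
             * namespace contribution mentioned above */
            return (num + net_hash_mix(net)) & mask;
    }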

At this point I'm confused about why there is any contention on this
spin_lock.  We should have enough hash table entries to avoid
collisions, and all of our local ports are unique.  Assuming that
holds, I could imagine contention if the kernel were processing two
packets for the same multicast addr:port in parallel, but is that
possible?  I'm using a Mellanox ConnectX-3 card with multiple
receive queues, but I've always understood that all packets for a
given srcip:srcport:destip:destport tuple are delivered to a single
queue and thus are not processed in parallel.
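
For reference, from my reading of net/ipv4/udp.c the delivery path
takes the slot lock unconditionally while it walks the chain
(heavily condensed; the socket matching and flush logic are
omitted):

    struct udp_hslot *hslot = udp_hashslot(udptable, net,
                                           ntohs(uh->dest));

    spin_lock(&hslot->lock);
    sk = sk_nulls_head(&hslot->head);
    /* walk the chain, queueing the skb to each matching socket */
    ...
    spin_unlock(&hslot->lock);

So even without collisions, two packets for the same addr:port would
serialize on the same hslot lock if they were ever handled on two
CPUs at once.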

There is obviously something wrong with my thinking, and I'd
appreciate it if someone could tell me what I'm missing.

Thanks,
Shawn
