Message-Id: <1525614224.300611.1362511632.7D50FB8C@webmail.messagingengine.com>
Date:   Sun, 06 May 2018 09:43:44 -0400
From:   "Jacob S. Moroni" <mail@...emoroni.com>
To:     netdev@...r.kernel.org
Subject: Locking in network code

Hello,

I have a stupid question regarding which variant of spin_lock to use
throughout the network stack, and inside RX handlers specifically.

It's my understanding that skbuffs are normally passed into the stack
from soft IRQ context if the device is using NAPI, and hard IRQ
context if it's not using NAPI (and I guess process context too if the
driver does its own workqueue thing).

So, that means that handlers registered with netdev_rx_handler_register
may end up being called from any context.
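As I understand it, the usual rule is: if the data can be touched from
hard IRQ context, you need the _irqsave variant; if it's only ever
touched from process and softirq (NAPI) context, _bh is enough.
Roughly (a made-up sketch, not code from macvlan or ip_defrag):

```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(stats_lock);    /* hypothetical shared state */

/* Shared with a hard IRQ handler: must disable local interrupts,
 * otherwise the IRQ can preempt us and spin on the same lock.
 */
static void update_from_any_context(void)
{
	unsigned long flags;

	spin_lock_irqsave(&stats_lock, flags);
	/* ... touch shared state ... */
	spin_unlock_irqrestore(&stats_lock, flags);
}

/* Only touched from process and softirq context: disabling
 * bottom halves is sufficient, and cheaper.
 */
static void update_from_bh_or_process(void)
{
	spin_lock_bh(&stats_lock);
	/* ... touch shared state ... */
	spin_unlock_bh(&stats_lock);
}
```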

However, the RX handler in the macvlan code calls ip_check_defrag,
which can eventually lead to a call to ip_defrag, which takes a
plain spin_lock (not a _bh or _irqsave variant) around the call to
ip_frag_queue.

Is this a risk of deadlock, and if not, why?

What if you're running a system with one CPU and a packet fragment
arrives on a NAPI interface, then, while the spin_lock is held,
another fragment somehow arrives on another interface which does
its processing in hard IRQ context?
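Spelled out, the sequence I'm worried about would be something like
this (single CPU; &qp->q.lock is just my shorthand for the frag queue
lock ip_defrag takes):

```c
/* CPU 0, softirq (NAPI) context, processing fragment #1: */
spin_lock(&qp->q.lock);         /* plain lock: hard IRQs stay enabled */

	/* hard IRQ fires: fragment #2 arrives on a non-NAPI
	 * interface, and its handler pushes the skb up the stack
	 * on this same CPU...
	 */
	spin_lock(&qp->q.lock); /* same lock, same CPU: spins forever */
```

If this can happen, it seems like spin_lock_irqsave would be required
there, since disabling local interrupts would keep the hard IRQ from
preempting the critical section.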

-- 
  Jacob S. Moroni
  mail@...emoroni.com
