Message-ID: <20180507074841.6fbdcfac@xeon-e3>
Date:   Mon, 7 May 2018 07:48:41 -0700
From:   Stephen Hemminger <stephen@...workplumber.org>
To:     Alexander Duyck <alexander.duyck@...il.com>
Cc:     "Jacob S. Moroni" <mail@...emoroni.com>,
        Netdev <netdev@...r.kernel.org>
Subject: Re: Locking in network code

On Sun, 6 May 2018 09:16:26 -0700
Alexander Duyck <alexander.duyck@...il.com> wrote:

> On Sun, May 6, 2018 at 6:43 AM, Jacob S. Moroni <mail@...emoroni.com> wrote:
> > Hello,
> >
> > I have a stupid question regarding which variant of spin_lock to use
> > throughout the network stack, and inside RX handlers specifically.
> >
> > It's my understanding that skbuffs are normally passed into the stack
> > from soft IRQ context if the device is using NAPI, and hard IRQ
> > context if it's not using NAPI (and I guess process context too if the
> > driver does its own workqueue thing).
> >
> > So, that means that handlers registered with netdev_rx_handler_register
> > may end up being called from any context.  
> 
> I am pretty sure the Rx handlers are all called from softirq context.
> The hard IRQ will just call netif_rx, which will queue the packet up to
> be handled in the soft IRQ later.

The only exception is the netpoll code, which runs the stack in hardirq context.
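
In practice the locking convention this implies looks roughly like the sketch
below. It is only an illustration, not code from the thread: my_stats,
my_rx_handler() and my_stats_read() are made-up names, and a real handler
would hang its state off the net_device rather than a global.

#include <linux/spinlock.h>
#include <linux/skbuff.h>
#include <linux/netdevice.h>

/* Illustrative data shared between the softirq RX path and process context. */
struct my_stats {
	spinlock_t lock;
	u64 rx_packets;
};

static struct my_stats stats = {
	.lock = __SPIN_LOCK_UNLOCKED(stats.lock),
};

/* rx_handler: runs in softirq (net_rx_action/NAPI) context, so a plain
 * spin_lock() is enough here -- hard IRQs never take this lock, they only
 * queue packets via netif_rx(). */
static rx_handler_result_t my_rx_handler(struct sk_buff **pskb)
{
	spin_lock(&stats.lock);
	stats.rx_packets++;
	spin_unlock(&stats.lock);
	return RX_HANDLER_PASS;
}

/* Process-context reader: must disable BHs while holding the lock, otherwise
 * the softirq RX path could interrupt this CPU while the lock is held and
 * spin forever on it. */
static u64 my_stats_read(void)
{
	u64 val;

	spin_lock_bh(&stats.lock);
	val = stats.rx_packets;
	spin_unlock_bh(&stats.lock);
	return val;
}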

> > However, the RX handler in the macvlan code calls ip_check_defrag,
> > which could eventually lead to a call to ip_defrag, which ends
> > up taking a regular spin_lock around the call to ip_frag_queue.
> >
> > Is this a risk of deadlock, and if not, why?
> >
> > What if you're running a system with one CPU and a packet fragment
> > arrives on a NAPI interface, then, while the spin_lock is held,
> > another fragment somehow arrives on another interface which does
> > its processing in hard IRQ context?
> >
> > --
> >   Jacob S. Moroni
> >   mail@...emoroni.com  
> 
> Take a look at the netif_rx code and it should answer most of your
> questions. Basically everything is handed off from the hard IRQ to the
> soft IRQ via a backlog queue and then handled in net_rx_action.
> 
> - Alex
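
For reference, a heavily simplified sketch of that hand-off. This is not the
actual net/core/dev.c code -- queue limits, RPS and IRQ flag handling are all
omitted, and the function is renamed to make that obvious:

#include <linux/netdevice.h>
#include <linux/interrupt.h>
#include <linux/skbuff.h>

/* Hard IRQ (non-NAPI driver): never run the stack here.  Just park the
 * skb on this CPU's backlog queue and schedule the NET_RX softirq. */
static int sketch_netif_rx(struct sk_buff *skb)
{
	struct softnet_data *sd = this_cpu_ptr(&softnet_data);

	__skb_queue_tail(&sd->input_pkt_queue, skb);
	__raise_softirq_irqoff(NET_RX_SOFTIRQ);
	return NET_RX_SUCCESS;
}

/* Later, in softirq context, net_rx_action() polls the backlog NAPI
 * instance, dequeues the skb and feeds it into __netif_receive_skb(),
 * which is where rx_handlers (macvlan, bridge, ...) and the IP layer
 * finally run.  So a fragment arriving via a hard IRQ is not processed
 * under any stack lock in IRQ context; it waits in the backlog until
 * the softirq runs. */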
