Message-ID: <CAKgT0UddSGs-d0cbQV4YN8RLEqa478C7eG3HNFf1Y-yivWPUFw@mail.gmail.com>
Date: Sun, 6 May 2018 09:16:26 -0700
From: Alexander Duyck <alexander.duyck@...il.com>
To: "Jacob S. Moroni" <mail@...emoroni.com>
Cc: Netdev <netdev@...r.kernel.org>
Subject: Re: Locking in network code
On Sun, May 6, 2018 at 6:43 AM, Jacob S. Moroni <mail@...emoroni.com> wrote:
> Hello,
>
> I have a stupid question regarding which variant of spin_lock to use
> throughout the network stack, and inside RX handlers specifically.
>
> It's my understanding that skbuffs are normally passed into the stack
> from soft IRQ context if the device is using NAPI, and hard IRQ
> context if it's not using NAPI (and I guess process context too if the
> driver does its own workqueue thing).
>
> So, that means that handlers registered with netdev_rx_handler_register
> may end up being called from any context.
I am pretty sure the Rx handlers are all called from softirq context.
The hard IRQ will just call netif_rx, which queues the packet up to
be handled in the soft IRQ later.
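
To make that concrete, here is a minimal sketch (not taken from any real
driver; mydrv_fetch_packet() is a made-up helper) of what a non-NAPI
receive interrupt typically does:

#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>

/* Hard-IRQ handler of a hypothetical non-NAPI driver.  It only builds the
 * skb and hands it to the stack with netif_rx(); the rx_handler (macvlan,
 * bridge, ...) and everything above it run later in softirq context. */
static irqreturn_t mydrv_interrupt(int irq, void *dev_id)
{
	struct net_device *dev = dev_id;
	struct sk_buff *skb;

	skb = mydrv_fetch_packet(dev);	/* made-up helper for this sketch */
	if (!skb)
		return IRQ_NONE;

	skb->protocol = eth_type_trans(skb, dev);
	netif_rx(skb);		/* just enqueues on the per-CPU backlog */

	return IRQ_HANDLED;
}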
> However, the RX handler in the macvlan code calls ip_check_defrag,
> which could eventually lead to a call to ip_defrag, which ends
> up taking a regular spin_lock around the call to ip_frag_queue.
>
> Is this a risk of deadlock, and if not, why?
>
> What if you're running a system with one CPU and a packet fragment
> arrives on a NAPI interface, then, while the spin_lock is held,
> another fragment somehow arrives on another interface which does
> its processing in hard IRQ context?
>
> --
> Jacob S. Moroni
> mail@...emoroni.com
Take a look at the netif_rx code and it should answer most of your
questions. Basically everything is handed off from the hard IRQ to the
soft IRQ via a backlog queue and then handled in net_rx_action.
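
Very roughly (this is a simplification of the netif_rx_internal() /
enqueue_to_backlog() path in net/core/dev.c, not the literal code), the
hard-IRQ side does no protocol work at all:

/* Simplified sketch of what netif_rx() boils down to. */
static int netif_rx_sketch(struct sk_buff *skb)
{
	struct softnet_data *sd;
	unsigned long flags;

	local_irq_save(flags);
	sd = this_cpu_ptr(&softnet_data);

	/* Park the skb on the per-CPU backlog queue ... */
	__skb_queue_tail(&sd->input_pkt_queue, skb);

	/* ... and make sure NET_RX_SOFTIRQ runs to drain it.  All of the
	 * rx_handler / ip_check_defrag() / ip_defrag() work, including the
	 * spin_lock around ip_frag_queue(), happens later from
	 * net_rx_action() in softirq context, never from the hard IRQ. */
	__raise_softirq_irqoff(NET_RX_SOFTIRQ);

	local_irq_restore(flags);
	return NET_RX_SUCCESS;
}

So even on a single CPU, the second fragment's hard IRQ only appends to
the backlog; it never tries to take the frag queue lock, so it cannot
deadlock against the softirq that is holding it.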
- Alex