Message-ID: <20210311115501.uhke2wpsf6kpgexu@pengutronix.de>
Date: Thu, 11 Mar 2021 12:55:01 +0100
From: Marc Kleine-Budde <mkl@...gutronix.de>
To: Daniel Glöckner <dg@...ix.com>
Cc: netdev@...r.kernel.org, linux-can@...r.kernel.org
Subject: Re: Softirq error with mcp251xfd driver
On 10.03.2021 22:22:54, Daniel Glöckner wrote:
> On Wed, Mar 10, 2021 at 08:13:51AM +0100, Marc Kleine-Budde wrote:
> > On 10.03.2021 07:46:26, Daniel Glöckner wrote:
> > > the mcp251xfd driver uses a threaded irq handler to queue skbs with the
> > > can_rx_offload_* helpers. I get the following error on every packet until
> > > the rate limit kicks in:
> > >
> > > NOHZ tick-stop error: Non-RCU local softirq work is pending, handler
> > > #08!!!
> >
> > That's a known problem. But I had no time to investigate it.
> >
> > > Adding local_bh_disable/local_bh_enable around the can_rx_offload_* calls
> > > gets rid of the error, but is that the correct way to fix this?
> > > Internally the can_rx_offload code uses spin_lock_irqsave to safely
> > > manipulate its queue.
> >
> > The problem is not the queue handling inside of rx_offload, but the call
> > to napi_schedule(). This boils down to raising a soft IRQ (the NAPI)
> > from the threaded IRQ handler of the mcp251xfd driver.
> >
> > The local_bh_enable() "fixes" the problem by running the softirq if needed.
> >
> > https://elixir.bootlin.com/linux/v5.11/source/kernel/softirq.c#L1913
> >
> > I'm not sure how to properly fix the problem, yet.
>
> If I understand correctly, the point of using can_rx_offload_* in the
> mcp251xfd driver is that it sorts the rx, tx, and error frames according
> to their timestamp.
ACK - also, netif_rx*() doesn't guarantee packet order, while
netif_receive_skb() does.
> In that case calling local_bh_enable after each packet is not correct
> because there will never be more than one packet in the queue.
Have I understood the code correctly: will the NAPI run synchronously
in local_bh_enable()? Will the NAPI then process other skbs (e.g.
Ethernet) in our context? That would defeat the whole purpose of
reading the CAN frames independently of the NAPI...
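For reference, the workaround we are talking about boils down to
something like this. It is a sketch only, not the actual mcp251xfd
code; mcp251xfd_queue_one() is a made-up stand-in for the places in
the driver that queue skbs into rx_offload from the threaded handler:

#include <linux/bottom_half.h>
#include <linux/can/rx-offload.h>

/* Sketch only: mcp251xfd_queue_one() is hypothetical, it stands in
 * for the places in the driver that queue skbs from the threaded IRQ
 * handler.
 */
static int mcp251xfd_queue_one(struct can_rx_offload *offload,
			       struct sk_buff *skb, u32 timestamp)
{
	int err;

	/* can_rx_offload_queue_sorted() calls napi_schedule(), i.e. it
	 * raises NET_RX_SOFTIRQ from the threaded IRQ handler. That is
	 * what triggers the NOHZ tick-stop warning.
	 */
	local_bh_disable();
	err = can_rx_offload_queue_sorted(offload, skb, timestamp);
	/* local_bh_enable() runs the pending softirq (the NAPI)
	 * synchronously in this context. That silences the warning,
	 * but flushes the queue after every single frame.
	 */
	local_bh_enable();

	return err;
}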
> We want to call local_bh_disable + can_rx_offload_schedule +
> local_bh_enable only at the end of mcp251xfd_irq after intf_pending
> indicated that there are no more packets inside the chip. How about
> adding a flag to struct can_rx_offload that suppresses the automatic
> calls to can_rx_offload_schedule?
Oleksij has a work-in-progress patch series where the
can_rx_offload_schedule() is moved out of can_rx_offload_queue_sorted().
This fixes an RX-before-TX ordering problem, which arises if the NAPI
runs after the RX frames, but before the TX frames have been added to
the queue.
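With that series the driver side boils down to roughly the following.
Again only a sketch, assuming the queueing helpers keep their current
signatures; mcp251xfd_handle_events() is a made-up stand-in for the
existing read-INT/handle-RX/handle-TEF code, and the final API may
look different:

#include <linux/can/rx-offload.h>
#include <linux/interrupt.h>

static irqreturn_t mcp251xfd_irq(int irq, void *dev_id)
{
	struct mcp251xfd_priv *priv = dev_id;
	u32 intf_pending;
	int err;

	do {
		/* hypothetical helper: read INT, handle the RX FIFOs
		 * and the TEF, queue the skbs sorted by timestamp, but
		 * don't napi_schedule() from inside the queueing
		 * helpers anymore
		 */
		err = mcp251xfd_handle_events(priv, &intf_pending);
		if (err)
			break;
	} while (intf_pending);

	/* one napi_schedule() for the whole batch */
	can_rx_offload_schedule(&priv->offload);

	return IRQ_HANDLED;
}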
> If there is the risk that under high load we will never exit the loop in
> mcp251xfd_irq or if can_rx_offload_napi_poll might run again while we add
> more packets to the queue, a more complex scheme is needed.
ACK - Due to this complexity the patch series is still WIP. Maybe we can
post the series later today, so that you can pick it up.
> We could extend can_rx_offload_napi_poll to process only packets with
> a timestamp below a certain value. That value has to be read from the
> TBC register before we read the INT register.
I don't think this is a good idea; reading a register over SPI is too
expensive.
> Then the three functions can be run after each iteration to empty the
> queue. We need to update that timestamp limit one more time when we
> finally exit the loop to process those packets that have arrived after
> the reading of the TBC register when the INT register still had bits
> set. Using the timestamp of the tail of the queue is probably the
> easiest way to set the final limit.
We had the following ideas:

If no TX packets are in flight, call napi_schedule() at the end of the
IRQ handler loop.

If TX packets are in flight and at least one of them has been
TX-completed, we can flush up to and including the latest TX-completed
CAN frame. If there are no completed TX frames yet, we have to take
care that the queue doesn't grow too large, because TX frames can be
delayed almost indefinitely if they have a low priority. Always keep
in mind that reading from the controller is really expensive and we
want to avoid it at all costs.
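As a sketch of these two cases (mcp251xfd_flush_offload() and
can_rx_offload_flush_before() are made up, the latter only names the
intended semantics of "let the NAPI process everything queued with a
timestamp up to the limit and keep the rest"):

static void mcp251xfd_flush_offload(struct mcp251xfd_priv *priv,
				    unsigned int tx_inflight,
				    bool tx_completed, u32 tx_timestamp)
{
	if (!tx_inflight) {
		/* nothing can be reordered anymore, flush everything */
		can_rx_offload_schedule(&priv->offload);
	} else if (tx_completed) {
		/* flushing is safe up to and including the latest
		 * TX-completed frame (hypothetical helper)
		 */
		can_rx_offload_flush_before(&priv->offload, tx_timestamp);
	}
	/* else: TX frames pending, none completed yet. Only make sure
	 * the queue doesn't grow unbounded, without extra SPI reads.
	 */
}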
Marc
--
Pengutronix e.K. | Marc Kleine-Budde |
Embedded Linux | https://www.pengutronix.de |
Vertretung West/Dortmund | Phone: +49-231-2826-924 |
Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 |