Message-ID: <20200310110412.66b60677@carbon>
Date: Tue, 10 Mar 2020 11:04:12 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Saeed Mahameed <saeedm@...lanox.com>
Cc: "jonathan.lemon@...il.com" <jonathan.lemon@...il.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"kernel-team@...com" <kernel-team@...com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"ilias.apalodimas@...aro.org" <ilias.apalodimas@...aro.org>,
brouer@...hat.com, Li RongQing <lirongqing@...du.com>
Subject: Re: [PATCH] page_pool: use irqsave/irqrestore to protect ring
access.
On Tue, 10 Mar 2020 02:30:34 +0000
Saeed Mahameed <saeedm@...lanox.com> wrote:
> On Mon, 2020-03-09 at 17:55 -0700, David Miller wrote:
> > From: Jonathan Lemon <jonathan.lemon@...il.com>
> > Date: Mon, 9 Mar 2020 12:49:29 -0700
> >
> > > netpoll may be called from IRQ context, which may access the
> > > page pool ring. The current _bh variants do not provide sufficient
> > > protection, so use irqsave/restore instead.
> > >
> > > Error observed on a modified mlx4 driver, but the code path exists
> > > for any driver which calls page_pool_recycle from napi poll.
Netpoll calls into drivers are problematic, nasty and annoying. Drivers
usually catch netpoll calls by seeing that the NAPI budget is zero, and
handle the situation inside the driver, e.g. [1][2] (even
napi_consume_skb catches it this way); a minimal sketch of the pattern
follows the links below.
[1] https://github.com/torvalds/linux/blob/master/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c#L3179
[2] https://github.com/torvalds/linux/blob/master/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c#L1110
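
A minimal sketch of that pattern (mydrv_napi_poll is a hypothetical
driver callback; the budget check mirrors what the ixgbe code above and
napi_consume_skb do):

	/* netpoll invokes the driver's NAPI poll callback with a
	 * budget of zero, which regular NAPI polling never does.
	 */
	static int mydrv_napi_poll(struct napi_struct *napi, int budget)
	{
		int work_done = 0;

		/* Exit early if we are called by netpoll: skip RX work
		 * and anything that assumes genuine softirq context.
		 */
		if (budget <= 0)
			return budget;

		/* ... normal RX processing, up to @budget packets ... */

		return work_done;
	}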
> > >
> > > WARNING: CPU: 34 PID: 550248 at /ro/source/kernel/softirq.c:161
> > > __local_bh_enable_ip+0x35/0x50
> > > ...
> > > Signed-off-by: Jonathan Lemon <jonathan.lemon@...il.com>
> >
> > The netpoll stuff always makes the locking more complicated than it
> > needs to be. I wonder if there is another way around this issue?
> >
> > Because IRQ save/restore is a high cost to pay in this critical path.
Yes, huge NACK from me; this would kill performance. We need to find
another way.
> a printk inside IRQ context led to this, so maybe it can be avoided ..
>
> or, instead of checking in_serving_softirq(), change page_pool to
> check in_interrupt(), which is more powerful, to avoid the ptr_ring
> locking and the complication with netpoll altogether.
>
> I wonder why Jesper picked in_serving_softirq() in the first place.
> Was there a specific reason, or did he just want it to be as
> unrestrictive as possible?
I wanted to make it explicit that this code path is optimized for
softirq context.
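
For reference, the rough semantics of the two checks being contrasted
(paraphrased from include/linux/preempt.h; the exact preempt_count()
masks are an implementation detail):

	/*
	 *  in_irq()             - running a hardirq handler
	 *  in_serving_softirq() - softirq handlers are executing (note:
	 *                         this can remain true inside a hardirq
	 *                         that interrupted softirq processing)
	 *  in_interrupt()       - hardirq, softirq or NMI context; a
	 *                         superset of both of the above
	 */

One possible reading of the suggestion, as a hedged sketch only:

	if (in_serving_softirq() && !in_irq()) {
		/* genuine NAPI/softirq context: lockless fast path OK */
	} else {
		/* e.g. netpoll from hardirq: stay away from the
		 * ptr_ring and its _bh locking entirely.
		 */
	}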
I actually think this is a regression that we introduced recently via
the combination of these patches:
304db6cb7610 ("page_pool: refill page when alloc.count of pool is zero")
44768decb7c0 ("page_pool: handle page recycle for NUMA_NO_NODE condition")
I forgot that the in_serving_softirq() check was also protecting us
when getting called from netpoll. The general idea before was that if
the in_serving_softirq() check failed, then we fell through to the
slow path, calling the normal alloc_pages_node() (the regular page
allocator), as sketched below.
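
From memory, roughly (a simplified sketch, not the exact
net/core/page_pool.c code of that time):

	static struct page *__page_pool_get_cached(struct page_pool *pool)
	{
		/* Test for safe-context; the lockless fast path is
		 * only valid from NAPI/softirq context.
		 */
		if (likely(in_serving_softirq())) {
			if (likely(pool->alloc.count))
				return pool->alloc.cache[--pool->alloc.count];
			/* ... else refill alloc.cache from pool->ring ... */
		}

		/* Returning NULL here makes the caller fall through to
		 * the slow path: a plain alloc_pages_node() call.
		 */
		return NULL;
	}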
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer