Message-ID: <9b09170da05fb59bde7b003be282dfa63edd969e.camel@mellanox.com>
Date: Tue, 10 Mar 2020 02:30:34 +0000
From: Saeed Mahameed <saeedm@...lanox.com>
To: "jonathan.lemon@...il.com" <jonathan.lemon@...il.com>,
"davem@...emloft.net" <davem@...emloft.net>
CC: "kernel-team@...com" <kernel-team@...com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"ilias.apalodimas@...aro.org" <ilias.apalodimas@...aro.org>,
"brouer@...hat.com" <brouer@...hat.com>
Subject: Re: [PATCH] page_pool: use irqsave/irqrestore to protect ring access.
On Mon, 2020-03-09 at 17:55 -0700, David Miller wrote:
> From: Jonathan Lemon <jonathan.lemon@...il.com>
> Date: Mon, 9 Mar 2020 12:49:29 -0700
>
> > netpoll may be called from IRQ context, which may access the
> > page pool ring. The current _bh variants do not provide sufficient
> > protection, so use irqsave/restore instead.
> >
> > Error observed on a modified mlx4 driver, but the code path exists
> > for any driver which calls page_pool_recycle from napi poll.
> >
> > WARNING: CPU: 34 PID: 550248 at /ro/source/kernel/softirq.c:161
> > __local_bh_enable_ip+0x35/0x50
> > ...
> > Signed-off-by: Jonathan Lemon <jonathan.lemon@...il.com>
>
> The netpoll stuff always makes the locking more complicated than it
> needs to be. I wonder if there is another way around this issue?
>
> Because IRQ save/restore is a high cost to pay in this critical path.
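
[Editor's note: the cost difference Dave refers to can be caricatured in
userspace. This is a hedged sketch, not kernel code; the boolean flags
stand in for CPU interrupt-masking state. spin_lock_bh() only defers
softirq processing, which keeps it cheap on the NAPI fast path but makes
it unsafe when the caller is already in hardirq context, while
spin_lock_irqsave() additionally masks hardirqs, which is what makes it
netpoll-safe and also what makes it more expensive.]

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace stand-ins for the kernel locking variants under discussion.
 * They model only which interrupt contexts each variant masks while the
 * "lock" is held, not real spinlock behaviour. */
static bool softirqs_masked;
static bool hardirqs_masked;

static void spin_lock_bh(void)
{
    softirqs_masked = true;     /* cheap: only defers softirq processing */
}

static void spin_unlock_bh(void)
{
    softirqs_masked = false;
}

static void spin_lock_irqsave(unsigned long *flags)
{
    *flags = hardirqs_masked;   /* remember the caller's IRQ state ... */
    hardirqs_masked = true;     /* ... then mask hardirqs: the extra cost */
}

static void spin_unlock_irqrestore(unsigned long flags)
{
    hardirqs_masked = flags;    /* restore whatever the caller had */
}
```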
A printk inside IRQ context led to this, so maybe it can be avoided ..

Or, instead of checking in_serving_softirq(), change page_pool to check
in_interrupt(), which covers more contexts, to avoid the ptr_ring
locking and the complication with netpoll altogether.

I wonder why Jesper picked in_serving_softirq() in the first place.
Was there a specific reason, or did he just want the check to be as
loose as possible?
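
[Editor's note: a hedged userspace sketch of the suggested change.
hardirq_active/softirq_active are stand-in flags, not kernel state, and
the two recycle_direct_* helpers are hypothetical names caricaturing the
page_pool decision between the lockless per-CPU cache and the locked
ptr_ring; they are not the real page_pool functions.]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in flags for the kernel's preemption-context state. */
static bool hardirq_active;
static bool softirq_active;

/* in_serving_softirq() is true only while a softirq handler is running;
 * in_interrupt() is broader: hardirq, softirq or NMI context. */
static bool in_serving_softirq(void) { return softirq_active; }
static bool in_interrupt(void)       { return hardirq_active || softirq_active; }

/* Caricature of the current page_pool logic: only trust the lockless
 * per-CPU cache from softirq; otherwise fall back to the locked ptr_ring. */
static bool recycle_direct_current(void)
{
    return in_serving_softirq();
}

/* Suggested variant: test in_interrupt(), so a netpoll caller in hardirq
 * context is also kept away from the _bh-locked ptr_ring. */
static bool recycle_direct_suggested(void)
{
    return in_interrupt();
}
```

With only softirq_active set, both checks take the direct path; with
only hardirq_active set (the netpoll case in the warning above), the
current check falls through to the locked ring while the suggested one
does not.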