Message-ID: <773fd8246a1ec4ef79142d9e31b8ba4163a0d496.camel@gmx.de>
Date: Fri, 19 Nov 2021 15:41:25 +0100
From: Mike Galbraith <efault@....de>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
RT <linux-rt-users@...r.kernel.org>
Subject: [rfc/patch] netpoll: Make it RT friendly
On Thu, 2021-11-18 at 17:34 +0100, Sebastian Andrzej Siewior wrote:
> Dear RT folks!
>
> I'm pleased to announce the v5.16-rc1-rt2 patch set.
>
> Changes since v5.16-rc1-rt1:
>
> - Redo the delayed deallocation of the task-stack.
>
> Known issues
> - netconsole triggers WARN.
The below seems to do the trick for netconsole in master.today-rt.
netpoll: Make it RT friendly
PREEMPT_RT cannot alloc/free memory when not preemptible, so
zap_completion_queue() disabling preemption across kfree() is a
problem, as is disabling IRQs across __netpoll_send_skb(), which
leads to zap_completion_queue() and that same pesky kfree().
We can let rcu_read_lock_bh() provide local exclusion for RT across
__netpoll_send_skb() (via softirq_ctrl.lock) instead of disabling
IRQs, and since zap_completion_queue() replaces sd->completion_queue
with IRQs disabled and makes a private copy, there's no need to keep
preemption disabled while traversing/freeing that copy, so call
put_cpu_var() before doing so. Disable a couple of warnings for RT,
and we're done.
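
(For reference, the "local exclusion" bit above is a property of how
PREEMPT_RT implements local_bh_disable(). Very roughly, and purely as
an illustrative sketch rather than the actual kernel/softirq.c code,
with softirq_ctrl_lock standing in for the real softirq_ctrl.lock:)

#include <linux/local_lock.h>
#include <linux/percpu.h>

/*
 * Sketch only: on PREEMPT_RT, local_bh_disable() boils down to taking
 * a per-CPU local_lock, so BH-disabled sections (and therefore
 * rcu_read_lock_bh() sections) on a given CPU are serialized against
 * each other while the lock owner stays preemptible and may sleep.
 */
static DEFINE_PER_CPU(local_lock_t, softirq_ctrl_lock) =
	INIT_LOCAL_LOCK(softirq_ctrl_lock);

static void rt_bh_lock_sketch(void)	/* ~ local_bh_disable() on RT */
{
	local_lock(&softirq_ctrl_lock);
}

static void rt_bh_unlock_sketch(void)	/* ~ local_bh_enable() on RT */
{
	local_unlock(&softirq_ctrl_lock);
}

That gives __netpoll_send_skb() the same "no other sender on this CPU"
guarantee it used to get from local_irq_save(), without making the
section non-preemptible.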
Signed-off-by: Mike Galbraith <efault@....de>
---
net/core/netpoll.c | 23 +++++++++++++++--------
1 file changed, 15 insertions(+), 8 deletions(-)
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -252,6 +252,7 @@ static void zap_completion_queue(void)
clist = sd->completion_queue;
sd->completion_queue = NULL;
local_irq_restore(flags);
+ put_cpu_var(softnet_data);
while (clist != NULL) {
struct sk_buff *skb = clist;
@@ -263,9 +264,8 @@ static void zap_completion_queue(void)
__kfree_skb(skb);
}
}
- }
-
- put_cpu_var(softnet_data);
+ } else
+ put_cpu_var(softnet_data);
}
static struct sk_buff *find_skb(struct netpoll *np, int len, int reserve)
@@ -314,7 +314,8 @@ static netdev_tx_t __netpoll_send_skb(st
/* It is up to the caller to keep npinfo alive. */
struct netpoll_info *npinfo;
- lockdep_assert_irqs_disabled();
+ if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+ lockdep_assert_irqs_disabled();
dev = np->dev;
npinfo = rcu_dereference_bh(dev->npinfo);
@@ -350,7 +351,7 @@ static netdev_tx_t __netpoll_send_skb(st
udelay(USEC_PER_POLL);
}
- WARN_ONCE(!irqs_disabled(),
+ WARN_ONCE(!IS_ENABLED(CONFIG_PREEMPT_RT) && !irqs_disabled(),
"netpoll_send_skb_on_dev(): %s enabled interrupts in poll (%pS)\n",
dev->name, dev->netdev_ops->ndo_start_xmit);
@@ -365,16 +366,22 @@ static netdev_tx_t __netpoll_send_skb(st
netdev_tx_t netpoll_send_skb(struct netpoll *np, struct sk_buff *skb)
{
- unsigned long flags;
+ unsigned long __maybe_unused flags;
netdev_tx_t ret;
if (unlikely(!np)) {
dev_kfree_skb_irq(skb);
ret = NET_XMIT_DROP;
} else {
- local_irq_save(flags);
+ if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+ local_irq_save(flags);
+ else
+ rcu_read_lock_bh();
ret = __netpoll_send_skb(np, skb);
- local_irq_restore(flags);
+ if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+ local_irq_restore(flags);
+ else
+ rcu_read_unlock_bh();
}
return ret;
}