Message-ID: <CACVy4SUkfn4642Vne=c1yuWhne=2cutPZQ5XeXz_QBz1g67CrA@mail.gmail.com>
Date: Tue, 22 Oct 2019 17:22:04 -0700
From: Tom Rix <trix@...hat.com>
To: Steffen Klassert <steffen.klassert@...unet.com>,
herbert@...dor.apana.org.au, davem@...emloft.net,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Joerg Vehlow <lkml@...coder.de>
Subject: [PATCH v2 1/1] xfrm: lock input tasklet skb queue
On PREEMPT_RT_FULL, running netperf causes an oops through corruption
of the skb queue. This appears to be a race condition around

  __skb_queue_tail(&trans->queue, skb);
  tasklet_schedule(&trans->tasklet);

where the queue is modified before the tasklet is locked by
tasklet_schedule(). The fix is to take the skb queue lock.
This is based on the original work of Joerg Vehlow <joerg.vehlow@...-tech.de>:
https://lkml.org/lkml/2019/9/9/111

  xfrm_input: Protect queue with lock

  During skb_queue_splice_init() the tasklet could have been preempted
  and __skb_queue_tail() called, leaving the queue inconsistent.

The ifdefs for CONFIG_PREEMPT_RT_FULL are added to avoid runtime
effects on the normal kernel.
Signed-off-by: Tom Rix <trix@...hat.com>
---
net/xfrm/xfrm_input.c | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
index 9b599ed66d97..decd515f84cf 100644
--- a/net/xfrm/xfrm_input.c
+++ b/net/xfrm/xfrm_input.c
@@ -755,13 +755,21 @@ EXPORT_SYMBOL(xfrm_input_resume);
 
 static void xfrm_trans_reinject(unsigned long data)
 {
+#ifdef CONFIG_PREEMPT_RT_FULL
+	unsigned long flags;
+#endif
 	struct xfrm_trans_tasklet *trans = (void *)data;
 	struct sk_buff_head queue;
 	struct sk_buff *skb;
 
 	__skb_queue_head_init(&queue);
+#ifdef CONFIG_PREEMPT_RT_FULL
+	spin_lock_irqsave(&trans->queue.lock, flags);
+#endif
 	skb_queue_splice_init(&trans->queue, &queue);
-
+#ifdef CONFIG_PREEMPT_RT_FULL
+	spin_unlock_irqrestore(&trans->queue.lock, flags);
+#endif
 	while ((skb = __skb_dequeue(&queue)))
 		XFRM_TRANS_SKB_CB(skb)->finish(dev_net(skb->dev), NULL, skb);
 }
@@ -778,7 +786,11 @@ int xfrm_trans_queue(struct sk_buff *skb,
 		return -ENOBUFS;
 
 	XFRM_TRANS_SKB_CB(skb)->finish = finish;
+#ifdef CONFIG_PREEMPT_RT_FULL
+	skb_queue_tail(&trans->queue, skb);
+#else
 	__skb_queue_tail(&trans->queue, skb);
+#endif
 	tasklet_schedule(&trans->tasklet);
 	return 0;
 }
@@ -798,7 +810,11 @@ void __init xfrm_input_init(void)
 		struct xfrm_trans_tasklet *trans;
 
 		trans = &per_cpu(xfrm_trans_tasklet, i);
+#ifdef CONFIG_PREEMPT_RT_FULL
+		skb_queue_head_init(&trans->queue);
+#else
 		__skb_queue_head_init(&trans->queue);
+#endif
 		tasklet_init(&trans->tasklet, xfrm_trans_reinject,
 			     (unsigned long)trans);
 	}
--
2.23.0