Message-Id: <20210115122010.650109709@linuxfoundation.org>
Date: Fri, 15 Jan 2021 13:28:29 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Xuan Zhuo <xuanzhuo@...ux.alibaba.com>,
Magnus Karlsson <magnus.karlsson@...el.com>,
Daniel Borkmann <daniel@...earbox.net>,
Björn Töpel <bjorn.topel@...el.com>
Subject: [PATCH 5.10 096/103] xsk: Rollback reservation at NETDEV_TX_BUSY
From: Magnus Karlsson <magnus.karlsson@...el.com>
commit b1b95cb5c0a9694d47d5f845ba97e226cfda957d upstream.
Roll back the reservation in the completion ring when we get a
NETDEV_TX_BUSY. When this error is returned by the driver, we are
supposed to let the user application retry the transmit. To allow
this, we need to roll back the failed send so it can be retried.
Unfortunately, we did not cancel the reservation we had made in the
completion ring. By not doing this, we make the completion ring one
entry smaller for every NETDEV_TX_BUSY error we get, and after enough
of these errors the completion ring will have shrunk to size zero and
transmit will stop working.

Fix this by cancelling the reservation when we get a NETDEV_TX_BUSY
error.
Fixes: 642e450b6b59 ("xsk: Do not discard packet when NETDEV_TX_BUSY")
Reported-by: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
Signed-off-by: Magnus Karlsson <magnus.karlsson@...el.com>
Signed-off-by: Daniel Borkmann <daniel@...earbox.net>
Acked-by: Björn Töpel <bjorn.topel@...el.com>
Link: https://lore.kernel.org/bpf/20201218134525.13119-3-magnus.karlsson@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
net/xdp/xsk.c | 3 +++
net/xdp/xsk_queue.h | 5 +++++
2 files changed, 8 insertions(+)
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -428,6 +428,9 @@ static int xsk_generic_xmit(struct sock
 		if (err == NETDEV_TX_BUSY) {
 			/* Tell user-space to retry the send */
 			skb->destructor = sock_wfree;
+			spin_lock_irqsave(&xs->pool->cq_lock, flags);
+			xskq_prod_cancel(xs->pool->cq);
+			spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
 			/* Free skb without triggering the perf drop trace */
 			consume_skb(skb);
 			err = -EAGAIN;
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -286,6 +286,11 @@ static inline bool xskq_prod_is_full(str
 	return !free_entries;
 }
 
+static inline void xskq_prod_cancel(struct xsk_queue *q)
+{
+	q->cached_prod--;
+}
+
 static inline int xskq_prod_reserve(struct xsk_queue *q)
 {
 	if (xskq_prod_is_full(q))
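
[Editor's note] For readers unfamiliar with the producer-ring bookkeeping, below is a
minimal, self-contained userspace sketch (a hypothetical simplified ring with made-up
helper names, not the kernel's struct xsk_queue) of how a reserve/cancel pair behaves,
and why skipping the cancel on every NETDEV_TX_BUSY permanently leaks one slot of
capacity:

	#include <stdbool.h>
	#include <stdio.h>

	#define RING_SIZE 4

	/* Simplified single-producer ring: only the fields the demo needs. */
	struct ring {
		unsigned int cached_prod;	/* next slot the producer will claim */
		unsigned int cons;		/* consumer position */
	};

	static bool ring_is_full(const struct ring *q)
	{
		return q->cached_prod - q->cons == RING_SIZE;
	}

	/* Analogous to xskq_prod_reserve(): claim a slot, fail if full. */
	static int ring_reserve(struct ring *q)
	{
		if (ring_is_full(q))
			return -1;
		q->cached_prod++;
		return 0;
	}

	/* Analogous to xskq_prod_cancel(): hand a claimed-but-unused slot back. */
	static void ring_cancel(struct ring *q)
	{
		q->cached_prod--;
	}

	int main(void)
	{
		struct ring q = { 0, 0 };
		int i;

		/* Failed sends that never cancel: each failure leaks one
		 * reserved slot, so after RING_SIZE failures the ring reports
		 * full even though nothing was ever produced or consumed.
		 */
		for (i = 0; i < RING_SIZE; i++)
			ring_reserve(&q);
		printf("no cancel: full=%d\n", ring_is_full(&q));	/* full=1 */

		/* With the fix, every failure rolls the reservation back, so
		 * the ring keeps its capacity and the send can be retried.
		 */
		ring_cancel(&q);
		printf("one cancel: full=%d\n", ring_is_full(&q));	/* full=0 */
		return 0;
	}

The sketch omits locking; in the actual patch the cancel runs under xs->pool->cq_lock
because the completion queue is shared.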