Message-Id: <20210520092052.506301027@linuxfoundation.org>
Date: Thu, 20 May 2021 11:22:28 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Magnus Karlsson <magnus.karlsson@...el.com>,
Alexei Starovoitov <ast@...nel.org>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.4 07/37] xsk: Simplify detection of empty and full rings
From: Magnus Karlsson <magnus.karlsson@...el.com>
[ Upstream commit 11cc2d21499cabe7e7964389634ed1de3ee91d33 ]
In order to set the correct return flags for poll, the xsk code has to
check whether the Rx queue is empty and whether the Tx queue is full.
This code was unnecessarily large and complex because it reused the
functions that update the local queue state from the global state
(xskq_nb_free and xskq_nb_avail). Since these checks neither update
that state nor touch any data that depends on it, they can be reduced
to plain comparisons of the producer and consumer counters. A further
benefit is that xskq_nb_free and xskq_nb_avail themselves can be
simplified in a later commit.
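
As a quick illustration of the resulting logic, here is a minimal
user-space sketch (illustrative names and types only, not the kernel
code itself) of how fullness and emptiness fall out of free-running
producer/consumer counters on a fixed-size ring:

/* Illustrative single-producer/single-consumer ring with free-running
 * 32-bit counters.  The counters are never masked, so unsigned
 * subtraction yields the number of filled entries even after the
 * counters wrap around.
 */
#include <stdbool.h>
#include <stdint.h>

struct demo_ring {
	uint32_t producer;	/* advanced only by the producer */
	uint32_t consumer;	/* advanced only by the consumer */
	uint32_t nentries;	/* ring capacity */
};

static inline bool demo_ring_full(const struct demo_ring *r)
{
	/* Full when the producer is exactly nentries ahead. */
	return r->producer - r->consumer == r->nentries;
}

static inline bool demo_ring_empty(const struct demo_ring *r)
{
	/* Empty when both counters point at the same slot. */
	return r->producer == r->consumer;
}

In the kernel code the loads additionally go through READ_ONCE()
because producer and consumer are updated concurrently, but, as the
comments in the patch note, no stronger barriers are needed since the
descriptor data itself is not accessed here.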
Signed-off-by: Magnus Karlsson <magnus.karlsson@...el.com>
Signed-off-by: Alexei Starovoitov <ast@...nel.org>
Link: https://lore.kernel.org/bpf/1576759171-28550-3-git-send-email-magnus.karlsson@intel.com
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
net/xdp/xsk_queue.h | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
index eddae4688862..ee3f8c857dd8 100644
--- a/net/xdp/xsk_queue.h
+++ b/net/xdp/xsk_queue.h
@@ -363,12 +363,15 @@ static inline void xskq_produce_flush_desc(struct xsk_queue *q)
 
 static inline bool xskq_full_desc(struct xsk_queue *q)
 {
-	return xskq_nb_avail(q, q->nentries) == q->nentries;
+	/* No barriers needed since data is not accessed */
+	return READ_ONCE(q->ring->producer) - READ_ONCE(q->ring->consumer) ==
+	       q->nentries;
 }
 
 static inline bool xskq_empty_desc(struct xsk_queue *q)
 {
-	return xskq_nb_free(q, q->prod_tail, q->nentries) == q->nentries;
+	/* No barriers needed since data is not accessed */
+	return READ_ONCE(q->ring->consumer) == READ_ONCE(q->ring->producer);
 }
 
 void xskq_set_umem(struct xsk_queue *q, u64 size, u64 chunk_mask);
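
The commit message refers to the poll path that consumes these two
helpers. As a hedged sketch (the function name and parameters here are
illustrative, not copied from net/xdp/xsk.c), a poll handler can map
the ring state onto poll flags like this:

/* Illustrative only: report the socket readable when the Rx ring has
 * descriptors to consume and writable while the Tx ring still has room.
 * Assumes the xsk_queue helpers above and <linux/poll.h>.
 */
static __poll_t demo_xsk_poll_mask(struct xsk_queue *rx, struct xsk_queue *tx)
{
	__poll_t mask = 0;

	if (rx && !xskq_empty_desc(rx))
		mask |= EPOLLIN | EPOLLRDNORM;
	if (tx && !xskq_full_desc(tx))
		mask |= EPOLLOUT | EPOLLWRNORM;

	return mask;
}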
--
2.30.2