Message-Id: <20240208-upstream-net-20240202-locking-cleanup-misc-v1-1-f75cc5b97e5a@kernel.org>
Date: Thu, 08 Feb 2024 19:03:49 +0100
From: "Matthieu Baerts (NGI0)" <matttbe@...nel.org>
To: mptcp@...ts.linux.dev, Mat Martineau <martineau@...nel.org>,
Geliang Tang <geliang@...nel.org>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Kishen Maloor <kishen.maloor@...el.com>,
Florian Westphal <fw@...len.de>,
Peter Krystad <peter.krystad@...ux.intel.com>,
Dmytro Shytyi <dmytro@...tyi.net>,
Benjamin Hesmans <benjamin.hesmans@...sares.net>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Christoph Paasch <cpaasch@...le.com>,
"Matthieu Baerts (NGI0)" <matttbe@...nel.org>, stable@...r.kernel.org
Subject: [PATCH net 1/7] mptcp: drop the push_pending field
From: Paolo Abeni <pabeni@...hat.com>
Such a field is there to avoid acquiring the data lock in a few spots, but
it adds complexity to the already non-trivial locking schema.

All the relevant call sites (MPTCP-level re-injection, setting socket
options) are slow paths; drop the field in favor of 'cb_flags', adding
the relevant locking.
This patch could be seen as an improvement rather than a fix, but it
simplifies the next patch. The 'Fixes' tag has been added to help get
this series backported to stable.
Fixes: e9d09baca676 ("mptcp: avoid atomic bit manipulation when possible")
Cc: stable@...r.kernel.org
Signed-off-by: Paolo Abeni <pabeni@...hat.com>
Reviewed-by: Mat Martineau <martineau@...nel.org>
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@...nel.org>
---
net/mptcp/protocol.c | 12 ++++++------
net/mptcp/protocol.h | 1 -
2 files changed, 6 insertions(+), 7 deletions(-)
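
As an illustration of the locking pattern the patch moves to, here is a
minimal, self-contained user-space sketch (not the kernel code itself; all
names below, such as demo_sock and demo_check_and_set_pending, are
hypothetical): the deferred-work bit is ORed into a single lock-protected
flags word under the data lock, and the release callback consumes and
clears it under the same lock, instead of keeping a second, lock-free
'push_pending' word.

/*
 * Illustrative user-space sketch of the pattern adopted by this patch:
 * one flags word, only ever updated under the data lock, instead of a
 * separate lock-free "push_pending" field.  Hypothetical names, built
 * with pthreads; this is not kernel code.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define DEMO_PUSH_PENDING	0x1UL

struct demo_sock {
	pthread_spinlock_t data_lock;	/* stands in for mptcp_data_lock() */
	unsigned long cb_flags;		/* deferred-work bits, lock protected */
};

/* Slow path: mark deferred work, taking the data lock as the patch does. */
static void demo_check_and_set_pending(struct demo_sock *sk, bool has_data)
{
	if (has_data) {
		pthread_spin_lock(&sk->data_lock);
		sk->cb_flags |= DEMO_PUSH_PENDING;
		pthread_spin_unlock(&sk->data_lock);
	}
}

/* Release callback: read and clear the pending bits under the same lock. */
static unsigned long demo_release_cb(struct demo_sock *sk)
{
	unsigned long flags;

	pthread_spin_lock(&sk->data_lock);
	flags = sk->cb_flags;
	sk->cb_flags &= ~flags;
	pthread_spin_unlock(&sk->data_lock);
	return flags;
}

int main(void)
{
	struct demo_sock sk = { .cb_flags = 0 };

	pthread_spin_init(&sk.data_lock, PTHREAD_PROCESS_PRIVATE);
	demo_check_and_set_pending(&sk, true);
	printf("pending flags: %#lx\n", demo_release_cb(&sk));
	pthread_spin_destroy(&sk.data_lock);
	return 0;
}

The trade-off mirrors the commit message: the slow paths now pay for a
short lock/unlock pair, in exchange for a single, consistently locked
flags word and a simpler locking schema.
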
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 028e8b473626..2111819016af 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1505,8 +1505,11 @@ static void mptcp_update_post_push(struct mptcp_sock *msk,
void mptcp_check_and_set_pending(struct sock *sk)
{
- if (mptcp_send_head(sk))
- mptcp_sk(sk)->push_pending |= BIT(MPTCP_PUSH_PENDING);
+ if (mptcp_send_head(sk)) {
+ mptcp_data_lock(sk);
+ mptcp_sk(sk)->cb_flags |= BIT(MPTCP_PUSH_PENDING);
+ mptcp_data_unlock(sk);
+ }
}
static int __subflow_push_pending(struct sock *sk, struct sock *ssk,
@@ -3142,7 +3145,6 @@ static int mptcp_disconnect(struct sock *sk, int flags)
mptcp_destroy_common(msk, MPTCP_CF_FASTCLOSE);
WRITE_ONCE(msk->flags, 0);
msk->cb_flags = 0;
- msk->push_pending = 0;
msk->recovery = false;
msk->can_ack = false;
msk->fully_established = false;
@@ -3330,8 +3332,7 @@ static void mptcp_release_cb(struct sock *sk)
struct mptcp_sock *msk = mptcp_sk(sk);
for (;;) {
- unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED) |
- msk->push_pending;
+ unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED);
struct list_head join_list;
if (!flags)
@@ -3347,7 +3348,6 @@ static void mptcp_release_cb(struct sock *sk)
* datapath acquires the msk socket spinlock while helding
* the subflow socket lock
*/
- msk->push_pending = 0;
msk->cb_flags &= ~flags;
spin_unlock_bh(&sk->sk_lock.slock);
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 3517f2d24a22..b905f1868298 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -286,7 +286,6 @@ struct mptcp_sock {
int rmem_released;
unsigned long flags;
unsigned long cb_flags;
- unsigned long push_pending;
bool recovery; /* closing subflow write queue reinjected */
bool can_ack;
bool fully_established;
--
2.43.0