Message-ID: <01100196ba916f60-f2642e95-026a-4ba3-bd32-f871d781c2d6-000000@eu-north-1.amazonses.com>
Date: Sat, 10 May 2025 14:20:15 +0000
From: Ozgur Kara <ozgur@...sey.org>
To: John Fastabend <john.fastabend@...il.com>,
Jakub Sitnicki <jakub@...udflare.com>,
Kuniyuki Iwashima <kuniyu@...zon.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>,
netdev@...r.kernel.org, bpf@...r.kernel.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: [PATCH] net: fix unix socket bpf implementation: ensure reliable
wake-up signaling
From: Ozgur Kara <ozgur@...sey.org>

This patch addresses a race condition in the unix socket BPF
implementation where wake-up signals could be missed. Specifically,
after the mutex is released (`mutex_unlock(&u->iolock)`) and before it
is acquired again (`mutex_lock(&u->iolock)`), another thread can
insert data and issue a wake-up. If that wake-up arrives before
`wait_woken()` is called, it may be lost and leave the waiting thread
blocked unnecessarily.
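
For reference, the window in question looks roughly like this in the
current code (simplified from unix_msg_wait_data(); only the relevant
lines are shown):

	add_wait_queue(sk_sleep(sk), &wait);
	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
	if (!unix_sk_has_data(sk, psock)) {
		mutex_unlock(&u->iolock);
		/* another thread may queue data and issue its wake-up
		 * right here, before wait_woken() has been entered
		 */
		wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
		mutex_lock(&u->iolock);
		ret = unix_sk_has_data(sk, psock);
	}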

To fix this, the patch sets the task state with
`set_current_state(TASK_INTERRUPTIBLE)` before the lock is dropped and
then sleeps via `schedule_timeout()`, so a wake-up delivered in the
unlocked window is not missed. This prevents unnecessary blocking and
reduces the risk of potential deadlocks in high-load or
multi-processor environments.
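
The resulting wait sequence follows the usual set-state-before-unlock
idiom; a condensed sketch of the new flow (see the full diff below):

	set_current_state(TASK_INTERRUPTIBLE);	/* mark ourselves as sleeping first */
	mutex_unlock(&u->iolock);	/* a concurrent wake_up() now just sets us back to running */

	if (!schedule_timeout(timeo))
		ret = 0;	/* timed out */
	else
		ret = signal_pending(current) ? -ERESTARTSYS : 1;

	mutex_lock(&u->iolock);
	__set_current_state(TASK_RUNNING);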

Such race conditions can lead to performance degradation or, in rare
cases, deadlocks, especially under heavy load or on multi-CPU systems
where the problem is difficult to reproduce.

There was also a stray whitespace-only line at the end of
unix_bpf_build_proto(), so a checkpatch cleanup is included as well :)

Signed-off-by: Ozgur Kara <ozgur@...sey.org>
---
diff --git a/net/unix/unix_bpf.c b/net/unix/unix_bpf.c
index e0d30d6d22ac..04f2b38803d2 100644
--- a/net/unix/unix_bpf.c
+++ b/net/unix/unix_bpf.c
@@ -26,14 +26,29 @@ static int unix_msg_wait_data(struct sock *sk, struct sk_psock *psock,
 	if (!timeo)
 		return ret;
 
+	/* register on the socket's wait queue before checking for data */
 	add_wait_queue(sk_sleep(sk), &wait);
 	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+
+	/* re-check for data while still holding the lock */
 	if (!unix_sk_has_data(sk, psock)) {
+		set_current_state(TASK_INTERRUPTIBLE);
 		mutex_unlock(&u->iolock);
-		wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
+
+		if (!schedule_timeout(timeo))
+			ret = 0; /* timed out */
+		else
+			ret = signal_pending(current) ? -ERESTARTSYS : 1;
+
 		mutex_lock(&u->iolock);
-		ret = unix_sk_has_data(sk, psock);
+
+		if (ret > 0)
+			ret = unix_sk_has_data(sk, psock);
+	} else {
+		ret = 1; /* data already available */
 	}
+
+	__set_current_state(TASK_RUNNING);
 	sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
 	remove_wait_queue(sk_sleep(sk), &wait);
 	return ret;
@@ -198,6 +213,5 @@ void __init unix_bpf_build_proto(void)
 {
 	unix_dgram_bpf_rebuild_protos(&unix_dgram_bpf_prot, &unix_dgram_proto);
 	unix_stream_bpf_rebuild_protos(&unix_stream_bpf_prot,
 				       &unix_stream_proto);
-
 }
--