Message-ID: <20240830082518.23243-1-Tze-nan.Wu@mediatek.com>
Date: Fri, 30 Aug 2024 16:25:17 +0800
From: Tze-nan Wu <Tze-nan.Wu@...iatek.com>
To: <netdev@...r.kernel.org>, <bpf@...r.kernel.org>, Alexei Starovoitov
<ast@...nel.org>, Stanislav Fomichev <sdf@...ichev.me>,
<alexei.starovoitov@...il.com>
CC: <bobule.chang@...iatek.com>, <wsd_upstream@...iatek.com>,
<linux-kernel@...r.kernel.org>, <linux-mediatek@...ts.infradead.org>,
Kuniyuki Iwashima <kuniyu@...zon.com>, <chen-yao.chang@...iatek.com>, Tze-nan
Wu <Tze-nan.Wu@...iatek.com>, Yanghui Li <yanghui.li@...iatek.com>, Cheng-Jui
Wang <cheng-jui.wang@...iatek.com>, Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>, Andrii Nakryiko
<andrii@...nel.org>, Martin KaFai Lau <martin.lau@...ux.dev>, Eduard
Zingerman <eddyz87@...il.com>, Song Liu <song@...nel.org>, Yonghong Song
<yonghong.song@...ux.dev>, KP Singh <kpsingh@...nel.org>, Hao Luo
<haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski
<kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Matthias Brugger
<matthias.bgg@...il.com>, AngeloGioacchino Del Regno
<angelogioacchino.delregno@...labora.com>,
<linux-arm-kernel@...ts.infradead.org>
Subject: [PATCH net v5] bpf, net: Fix a potential race in do_sock_getsockopt()
There's a potential race when `cgroup_bpf_enabled(CGROUP_GETSOCKOPT)` is
false during the execution of `BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN`, but
becomes true when `BPF_CGROUP_RUN_PROG_GETSOCKOPT` is called.
This inconsistency can cause `BPF_CGROUP_RUN_PROG_GETSOCKOPT` to receive
-EFAULT from `__cgroup_bpf_run_filter_getsockopt(max_optlen=0)`.

The scenario is shown below:
`process A`                              `process B`
-----------                              ------------
BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN
                                         enable CGROUP_GETSOCKOPT
BPF_CGROUP_RUN_PROG_GETSOCKOPT (-EFAULT)
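
For illustration, the pre-patch path in do_sock_getsockopt() looks
roughly like this (simplified sketch; the macro expansion and call site
are paraphrased, not verbatim kernel code):

  /* net/socket.c, before this patch (simplified) */
  int max_optlen __maybe_unused;
  ...
  if (!compat)
          /* check #1: cgroup_bpf_enabled(CGROUP_GETSOCKOPT) is still
           * false here, so the macro skips copy_from_sockptr() and
           * returns 0; max_optlen ends up 0 instead of the
           * user-supplied length.
           */
          max_optlen = BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen);
  ...
  if (!compat)
          /* check #2: cgroup_bpf_enabled(CGROUP_GETSOCKOPT) has become
           * true in the meantime, so the hook runs with max_optlen == 0
           * and __cgroup_bpf_run_filter_getsockopt() fails with -EFAULT.
           */
          err = BPF_CGROUP_RUN_PROG_GETSOCKOPT(sock->sk, level, optname,
                                               optval, optlen, max_optlen,
                                               err);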

To resolve this, remove the `BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN` macro and
directly use `copy_from_sockptr` to ensure that `max_optlen` is always
set before `BPF_CGROUP_RUN_PROG_GETSOCKOPT` is invoked.
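
With the patch applied, the relevant part of do_sock_getsockopt() reads
roughly as follows (simplified); the unconditional copy means the later
hook invocation always sees the length the user actually passed, and the
`= 0` initializer keeps a sane default if the copy is skipped (compat)
or fails:

  /* net/socket.c, after this patch (simplified) */
  int max_optlen __maybe_unused = 0;
  ...
  if (!compat)
          /* always populate max_optlen, regardless of whether the
           * cgroup getsockopt hook is currently enabled
           */
          copy_from_sockptr(&max_optlen, optlen, sizeof(int));
  ...
  if (!compat)
          err = BPF_CGROUP_RUN_PROG_GETSOCKOPT(sock->sk, level, optname,
                                               optval, optlen, max_optlen,
                                               err);
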
Fixes: 0d01da6afc54 ("bpf: implement getsockopt and setsockopt hooks")
Co-developed-by: Yanghui Li <yanghui.li@...iatek.com>
Signed-off-by: Yanghui Li <yanghui.li@...iatek.com>
Co-developed-by: Cheng-Jui Wang <cheng-jui.wang@...iatek.com>
Signed-off-by: Cheng-Jui Wang <cheng-jui.wang@...iatek.com>
Signed-off-by: Tze-nan Wu <Tze-nan.Wu@...iatek.com>
---
Changes from v1 to v2: https://lore.kernel.org/all/20240819082513.27176-1-Tze-nan.Wu@mediatek.com/
Instead of taking cgroup_lock in the fast path, invoke cgroup_bpf_enabled
only once and cache the value in the newly added variable `enabled`.
The `BPF_CGROUP_*` macros in do_sock_getsockopt can then both check their
condition against `enabled`, ensuring that either both pass the condition
or neither does.
Changes from v2 to v3: https://lore.kernel.org/all/20240819155627.1367-1-Tze-nan.Wu@mediatek.com/
Hide cgroup_bpf_enabled inside the macro, and make some modifications to
match the coding style.
Changes from v3 to v4: https://lore.kernel.org/all/20240820092942.16654-1-Tze-nan.Wu@mediatek.com/
Add bpf tag to subject, and add the Fixes tag in body.
Changes from v4 to v5: https://lore.kernel.org/all/20240821093016.2533-1-Tze-nan.Wu@mediatek.com/
Change the subject, and fix the race with an unconditional
`copy_from_sockptr` instead of adding a new variable.
---
include/linux/bpf-cgroup.h | 9 ---------
net/socket.c | 4 ++--
2 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index fb3c3e7181e6..ce91d9b2acb9 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -390,14 +390,6 @@ static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
__ret; \
})
-#define BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen) \
-({ \
- int __ret = 0; \
- if (cgroup_bpf_enabled(CGROUP_GETSOCKOPT)) \
- copy_from_sockptr(&__ret, optlen, sizeof(int)); \
- __ret; \
-})
-
#define BPF_CGROUP_RUN_PROG_GETSOCKOPT(sock, level, optname, optval, optlen, \
max_optlen, retval) \
({ \
@@ -518,7 +510,6 @@ static inline int bpf_percpu_cgroup_storage_update(struct bpf_map *map,
#define BPF_CGROUP_RUN_PROG_SOCK_OPS(sock_ops) ({ 0; })
#define BPF_CGROUP_RUN_PROG_DEVICE_CGROUP(atype, major, minor, access) ({ 0; })
#define BPF_CGROUP_RUN_PROG_SYSCTL(head,table,write,buf,count,pos) ({ 0; })
-#define BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen) ({ 0; })
#define BPF_CGROUP_RUN_PROG_GETSOCKOPT(sock, level, optname, optval, \
optlen, max_optlen, retval) ({ retval; })
#define BPF_CGROUP_RUN_PROG_GETSOCKOPT_KERN(sock, level, optname, optval, \
diff --git a/net/socket.c b/net/socket.c
index fcbdd5bc47ac..0a2bd22ec105 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -2362,7 +2362,7 @@ INDIRECT_CALLABLE_DECLARE(bool tcp_bpf_bypass_getsockopt(int level,
int do_sock_getsockopt(struct socket *sock, bool compat, int level,
int optname, sockptr_t optval, sockptr_t optlen)
{
- int max_optlen __maybe_unused;
+ int max_optlen __maybe_unused = 0;
const struct proto_ops *ops;
int err;
@@ -2371,7 +2371,7 @@ int do_sock_getsockopt(struct socket *sock, bool compat, int level,
return err;
if (!compat)
- max_optlen = BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen);
+ copy_from_sockptr(&max_optlen, optlen, sizeof(int));
ops = READ_ONCE(sock->ops);
if (level == SOL_SOCKET) {
--
2.45.2