Message-ID: <20220727060915.2372520-1-kafai@fb.com>
Date: Tue, 26 Jul 2022 23:09:15 -0700
From: Martin KaFai Lau <kafai@...com>
To: <bpf@...r.kernel.org>, <netdev@...r.kernel.org>
CC: Alexei Starovoitov <ast@...nel.org>,
Andrii Nakryiko <andrii@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
David Miller <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, <kernel-team@...com>,
Paolo Abeni <pabeni@...hat.com>
Subject: [PATCH bpf-next 03/14] bpf: net: Consider optval.is_bpf before capable check in sock_setsockopt()
When a bpf program calls bpf_setsockopt(SOL_SOCKET),
it may run in softirq context, where it does not make sense to do
the capable check. There was a similar situation in
bpf_setsockopt(TCP_CONGESTION): commit 8d650cdedaab
("tcp: fix tcp_set_congestion_control() use from bpf hook") added a
cap_net_admin argument to tcp_set_congestion_control(...) to skip
the capability check for bpf progs.
This patch makes a similar change for SO_MARK, SO_PRIORITY,
and SO_BINDTO{DEVICE,IFINDEX}, which are the optnames allowed by
bpf_setsockopt(SOL_SOCKET). This will allow sock_setsockopt()
to be reused by bpf_setsockopt(SOL_SOCKET) in a later patch.
Signed-off-by: Martin KaFai Lau <kafai@...com>
---
net/core/sock.c | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/net/core/sock.c b/net/core/sock.c
index 61d927a5f6cb..f2c582491d5f 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -620,7 +620,7 @@ struct dst_entry *sk_dst_check(struct sock *sk, u32 cookie)
}
EXPORT_SYMBOL(sk_dst_check);
-static int sock_bindtoindex_locked(struct sock *sk, int ifindex)
+static int sock_bindtoindex_locked(struct sock *sk, int ifindex, bool cap_check)
{
int ret = -ENOPROTOOPT;
#ifdef CONFIG_NETDEVICES
@@ -628,7 +628,8 @@ static int sock_bindtoindex_locked(struct sock *sk, int ifindex)
/* Sorry... */
ret = -EPERM;
- if (sk->sk_bound_dev_if && !ns_capable(net->user_ns, CAP_NET_RAW))
+ if (sk->sk_bound_dev_if && cap_check &&
+ !ns_capable(net->user_ns, CAP_NET_RAW))
goto out;
ret = -EINVAL;
@@ -656,7 +657,7 @@ int sock_bindtoindex(struct sock *sk, int ifindex, bool lock_sk)
if (lock_sk)
lock_sock(sk);
- ret = sock_bindtoindex_locked(sk, ifindex);
+ ret = sock_bindtoindex_locked(sk, ifindex, true);
if (lock_sk)
release_sock(sk);
@@ -704,7 +705,7 @@ static int sock_setbindtodevice(struct sock *sk, sockptr_t optval, int optlen)
}
lock_sock_sockopt(sk, optval);
- ret = sock_bindtoindex_locked(sk, index);
+ ret = sock_bindtoindex_locked(sk, index, !optval.is_bpf);
release_sock_sockopt(sk, optval);
out:
#endif
@@ -1166,6 +1167,7 @@ int sock_setsockopt(struct sock *sk, int level, int optname,
case SO_PRIORITY:
if ((val >= 0 && val <= 6) ||
+ optval.is_bpf ||
ns_capable(sock_net(sk)->user_ns, CAP_NET_RAW) ||
ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN))
sk->sk_priority = val;
@@ -1312,7 +1314,8 @@ int sock_setsockopt(struct sock *sk, int level, int optname,
clear_bit(SOCK_PASSSEC, &sock->flags);
break;
case SO_MARK:
- if (!ns_capable(sock_net(sk)->user_ns, CAP_NET_RAW) &&
+ if (!optval.is_bpf &&
+ !ns_capable(sock_net(sk)->user_ns, CAP_NET_RAW) &&
!ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN)) {
ret = -EPERM;
break;
@@ -1456,7 +1459,7 @@ int sock_setsockopt(struct sock *sk, int level, int optname,
break;
case SO_BINDTOIFINDEX:
- ret = sock_bindtoindex_locked(sk, val);
+ ret = sock_bindtoindex_locked(sk, val, !optval.is_bpf);
break;
case SO_BUF_LOCK:
--
2.30.2