Message-ID: <20190302161010.2478707-1-kafai@fb.com>
Date:   Sat, 2 Mar 2019 08:10:10 -0800
From:   Martin KaFai Lau <kafai@...com>
To:     <netdev@...r.kernel.org>
CC:     Alexei Starovoitov <ast@...com>,
        Daniel Borkmann <daniel@...earbox.net>, <kernel-team@...com>,
        Lorenz Bauer <lmb@...udflare.com>
Subject: [PATCH v3 bpf-next 1/2] bpf: Fix bpf_tcp_sock and bpf_sk_fullsock issue related to bpf_sk_release

Lorenz Bauer [thanks!] reported that a ptr returned by bpf_tcp_sock(sk)
can still be accessed after bpf_sk_release(sk).
Both bpf_tcp_sock() and bpf_sk_fullsock() have the same issue.
This patch addresses them together.

A simple reproducer looks like this:

sk = bpf_sk_lookup_tcp();
/* if (!sk) ... */
tp = bpf_tcp_sock(sk);
/* if (!tp) ... */
bpf_sk_release(sk);
snd_cwnd = tp->snd_cwnd; /* oops! The verifier does not complain. */

The problem is that the verifier did not scrub the register states of
the tcp_sock ptr (tp) after bpf_sk_release(sk).

[ Note that when calling bpf_tcp_sock(sk), the sk is not always
  refcount-acquired. e.g. bpf_tcp_sock(skb->sk). The verifier works
  fine for this case. ]
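
In the same shorthand, the non-refcounted case mentioned in the note
looks like this (illustrative only; skb->sk still needs its own NULL
check):

sk = skb->sk;
/* if (!sk) ... */
tp = bpf_tcp_sock(sk);
/* if (!tp) ... */
snd_cwnd = tp->snd_cwnd; /* ok: no refcount was acquired, nothing to release */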

Currently, the verifier does not track whether a helper's return ptr
(in REG_0) carries the refcount status of one of its arguments.  To
carry this info, the reg1->id needs to be stored in reg0.  The reg0->id
is already used for NULL-checking purposes.  Hence, a new "refcount_id"
is needed in "struct bpf_reg_state".

With refcount_id, when bpf_sk_release(sk) is called, the verifier can
scrub all reg states that have a matching refcount_id.  This is done by
the changes in release_reg_references().

When acquiring and releasing a refcount, the reg->id is still used.
Hence, we cannot do "bpf_sk_release(tp)" in the above reproducer
example.
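
For completeness, a pattern the verifier accepts after this change, in
the same shorthand (the ordering is what matters; error handling is
elided):

sk = bpf_sk_lookup_tcp();
/* if (!sk) ... */
tp = bpf_tcp_sock(sk);
/* if (!tp) { bpf_sk_release(sk); ... } */
snd_cwnd = tp->snd_cwnd; /* read tp before the release */
bpf_sk_release(sk);      /* release sk itself, not tp */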

Misc change notes:
- With the new refcount_id, the reg_is_refcounted() test can now be
  done with "reg->refcount_id && reg->id == reg->refcount_id" instead
  of testing the ptr type.

  type_is_refcounted() and type_is_refcounted_or_null() are no longer
  needed, so they are removed.

- An anonymous struct is added to bpf_call_arg_meta to store
  the reg->id and reg->refcount_id of the arg.  Otherwise, they
  would be unavailable after check_helper_call() has cleared all
  CALLER_SAVED_REGS.

- check_func_arg() can only allow one refcount-ed arg.  This is
  guaranteed by check_refcount_ok(), which ensures at most one arg can
  be refcount-ed.  Hence, it is a verifier internal error if more than
  one refcount-ed arg is found in check_func_arg().

- check_func_arg() also complains if an "is_acquire_function(func_id)"
  helper takes a refcount-ed arg.  No func_id does this now, and it
  should have been rejected earlier anyway, so it is also treated as a
  verifier internal error.

- In check_func_arg(), the "!reg->id" check is removed under
  the ARG_PTR_TO_SOCKET case.  This is because a PTR_TO_SOCKET
  can be obtained from bpf_sk_fullsock(), which does not take
  a refcount.  The verifier will still complain during
  release_reference(), but it no longer treats it as a
  verifier internal error.

- In release_reference(), release_reference_state() is called
  first to ensure a match on "reg->id" can be found before
  scrubbing the reg states with release_reg_references().

Fixes: 655a51e536c0 ("bpf: Add struct bpf_tcp_sock and BPF_FUNC_tcp_sock")
Cc: Lorenz Bauer <lmb@...udflare.com>
Reported-by: Lorenz Bauer <lmb@...udflare.com>
Signed-off-by: Martin KaFai Lau <kafai@...com>
---
 include/linux/bpf_verifier.h |   9 +++
 kernel/bpf/verifier.c        | 107 ++++++++++++++++++++++-------------
 2 files changed, 77 insertions(+), 39 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 69f7a3449eda..b7698d0534cb 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -66,6 +66,15 @@ struct bpf_reg_state {
 	 * same reference to the socket, to determine proper reference freeing.
 	 */
 	u32 id;
+	/* For PTR_TO_SOCKET and PTR_TO_TCP_SOCK, this ptr may not actually
+	 * hold a refcount of a socket but instead it is a ptr
+	 * returned from a helper which is based on its refcount-ed
+	 * ptr argument (e.g. bpf_tcp_sock()).
+	 * "refcount_id" stores which refcount-ed argument it
+	 * originally derived from.  When this original argument's
+	 * refcount is released, this ptr will also be invalidated.
+	 */
+	u32 refcount_id;
 	/* For scalar types (SCALAR_VALUE), this represents our knowledge of
 	 * the actual value.
 	 * For pointer types, this represents the variable part of the offset
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1b9496c41383..31c278fadaaa 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -212,7 +212,10 @@ struct bpf_call_arg_meta {
 	int access_size;
 	s64 msize_smax_value;
 	u64 msize_umax_value;
-	int ptr_id;
+	struct {
+		int id;
+		int refcount_id;
+	} refcount_reg;
 	int func_id;
 };
 
@@ -346,19 +349,9 @@ static bool reg_type_may_be_null(enum bpf_reg_type type)
 	       type == PTR_TO_TCP_SOCK_OR_NULL;
 }
 
-static bool type_is_refcounted(enum bpf_reg_type type)
-{
-	return type == PTR_TO_SOCKET;
-}
-
-static bool type_is_refcounted_or_null(enum bpf_reg_type type)
-{
-	return type == PTR_TO_SOCKET || type == PTR_TO_SOCKET_OR_NULL;
-}
-
 static bool reg_is_refcounted(const struct bpf_reg_state *reg)
 {
-	return type_is_refcounted(reg->type);
+	return reg->refcount_id && reg->id == reg->refcount_id;
 }
 
 static bool reg_may_point_to_spin_lock(const struct bpf_reg_state *reg)
@@ -367,14 +360,10 @@ static bool reg_may_point_to_spin_lock(const struct bpf_reg_state *reg)
 		map_value_has_spin_lock(reg->map_ptr);
 }
 
-static bool reg_is_refcounted_or_null(const struct bpf_reg_state *reg)
-{
-	return type_is_refcounted_or_null(reg->type);
-}
-
 static bool arg_type_is_refcounted(enum bpf_arg_type type)
 {
-	return type == ARG_PTR_TO_SOCKET;
+	return type == ARG_PTR_TO_SOCKET ||
+		type == ARG_PTR_TO_SOCK_COMMON;
 }
 
 /* Determine whether the function releases some resources allocated by another
@@ -392,6 +381,12 @@ static bool is_acquire_function(enum bpf_func_id func_id)
 		func_id == BPF_FUNC_sk_lookup_udp;
 }
 
+static bool is_refcount_carrying_function(enum bpf_func_id func_id)
+{
+	return func_id == BPF_FUNC_tcp_sock ||
+		func_id == BPF_FUNC_sk_fullsock;
+}
+
 /* string representation of 'enum bpf_reg_type' */
 static const char * const reg_type_str[] = {
 	[NOT_INIT]		= "?",
@@ -465,7 +460,8 @@ static void print_verifier_state(struct bpf_verifier_env *env,
 			if (t == PTR_TO_STACK)
 				verbose(env, ",call_%d", func(env, reg)->callsite);
 		} else {
-			verbose(env, "(id=%d", reg->id);
+			verbose(env, "(id=%d refcount_id=%d", reg->id,
+				reg->refcount_id);
 			if (t != SCALAR_VALUE)
 				verbose(env, ",off=%d", reg->off);
 			if (type_is_pkt_pointer(t))
@@ -2418,12 +2414,6 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
 		expected_type = PTR_TO_SOCKET;
 		if (type != expected_type)
 			goto err_type;
-		if (meta->ptr_id || !reg->id) {
-			verbose(env, "verifier internal error: mismatched references meta=%d, reg=%d\n",
-				meta->ptr_id, reg->id);
-			return -EFAULT;
-		}
-		meta->ptr_id = reg->id;
 	} else if (arg_type == ARG_PTR_TO_SPIN_LOCK) {
 		if (meta->func_id == BPF_FUNC_spin_lock) {
 			if (process_spin_lock(env, regno, true))
@@ -2532,6 +2522,26 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
 					      zero_size_allowed, meta);
 	}
 
+	if (reg->refcount_id) {
+		if (meta->refcount_reg.refcount_id) {
+			verbose(env, "verifier internal error: more than one arg with refcount_id R%d %u %u\n",
+				regno, reg->refcount_id,
+				meta->refcount_reg.refcount_id);
+			return -EFAULT;
+		}
+
+		if (is_acquire_function(meta->func_id)) {
+			verbose(env,
+				"verifier internal error: func %s#%d taking an already refcount-ed arg R%d\n",
+				func_id_name(meta->func_id), meta->func_id,
+				regno);
+			return -EFAULT;
+		}
+
+		meta->refcount_reg.id = reg->id;
+		meta->refcount_reg.refcount_id = reg->refcount_id;
+	}
+
 	return err;
 err_type:
 	verbose(env, "R%d type=%s expected=%s\n", regno,
@@ -2805,13 +2815,13 @@ static void release_reg_references(struct bpf_verifier_env *env,
 	int i;
 
 	for (i = 0; i < MAX_BPF_REG; i++)
-		if (regs[i].id == id)
+		if (regs[i].refcount_id == id)
 			mark_reg_unknown(env, regs, i);
 
 	bpf_for_each_spilled_reg(i, state, reg) {
 		if (!reg)
 			continue;
-		if (reg_is_refcounted(reg) && reg->id == id)
+		if (reg->refcount_id == id)
 			__mark_reg_unknown(reg);
 	}
 }
@@ -2819,16 +2829,20 @@ static void release_reg_references(struct bpf_verifier_env *env,
 /* The pointer with the specified id has released its reference to kernel
  * resources. Identify all copies of the same pointer and clear the reference.
  */
-static int release_reference(struct bpf_verifier_env *env,
-			     struct bpf_call_arg_meta *meta)
+static int release_reference(struct bpf_verifier_env *env, int id)
 {
 	struct bpf_verifier_state *vstate = env->cur_state;
+	int err;
 	int i;
 
+	err = release_reference_state(cur_func(env), id);
+	if (err)
+		return err;
+
 	for (i = 0; i <= vstate->curframe; i++)
-		release_reg_references(env, vstate->frame[i], meta->ptr_id);
+		release_reg_references(env, vstate->frame[i], id);
 
-	return release_reference_state(cur_func(env), meta->ptr_id);
+	return 0;
 }
 
 static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
@@ -3093,7 +3107,7 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 			return err;
 		}
 	} else if (is_release_function(func_id)) {
-		err = release_reference(env, &meta);
+		err = release_reference(env, meta.refcount_reg.id);
 		if (err) {
 			verbose(env, "func %s#%d reference has not been acquired before\n",
 				func_id_name(func_id), func_id);
@@ -3156,6 +3170,7 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 				return id;
 			/* For release_reference() */
 			regs[BPF_REG_0].id = id;
+			regs[BPF_REG_0].refcount_id = id;
 		} else {
 			/* For mark_ptr_or_null_reg() */
 			regs[BPF_REG_0].id = ++env->id_gen;
@@ -3170,6 +3185,9 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 		return -EINVAL;
 	}
 
+	if (is_refcount_carrying_function(func_id))
+		regs[BPF_REG_0].refcount_id = meta.refcount_reg.refcount_id;
+
 	do_refine_retval_range(regs, fn->ret_type, func_id, &meta);
 
 	err = check_map_func_compatibility(env, meta.map_ptr, func_id);
@@ -4665,11 +4683,22 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
 		} else if (reg->type == PTR_TO_TCP_SOCK_OR_NULL) {
 			reg->type = PTR_TO_TCP_SOCK;
 		}
-		if (is_null || !(reg_is_refcounted(reg) ||
-				 reg_may_point_to_spin_lock(reg))) {
-			/* We don't need id from this point onwards anymore,
-			 * thus we should better reset it, so that state
-			 * pruning has chances to take effect.
+		if (is_null) {
+			/* We don't need id and refcount_id from this point
+			 * onwards anymore, thus we should better reset it,
+			 * so that state pruning has chances to take effect.
+			 */
+			reg->id = 0;
+			reg->refcount_id = 0;
+		} else if (!reg_is_refcounted(reg) &&
+			   !reg_may_point_to_spin_lock(reg)) {
+			/* For not-NULL ptr, reg->refcount_id will be reset
+			 * in release_reg_references().
+			 *
+			 * reg->id is still used by refcounted ptr
+			 * and spin_lock ptr for tracking purpose.
+			 * Other than these two ptr type,
+			 * reg->id can also be reset.
 			 */
 			reg->id = 0;
 		}
@@ -4687,8 +4716,8 @@ static void mark_ptr_or_null_regs(struct bpf_verifier_state *vstate, u32 regno,
 	u32 id = regs[regno].id;
 	int i, j;
 
-	if (reg_is_refcounted_or_null(&regs[regno]) && is_null)
-		release_reference_state(state, id);
+	if (reg_is_refcounted(&regs[regno]) && is_null)
+		WARN_ON_ONCE(release_reference_state(state, id));
 
 	for (i = 0; i < MAX_BPF_REG; i++)
 		mark_ptr_or_null_reg(state, &regs[i], id, is_null);
-- 
2.17.1
