Message-ID: <CAEf4Bza3S_HmhHEz34nVDauOB9r09dDW4fZcL26as_hx4XQsWw@mail.gmail.com>
Date: Tue, 9 Jan 2024 15:55:52 -0800
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Maxim Mikityanskiy <maxtram95@...il.com>
Cc: Eduard Zingerman <eddyz87@...il.com>, Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>,
Shung-Hsi Yu <shung-hsi.yu@...e.com>, John Fastabend <john.fastabend@...il.com>,
Martin KaFai Lau <martin.lau@...ux.dev>, Song Liu <song@...nel.org>,
Yonghong Song <yonghong.song@...ux.dev>, KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...gle.com>, Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
Mykola Lysenko <mykolal@...com>, Shuah Khan <shuah@...nel.org>,
"David S. Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>, bpf@...r.kernel.org, linux-kselftest@...r.kernel.org,
netdev@...r.kernel.org, Maxim Mikityanskiy <maxim@...valent.com>
Subject: Re: [PATCH bpf-next v2 13/15] selftests/bpf: Add test cases for
narrowing fill
On Mon, Jan 8, 2024 at 12:53 PM Maxim Mikityanskiy <maxtram95@...il.com> wrote:
>
> From: Maxim Mikityanskiy <maxim@...valent.com>
>
> The previous commit made it possible to preserve the boundaries and
> track the IDs of scalars on narrowing fills. Add test cases for that
> pattern.
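(For context, the "narrowing fill" pattern being tested here is, e.g.:

	*(u64 *)(r10 - 8) = r0;		/* 64-bit spill */
	r0 = *(u32 *)(r10 - 8);		/* 32-bit fill, low half on LE */

i.e. the fill is narrower than the spill; with the previous commit the
verifier can keep the scalar's bounds and ID across such a fill instead
of losing them.)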
>
> Signed-off-by: Maxim Mikityanskiy <maxim@...valent.com>
> Acked-by: Eduard Zingerman <eddyz87@...il.com>
> ---
> .../selftests/bpf/progs/verifier_spill_fill.c | 108 ++++++++++++++++++
> 1 file changed, 108 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> index fab8ae9fe947..3764111d190d 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_spill_fill.c
> @@ -936,4 +936,112 @@ l0_%=: r0 = 0; \
> : __clobber_all);
> }
>
> +SEC("xdp")
> +__description("32-bit fill after 64-bit spill")
> +__success __retval(0)
> +__naked void fill_32bit_after_spill_64bit(void)
I guess these tests answer my question about mixing spill/fill sizes
on an earlier patch (so disregard that question)
> +{
> + asm volatile(" \
> + /* Randomize the upper 32 bits. */ \
> + call %[bpf_get_prandom_u32]; \
> + r0 <<= 32; \
> + /* 64-bit spill r0 to stack. */ \
> + *(u64*)(r10 - 8) = r0; \
> + /* 32-bit fill r0 from stack. */ \
> + r0 = *(u32*)(r10 - %[offset]); \
have you considered doing the BYTE_ORDER check right here and having
the offset embedded in the assembly instruction directly:
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
r0 = *(u32*)(r10 - 8);
#else
r0 = *(u32*)(r10 - 4);
#endif
It's a bit less jumping around the code when reading. And it's kind of
obvious that this is endianness-dependent without jumping to the
definition of %[offset]?
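E.g., a rough sketch (untested) of splitting the asm string literal
around the check, keeping the rest of the body as is:

	/* 64-bit spill r0 to stack. */		\
	*(u64*)(r10 - 8) = r0;			\
	/* 32-bit fill r0 from stack. */	"
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
	"r0 = *(u32*)(r10 - 8);			"
#else
	"r0 = *(u32*)(r10 - 4);			"
#endif
	"/* Boundary check on r0 with predetermined result. */\
	if r0 == 0 goto l0_%=;			\

With that, the #if/#else around __imm_const(offset, ...) in the input
constraints can be dropped.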
> + /* Boundary check on r0 with predetermined result. */\
> + if r0 == 0 goto l0_%=; \
> + /* Dead branch: the verifier should prune it. Do an invalid memory\
> + * access if the verifier follows it. \
> + */ \
> + r0 = *(u64*)(r9 + 0); \
> +l0_%=: exit; \
> +" :
> + : __imm(bpf_get_prandom_u32),
> +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
> + __imm_const(offset, 8)
> +#else
> + __imm_const(offset, 4)
> +#endif
> + : __clobber_all);
> +}
> +
[...]