Message-ID: <CAADnVQ+6MjSLRq5hFy=kHosoWR=RDOSuU1znCrkcRp-WeD5CMw@mail.gmail.com>
Date: Sun, 24 Dec 2023 19:15:42 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Maxim Mikityanskiy <maxtram95@...il.com>
Cc: Eduard Zingerman <eddyz87@...il.com>, Alexei Starovoitov <ast@...nel.org>,
	Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>,
	John Fastabend <john.fastabend@...il.com>, Martin KaFai Lau <martin.lau@...ux.dev>,
	Song Liu <song@...nel.org>, Yonghong Song <yonghong.song@...ux.dev>,
	KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...gle.com>,
	Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
	Mykola Lysenko <mykolal@...com>, Shuah Khan <shuah@...nel.org>,
	"David S. Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
	Jesper Dangaard Brouer <hawk@...nel.org>, bpf <bpf@...r.kernel.org>,
	"open list:KERNEL SELFTEST FRAMEWORK" <linux-kselftest@...r.kernel.org>,
	Network Development <netdev@...r.kernel.org>, Maxim Mikityanskiy <maxim@...valent.com>
Subject: Re: [PATCH bpf-next 08/15] bpf: Assign ID to scalars on spill

On Wed, Dec 20, 2023 at 1:40 PM Maxim Mikityanskiy <maxtram95@...il.com> wrote:
>
> From: Maxim Mikityanskiy <maxim@...valent.com>
>
> Currently, when a scalar bounded register is spilled to the stack, its
> ID is preserved, but only if it was already assigned, i.e. if this
> register was MOVed before.
>
> Assign an ID on spill if none is set, so that equal scalars can be
> tracked if a register is spilled to the stack and filled into another
> register.
>
> One test is adjusted to reflect the change in register IDs.
>
> Signed-off-by: Maxim Mikityanskiy <maxim@...valent.com>
> ---
>  kernel/bpf/verifier.c                                         | 8 +++++++-
>  .../selftests/bpf/progs/verifier_direct_packet_access.c       | 2 +-
>  2 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index b757fdbbbdd2..caa768f1e369 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4503,9 +4503,15 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>
>         mark_stack_slot_scratched(env, spi);
>         if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) && env->bpf_capable) {
> +               bool reg_value_fits;
> +
> +               reg_value_fits = get_reg_width(reg) <= BITS_PER_BYTE * size;
> +               /* Make sure that reg had an ID to build a relation on spill. */
> +               if (reg_value_fits)
> +                       assign_scalar_id_before_mov(env, reg);

Thanks.
I just debugged this issue as part of my bpf_cmp series.

llvm generated:

1093: (7b) *(u64 *)(r10 -96) = r0     ; R0_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0 fp-96_w=scalar(smin=smin32=-4095,smax=smax32=256)
; if (bpf_cmp(filepart_length, >, MAX_PATH))
1094: (25) if r0 > 0x100 goto pc+903  ; R0_w=scalar(id=53,smin=smin32=0,smax=umax=smax32=umax32=256,var_off=(0x0; 0x1ff))

The verifier refined the range of 'r0' here,
but the code later just reads the spilled value back from the stack:

1116: (79) r1 = *(u64 *)(r10 -64)     ; R1_w=map_value
; payload += filepart_length;
1117: (79) r2 = *(u64 *)(r10 -96)     ; R2_w=scalar(smin=smin32=-4095,smax=smax32=256) R10=fp0 fp-96=scalar(smin=smin32=-4095,smax=smax32=256)
1118: (0f) r1 += r2                   ; R1_w=map_value(map=data_heap,ks=4,vs=23040,off=148,smin=smin32=-4095,smax=smax32=3344)

and it later errors with:
"R1 min value is negative, either use unsigned index or do a if (index >=0) check."

This verifier improvement is certainly necessary.

Since you've analyzed this issue, did you figure out a workaround
for C code on existing and older kernels?
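[Editor's note: the thread does not answer the workaround question; the sketch below only illustrates the kind of source-level pattern being asked about. It re-checks the bound on a register-resident copy right before the pointer arithmetic, so the verifier sees the refined range on the register that is actually used instead of on a stale stack slot. The names payload, filepart_length and MAX_PATH come from the verifier log above; the surrounding program and the exact effectiveness against clang's spilling decisions are assumptions.]

/* Hypothetical workaround sketch for kernels without the spill-ID patch. */
#include <linux/types.h>
#include <bpf/bpf_helpers.h>

#define MAX_PATH 256

static __always_inline char *advance_payload(char *payload, __u64 filepart_length)
{
        __u64 len = filepart_length;

        /* Empty asm with a "+r" constraint (what libbpf's barrier_var()
         * expands to) to discourage clang from spilling 'len' between
         * the check and the use.
         */
        asm volatile("" : "+r"(len));

        /* Re-check the bound immediately before the use, so the check and
         * the pointer arithmetic operate on the same register.
         */
        if (len > MAX_PATH)
                return NULL;

        return payload + len;
}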