Message-ID: <20200330222302.6fhtedyzxfaqmthl@ast-mbp.dhcp.thefacebook.com>
Date: Mon, 30 Mar 2020 15:23:02 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: John Fastabend <john.fastabend@...il.com>
Cc: ecree@...arflare.com, yhs@...com, daniel@...earbox.net,
netdev@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [bpf-next PATCH v2 2/7] bpf: verifier, do explicit ALU32 bounds
tracking
On Mon, Mar 30, 2020 at 02:36:39PM -0700, John Fastabend wrote:
> +static void __scalar64_min_max_lsh(struct bpf_reg_state *dst_reg,
> + u64 umin_val, u64 umax_val)
> +{
> + /* Special case <<32 because it is a common compiler pattern to zero
> + * upper bits by doing <<32 s>>32. In this case if 32bit bounds are
> + * positive we know this shift will also be positive so we can track
> + * bounds correctly. Otherwise we lose all sign bit information except
> + * what we can pick up from var_off. Perhaps we can generalize this
> + * later to shifts of any length.
> + */
> + if (umin_val == 32 && umax_val == 32 && dst_reg->s32_max_value >= 0)
> + dst_reg->smax_value = (s64)dst_reg->s32_max_value << 32;
> + else
> + dst_reg->smax_value = S64_MAX;
I fixed up the above comment to say 'sign extend' instead of 'zero upper bits' and
applied.
Thanks a ton for the awesome work.