Message-ID: <f273ab9e-9fc2-7083-71b3-7c493eaf7054@fb.com>
Date: Tue, 19 Dec 2017 11:57:08 -0800
From: Alexei Starovoitov <ast@...com>
To: Edward Cree <ecree@...arflare.com>,
Alexei Starovoitov <ast@...nel.org>,
"David S . Miller" <davem@...emloft.net>
CC: Daniel Borkmann <daniel@...earbox.net>,
Jann Horn <jannh@...gle.com>, <netdev@...r.kernel.org>,
<kernel-team@...com>
Subject: Re: [PATCH bpf 8/9] bpf: fix integer overflows
On 12/19/17 2:29 AM, Edward Cree wrote:
> On 19/12/17 04:12, Alexei Starovoitov wrote:
>> Also reduce the scope of "scalar op scalar" tracking.
> <snip>
>> @@ -2046,6 +2088,12 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
>> src_known = tnum_is_const(src_reg.var_off);
>> dst_known = tnum_is_const(dst_reg->var_off);
>>
>> + if (!src_known &&
>> + opcode != BPF_ADD && opcode != BPF_SUB && opcode != BPF_AND) {
>> + __mark_reg_unknown(dst_reg);
>> + return 0;
>> + }
>> +
>> switch (opcode) {
>> case BPF_ADD:
>> if (signed_add_overflows(dst_reg->smin_value, smin_val) ||
> Still not seeing any explanation for why this change is necessary.
> It also seems arbitrary, e.g. why is AND allowed but not OR?
>
> Hypothetical use case: combining a series of flags based on data read from
> packet into an array index used for storing into a map value. That sounds
> to me like something someone may be legitimately doing.
When someone is trying to fetch run-time variables and OR them together
without a final bounds check, that is pretty broken.
All of our arrays (packet pointer, map_values, etc.) are C-style arrays
that go from zero to N.
The verifier is already doing an ugly dance with signed/unsigned
<, >, <=, >= checks and fighting llvm optimizations along the way.
An AND alu op to bound the access is enough in most cases, and '<'
checks are rarely needed.
It is not possible to bound the access with OR or MUL, so there is no
need to track bits in such ops.
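For example (an illustrative BPF C sketch, not from the patch; 'arr' is
assumed to be an array map with a 64-byte value and 'idx' an unknown
scalar loaded at run time), masking against a power-of-two size is
exactly the kind of bound the verifier can derive:

    u32 key = 0;
    char *value = bpf_map_lookup_elem(&arr, &key); /* 64-byte value */
    if (!value)
        return 0;
    idx &= 63;       /* AND bounds the unknown scalar to 0..63 */
    value[idx] = 1;  /* provably inside the 64-byte value */

whereas after 'idx |= other' or 'idx *= other' with 'other' unknown,
no upper bound can be derived, so the result would still need a '<'
comparison before the access.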
Notice the '!src_known' check above. It means the verifier still tracks
reg |= imm ops, but not reg |= reg.
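Concretely (a hypothetical C fragment; 'a' and 'b' stand for unknown
run-time values):

    a |= 4;    /* src is a known constant: bounds of 'a' stay tracked */
    a |= b;    /* src is another unknown scalar: 'a' is marked unknown
                  and needs a fresh bounds check before it can be used
                  in any pointer arithmetic */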
Moreover, this tracking should not have been added to the verifier
in the first place, especially with zero tests covering such a situation.
The kernel is not the place to add code 'just in case'.
When the real need arises (highly unlikely for the above hunk)
we can make the verifier smarter step by step.
Less tracking also means less state to diverge -> better pruning ->
faster load times.
But the main goal of this hunk is to reduce attack surface.
The bug in RSH handling (patch 1) is an example that the bit tracking
is not easy to do correctly, especially when doing "scalar op scalar"
where the scalars are variable registers and not known constants.
So that is what commit log says:
'... reduce the scope of "scalar op scalar" tracking.'