Date:   Mon, 13 May 2019 02:01:50 +0200
From:   Daniel Borkmann <daniel@...earbox.net>
To:     Krzesimir Nowak <krzesimir@...volk.io>
Cc:     bpf@...r.kernel.org, Alban Crequy <alban@...volk.io>,
        Iago López Galeiras <iago@...volk.io>,
        Yonghong Song <yhs@...com>,
        Alexei Starovoitov <ast@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, netdev@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH bpf v1] bpf: Fix undefined behavior in narrow load
 handling

On 05/10/2019 12:16 PM, Krzesimir Nowak wrote:
> On Thu, May 9, 2019 at 11:30 PM Daniel Borkmann <daniel@...earbox.net> wrote:
>> On 05/08/2019 06:08 PM, Krzesimir Nowak wrote:
>>> Commit 31fd85816dbe ("bpf: permits narrower load from bpf program
>>> context fields") made the verifier add AND instructions to clear the
>>> unwanted bits with a mask when doing a narrow load. The mask is
>>> computed with
>>>
>>> (1 << size * 8) - 1
>>>
>>> where "size" is the size of the narrow load. When doing a 4 byte load
>>> of a an 8 byte field the verifier shifts the literal 1 by 32 places to
>>> the left. This results in an overflow of a signed integer, which is an
>>> undefined behavior. Typically the computed mask was zero, so the
>>> result of the narrow load ended up being zero too.
>>>
>>> Cast the literal to unsigned long long to avoid the overflow. Note
>>> that a narrow load of a 4 byte field does not trigger the undefined
>>> behavior, because there the load size can only be 1 or 2 bytes, so
>>> shifting 1 by 8 or 16 places cannot overflow. And reading 4 bytes
>>> would not be a narrow load of a 4 byte field.
>>>
>>> Reviewed-by: Alban Crequy <alban@...volk.io>
>>> Reviewed-by: Iago López Galeiras <iago@...volk.io>
>>> Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
>>> Cc: Yonghong Song <yhs@...com>
>>> Signed-off-by: Krzesimir Nowak <krzesimir@...volk.io>
>>> ---
>>>  kernel/bpf/verifier.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>>> index 09d5d972c9ff..950fac024fbb 100644
>>> --- a/kernel/bpf/verifier.c
>>> +++ b/kernel/bpf/verifier.c
>>> @@ -7296,7 +7296,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
>>>                                                                       insn->dst_reg,
>>>                                                                       shift);
>>>                               insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
>>> -                                                             (1 << size * 8) - 1);
>>> +                                                             (1ULL << size * 8) - 1);
>>>                       }
>>
>> Makes sense, good catch & thanks for the fix!
>>
>> Could you also add a test case to test_verifier.c so we keep track of this?
>>
>> Thanks,
>> Daniel
> 
> Hi,
> 
> A test for it is a bit tricky. I only found two 64bit fields that can
> be loaded narrowly - `sample_period` and `addr` in `struct
> bpf_perf_event_data` - so in theory I could have a test like the
> following:
> 
> {
>     "32bit loads of a 64bit field (both least and most significant words)",
>     .insns = {
>     BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
>                 offsetof(struct bpf_perf_event_data, addr)),
>     BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,
>                 offsetof(struct bpf_perf_event_data, addr) + 4),
>     BPF_MOV64_IMM(BPF_REG_0, 0),
>     BPF_EXIT_INSN(),
>     },
>     .result = ACCEPT,
>     .prog_type = BPF_PROG_TYPE_PERF_EVENT,
> },
> 
> A test like this would check that the program is not rejected, but
> rejection was never the issue. It would not check whether the verifier
> transformed the narrow reads correctly. Ideally the BPF program would
> do something like this:
> 
> /* assume that the low and high variables get their values from narrow loads */
> __u64 low = (__u32)perf_event->addr;
> __u64 high = (__u32)(perf_event->addr >> 32);
> __u64 addr = low | (high << 32);
> 
> return addr != perf_event->addr;
> 
> But test_verifier.c won't be able to run this, because
> BPF_PROG_TYPE_PERF_EVENT programs are not supported by the
> bpf_test_run_prog function.
> 
> Any hints how to proceed here?

The test_verifier actually also runs the programs after successful verification,
so the above C-like snippet should be converted to BPF asm. Search for ".retval"
in some of the test cases. (I've applied the fix itself to bpf for now, but I
still expect such a test case as a follow-up for the same tree. Thanks!)
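
As a rough sketch of the direction (untested, and assuming the test
runner gains support for BPF_PROG_TYPE_PERF_EVENT programs, which is
exactly the gap you point out; the +4 offset for the high word also
assumes little endian), the C-like snippet above could translate to
something like:

  {
          "32bit loads of a 64bit field round-trip to the full value",
          .insns = {
          /* r2 = low 32 bits, r3 = high 32 bits (narrow loads) */
          BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
                      offsetof(struct bpf_perf_event_data, addr)),
          BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
                      offsetof(struct bpf_perf_event_data, addr) + 4),
          /* r4 = full 64-bit value for comparison */
          BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_1,
                      offsetof(struct bpf_perf_event_data, addr)),
          /* r2 = low | high << 32 */
          BPF_ALU64_IMM(BPF_LSH, BPF_REG_3, 32),
          BPF_ALU64_REG(BPF_OR, BPF_REG_2, BPF_REG_3),
          /* return 1 if the reassembled value matches, 0 otherwise */
          BPF_MOV64_IMM(BPF_REG_0, 1),
          BPF_JMP_REG(BPF_JEQ, BPF_REG_2, BPF_REG_4, 1),
          BPF_MOV64_IMM(BPF_REG_0, 0),
          BPF_EXIT_INSN(),
          },
          .result = ACCEPT,
          .retval = 1,
          .prog_type = BPF_PROG_TYPE_PERF_EVENT,
  },

With the buggy mask both narrow loads read as zero, so the reassembled
value only matches when the full 64-bit value happens to be zero; with
the fix the test should return 1 for any input.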

> Cheers,
> Krzesimir
> 
