Message-ID: <CAG48ez0BASfbVtM9POTfBTKedCyvD1ESvcgM8jrsca30BMkstA@mail.gmail.com>
Date:   Mon, 8 May 2017 00:54:18 +0200
From:   Jann Horn <jannh@...gle.com>
To:     Daniel Borkmann <daniel@...earbox.net>
Cc:     "David S. Miller" <davem@...emloft.net>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>,
        kafai@...com, netdev@...r.kernel.org
Subject: Re: [PATCH net] bpf: don't let ldimm64 leak map addresses on unprivileged

On Mon, May 8, 2017 at 12:51 AM, Daniel Borkmann <daniel@...earbox.net> wrote:
> On 05/08/2017 12:26 AM, Jann Horn wrote:
>>
>> On Mon, May 8, 2017 at 12:04 AM, Daniel Borkmann <daniel@...earbox.net>
>> wrote:
>>>
>>> The patch fixes two things at once:
>>>
>>> 1) It checks env->allow_ptr_leaks and only prints the map address to
>>>     the log if we have the privileges to do so; otherwise it just dumps 0,
>>>     as %pK would when kptr_restrict is enabled. Given the latter is
>>>     off by default and not every distro sets it, I don't want to rely on
>>>     this, hence the 0 by default for unprivileged.
>>>
>>> 2) Printing of ldimm64 in the verifier log is currently broken in that
>>>     we don't print the full immediate, but only the lower 32 bits from
>>>     the first insn of the ldimm64 pair. Thus, fix this up as well; it's
>>>     okay to access the second insn, since we verified all ldimm64 earlier
>>>     already (including just constants) through replace_map_fd_with_map_ptr().
>>>
>>> Fixes: cbd357008604 ("bpf: verifier (add ability to receive verification log)")
>>> Reported-by: Jann Horn <jannh@...gle.com>
>>> Signed-off-by: Daniel Borkmann <daniel@...earbox.net>
>>
>> [...]
>>>
>>> @@ -362,9 +363,19 @@ static void print_bpf_insn(struct bpf_insn *insn)
>>>                                  insn->code,
>>>                                  bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
>>>                                  insn->src_reg, insn->imm);
>>> -               } else if (BPF_MODE(insn->code) == BPF_IMM) {
>>> -                       verbose("(%02x) r%d = 0x%x\n",
>>> -                               insn->code, insn->dst_reg, insn->imm);
>>> +               } else if (BPF_MODE(insn->code) == BPF_IMM &&
>>> +                          BPF_SIZE(insn->code) == BPF_DW) {
>>> +                       /* At this point, we already made sure that the second
>>> +                        * part of the ldimm64 insn is accessible.
>>> +                        */
>>> +                       u64 imm = ((u64)(insn + 1)->imm << 32) | (u32)insn->imm;
>>> +                       bool map_ptr = insn->src_reg == BPF_PSEUDO_MAP_FD;
>>> +
>>> +                       if (map_ptr && !env->allow_ptr_leaks)
>>> +                               imm = 0;
>>> +
>>> +                       verbose("(%02x) r%d = 0x%llx\n", insn->code,
>>> +                               insn->dst_reg, (unsigned long long)imm);
>>>                  } else {
>>>                          verbose("BUG_ld_%02x\n", insn->code);
>>>                          return;
>>
>>
>> You replaced the `BPF_MODE(insn->code) == BPF_IMM` branch with a
>> `BPF_MODE(insn->code) == BPF_IMM && BPF_SIZE(insn->code) == BPF_DW`
>> branch. Doesn't that break printing normal immediates?
>
>
> What do you mean by 'normal' immediates? You mean loads of an imm into
> a register, right? The ldimm64 is treated somewhat specially; for imms
> that fit into 32 bits, BPF_MOV64_IMM() and BPF_MOV32_IMM() are used
> instead.
>
> E.g., see the jumptable in __bpf_prog_run(), which is the interpreter.
> All BPF_LD instructions that we have are:
>
> static const void *jumptable[256] = {
>   [...]
>   [BPF_LD | BPF_ABS | BPF_W] = &&LD_ABS_W,
>   [BPF_LD | BPF_ABS | BPF_H] = &&LD_ABS_H,
>   [BPF_LD | BPF_ABS | BPF_B] = &&LD_ABS_B,
>   [BPF_LD | BPF_IND | BPF_W] = &&LD_IND_W,
>   [BPF_LD | BPF_IND | BPF_H] = &&LD_IND_H,
>   [BPF_LD | BPF_IND | BPF_B] = &&LD_IND_B,
>   [BPF_LD | BPF_IMM | BPF_DW] = &&LD_IMM_DW,
> };
>
> In print_bpf_insn(), under class == BPF_LD, BPF_ABS and BPF_IND are
> handled separately (loads of packet data from the skb), and BPF_IMM is
> the one we're fixing, which only has BPF_DW as an option. I added the
> size check since we really only want to see BPF_DW in this branch, due
> to the double imm access.

Ah, right, I missed that. Nevermind.
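
To make the "double imm access" concrete: below is a minimal user-space
sketch (not the kernel code itself; the register and value choices are
purely illustrative) of how a ldimm64 load spans two struct bpf_insn
slots and how the patched print_bpf_insn() reassembles the 64-bit
immediate from the two halves. The in-kernel BPF_LD_IMM64_RAW() macro
builds the same two-insn pair.

/* Sketch only: ldimm64 encoding and 64-bit immediate reassembly. */
#include <linux/bpf.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t value = 0x1122334455667788ULL; /* arbitrary example value */

        struct bpf_insn insns[2] = {
                {
                        /* first slot: real opcode + low 32 bits of the imm */
                        .code    = BPF_LD | BPF_DW | BPF_IMM,
                        .dst_reg = 1,   /* r1 */
                        .src_reg = 0,   /* BPF_PSEUDO_MAP_FD here would mark a map load */
                        .imm     = (uint32_t)value,
                },
                {
                        /* second slot: reserved zero opcode + high 32 bits */
                        .code    = 0,
                        .imm     = (uint32_t)(value >> 32),
                },
        };

        /* Same reassembly as in the hunk above: high half comes from the
         * following insn, low half from the first one.
         */
        const struct bpf_insn *insn = insns;
        uint64_t imm = ((uint64_t)(insn + 1)->imm << 32) | (uint32_t)insn->imm;

        printf("(%02x) r%d = 0x%llx\n", insn->code, insn->dst_reg,
               (unsigned long long)imm);
        return 0;
}

With the fix applied, if src_reg were BPF_PSEUDO_MAP_FD and
allow_ptr_leaks were false, the verifier log would show 0x0 here instead
of the map address.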
