Message-ID: <20190710100248.GA32281@harukaze>
Date:   Wed, 10 Jul 2019 12:02:48 +0200
From:   Paolo Pisati <p.pisati@...il.com>
To:     Yonghong Song <yhs@...com>
Cc:     Paolo Pisati <p.pisati@...il.com>,
        Daniel Borkmann <daniel@...earbox.net>,
        Alexei Starovoitov <ast@...nel.org>, Martin Lau <kafai@...com>,
        Song Liu <songliubraving@...com>,
        "bpf@...r.kernel.org" <bpf@...r.kernel.org>, netdev@...r.kernel.org
Subject: Re: [RESEND] test_verifier #13 fails on arm64: "retval 65507 != -29"

On Mon, Jul 01, 2019 at 09:51:25PM +0000, Yonghong Song wrote:
> 
> Below is the test case.
> {
>          "valid read map access into a read-only array 2",
>          .insns = {
>          BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
>          BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
>          BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
>          BPF_LD_MAP_FD(BPF_REG_1, 0),
>          BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 
> BPF_FUNC_map_lookup_elem),
>          BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
> 
>          BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
>          BPF_MOV64_IMM(BPF_REG_2, 4),
>          BPF_MOV64_IMM(BPF_REG_3, 0),
>          BPF_MOV64_IMM(BPF_REG_4, 0),
>          BPF_MOV64_IMM(BPF_REG_5, 0),
>          BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
>                       BPF_FUNC_csum_diff),
>          BPF_EXIT_INSN(),
>          },
>          .prog_type = BPF_PROG_TYPE_SCHED_CLS,
>          .fixup_map_array_ro = { 3 },
>          .result = ACCEPT,
>          .retval = -29,
> },
> 
> The issue may be with the helper bpf_csum_diff().
> Maybe you can check the bpf_csum_diff() helper return value
> to confirm and take a further look at the bpf_csum_diff() implementations
> between x86-64 and arm64.

Indeed, the different result comes from csum_partial() or, more precisely,
do_csum().

x86-64 uses an asm-optimized version residing in arch/x86/lib/csum-partial_64.c,
while the generic C version is in lib/checksum.c.

I replaced the x86-64 csum_partial() / do_csum() code with the one in
lib/checksum.c, and by doing so I reproduced the same error on x86-64 (so it's
not an arch-dependent issue).

I added some debugging to bpf_csum_diff(); here are the results with the
different checksum implementations:

http://paste.debian.net/1091037/

lib/checksum.c:
...
[  206.084537] ____bpf_csum_diff from_size: 1 to_size: 0
[  206.085274] ____bpf_csum_diff from[0]: 28
[  206.085276] ____bpf_csum_diff diff[0]: 4294967267
[  206.085277] ____bpf_csum_diff diff_size: 4 seed: 0

After csum_partial() call:

[  206.086059] ____bpf_csum_diff csum: 65507 - 0xffe3

arch/x86/lib/csum-partial_64.c
...
[   40.467308] ____bpf_csum_diff from_size: 1 to_size: 0
[   40.468141] ____bpf_csum_diff from[0]: 28
[   40.468143] ____bpf_csum_diff diff[0]: 4294967267
[   40.468144] ____bpf_csum_diff diff_size: 4 seed: 0

After csum_partial() call:

[   40.468937] ____bpf_csum_diff csum: -29 - 0xffffffe3
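
For reference, the two reported values written out in hex:

    65507 -> 0x0000ffe3
      -29 -> 0xffffffe3  (read as an unsigned 32-bit value)

i.e. they agree in the low 16 bits and differ only in the upper bits.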

One thing I noticed: x86-64's csum-partial_64.c::do_csum() doesn't reduce the
calculated checksum to 16 bits before returning it (unless the buffer address
is odd - *):

arch/x86/lib/csum-partial_64.c::do_csum()
		...
        if (unlikely(odd)) { 
                result = from32to16(result);
                result = ((result >> 8) & 0xff) | ((result & 0xff) << 8);
        }
        return result;
}

contrary to all the other do_csum() implementations (that I could understand):

lib/checksum.c::do_csum()
arch/alpha/lib/checksum.c::do_csum()
arch/parisc/lib/checksum.c::do_csum()

Apparently even ia64 does the folding (see the comment right before
.do_csum_exit: in arch/ia64/lib/do_csum.S), and so does
arch/c6x/lib/csum_64plus.S; a rough sketch of that folding step is below.
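
For illustration, here is a minimal, portable sketch of the 16-bit fold
(fold32to16() is just a made-up stand-in for from32to16(), not the actual
kernel code):

#include <stdio.h>

/*
 * Fold a 32-bit partial one's-complement sum down to 16 bits
 * using an end-around carry - roughly what from32to16() does.
 */
static unsigned int fold32to16(unsigned int sum)
{
	sum = (sum & 0xffff) + (sum >> 16);	/* add the high half into the low half */
	sum = (sum & 0xffff) + (sum >> 16);	/* absorb a possible carry */
	return sum;
}

int main(void)
{
	/* the unfolded x86-64 value from the trace above */
	printf("0x%x\n", fold32to16(0xffffffe3));	/* prints 0xffe3, i.e. 65507 */
	return 0;
}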

Funnily enough, if I change do_csum() for x86-64 to fold the
checksum to 16 bits (following all the other implementations):

--- a/arch/x86/lib/csum-partial_64.c
+++ b/arch/x86/lib/csum-partial_64.c
@@ -112,8 +112,8 @@ static unsigned do_csum(const unsigned char *buff, unsigned len)
        if (len & 1)
                result += *buff;
        result = add32_with_carry(result>>32, result & 0xffffffff); 
+       result = from32to16(result);
        if (unlikely(odd)) { 
-               result = from32to16(result);
                result = ((result >> 8) & 0xff) | ((result & 0xff) << 8);
        }
        return result;

then the x86-64 result matches the others: 65507, i.e. 0xffe3.

As a last attempt, I tried running the bpf test_verifier on an armhf platform,
and I got a completely different number:

[   57.667999] ____bpf_csum_diff from_size: 1 to_size: 0
[   57.668016] ____bpf_csum_diff from[0]: 28
[   57.668028] ____bpf_csum_diff diff[0]: 4294967267
[   57.668039] ____bpf_csum_diff diff_size: 4 seed: 0

After csum_partial() call:

[   57.668052] ____bpf_csum_diff::2002 csum: 131042 - 0x0001ffe2

Not sure what to make of these numbers, but I have a question: what is the
correct checksum of the memory chunk passed to csum_partial()? Is it really -29?

At least two other implementations I tested (the ARM assembly code and the C
implementation in lib/checksum.c) compute a different value, so either there's
a bug in the checksum calculation (2 out of 3???), or we are interpreting the
value returned by csum_partial() wrongly.
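
FWIW, folding each of the three reported values down to 16 bits with the
end-around-carry step sketched above gives the same result:

    x86-64 asm:      0xffffffe3 -> 0xffe3
    lib/checksum.c:  0x0000ffe3 -> 0xffe3
    armhf asm:       0x0001ffe2 -> 0xffe3

so, at least arithmetically, the three values only seem to differ in how far
each implementation folds the sum before returning it.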

*: originally, the x86-64 code did the 16-bit folding, but the logic was changed
to what we have today during a big rewrite - search for:

commit 3ef076bb685a461bbaff37a1f06010fc4d7ce733
Author: Andi Kleen <ak@...e.de>
Date:   Fri Jun 13 04:27:34 2003 -0700

    [PATCH] x86-64 merge

in this historic repo:

https://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git
-- 
bye,
p.
