Date:   Thu, 29 Jul 2021 23:17:43 +0200
From:   Johan Almbladh <johan.almbladh@...finetworks.com>
To:     Yonghong Song <yhs@...com>
Cc:     Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...nel.org>,
        Tony Ambardar <Tony.Ambardar@...il.com>,
        Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>
Subject: Re: [PATCH 08/14] bpf/tests: Add tests for ALU operations implemented
 with function calls

On Thu, Jul 29, 2021 at 1:52 AM Yonghong Song <yhs@...com> wrote:
> > +             /*
> > +              * Register (non-)clobbering test, in the case where a 32-bit
> > +              * JIT implements complex ALU64 operations via function calls.
> > +              */
> > +             "INT: Register clobbering, R1 updated",
> > +             .u.insns_int = {
> > +                     BPF_ALU32_IMM(BPF_MOV, R0, 0),
> > +                     BPF_ALU32_IMM(BPF_MOV, R1, 123456789),
> > +                     BPF_ALU32_IMM(BPF_MOV, R2, 2),
> > +                     BPF_ALU32_IMM(BPF_MOV, R3, 3),
> > +                     BPF_ALU32_IMM(BPF_MOV, R4, 4),
> > +                     BPF_ALU32_IMM(BPF_MOV, R5, 5),
> > +                     BPF_ALU32_IMM(BPF_MOV, R6, 6),
> > +                     BPF_ALU32_IMM(BPF_MOV, R7, 7),
> > +                     BPF_ALU32_IMM(BPF_MOV, R8, 8),
> > +                     BPF_ALU32_IMM(BPF_MOV, R9, 9),
> > +                     BPF_ALU64_IMM(BPF_DIV, R1, 123456789),
> > +                     BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
> > +                     BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
> > +                     BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
> > +                     BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
> > +                     BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
> > +                     BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
> > +                     BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
> > +                     BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
> > +                     BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
> > +                     BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
> > +                     BPF_ALU32_IMM(BPF_MOV, R0, 1),
> > +                     BPF_EXIT_INSN(),
> > +             },
> > +             INTERNAL,
> > +             { },
> > +             { { 0, 1 } }
> > +     },
> > +     {
> > +             "INT: Register clobbering, R2 updated",
> > +             .u.insns_int = {
> > +                     BPF_ALU32_IMM(BPF_MOV, R0, 0),
> > +                     BPF_ALU32_IMM(BPF_MOV, R1, 1),
> > +                     BPF_ALU32_IMM(BPF_MOV, R2, 2 * 123456789),
> > +                     BPF_ALU32_IMM(BPF_MOV, R3, 3),
> > +                     BPF_ALU32_IMM(BPF_MOV, R4, 4),
> > +                     BPF_ALU32_IMM(BPF_MOV, R5, 5),
> > +                     BPF_ALU32_IMM(BPF_MOV, R6, 6),
> > +                     BPF_ALU32_IMM(BPF_MOV, R7, 7),
> > +                     BPF_ALU32_IMM(BPF_MOV, R8, 8),
> > +                     BPF_ALU32_IMM(BPF_MOV, R9, 9),
> > +                     BPF_ALU64_IMM(BPF_DIV, R2, 123456789),
> > +                     BPF_JMP_IMM(BPF_JNE, R0, 0, 10),
> > +                     BPF_JMP_IMM(BPF_JNE, R1, 1, 9),
> > +                     BPF_JMP_IMM(BPF_JNE, R2, 2, 8),
> > +                     BPF_JMP_IMM(BPF_JNE, R3, 3, 7),
> > +                     BPF_JMP_IMM(BPF_JNE, R4, 4, 6),
> > +                     BPF_JMP_IMM(BPF_JNE, R5, 5, 5),
> > +                     BPF_JMP_IMM(BPF_JNE, R6, 6, 4),
> > +                     BPF_JMP_IMM(BPF_JNE, R7, 7, 3),
> > +                     BPF_JMP_IMM(BPF_JNE, R8, 8, 2),
> > +                     BPF_JMP_IMM(BPF_JNE, R9, 9, 1),
> > +                     BPF_ALU32_IMM(BPF_MOV, R0, 1),
> > +                     BPF_EXIT_INSN(),
> > +             },
> > +             INTERNAL,
> > +             { },
> > +             { { 0, 1 } }
> > +     },
>
> It looks like the above two tests, "R1 updated" and "R2 updated", should
> be very similar, and the only difference is that one immediate is
> 123456789 and the other is 2 * 123456789. But the generated code just has
> the final immediate in both cases. Could you explain what the difference
> is, in terms of the JIT, between the above two tests?

When a BPF_CALL instruction is executed, the eBPF assembler has
already saved any caller-saved registers that must be preserved, put
the arguments in R1-R5, and expects the return value in R0. All that
is left for the JIT is to emit the call.
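
For illustration (not from the patch; the helper name is a made-up
placeholder), an explicit call already follows that convention, so the
JIT only has to emit the call itself:

	BPF_ALU32_IMM(BPF_MOV, R1, 1),          /* first argument */
	BPF_ALU32_IMM(BPF_MOV, R2, 2),          /* second argument */
	BPF_EMIT_CALL(bpf_hypothetical_helper), /* JIT emits just the call */
	BPF_ALU64_REG(BPF_MOV, R6, R0),         /* return value is in R0 */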

Not so when an eBPF instruction is implemented by a function call,
like ALU64 DIV in a 32-bit JIT. In this case, the function call is
unexpected by the eBPF assembler, and must be invisible to it. Now the
JIT must take care of saving all caller-saved registers on the stack,
putting the operands in the right argument registers, putting the
return value in the destination register, and finally restoring all
caller-saved registers without overwriting the computed result.
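
As a rough sketch of what a 32-bit JIT has to do here (hypothetical
code, not taken from any in-tree JIT; the emit_*() helpers, register
names and __bpf_div64() are all placeholders):

	/* Lower ALU64 BPF_DIV dst, src into a call to a helper
	 * u64 __bpf_div64(u64 dividend, u64 divisor).
	 */
	static void emit_div64_call(struct jit_ctx *ctx, u8 dst, u8 src)
	{
		/* Save every caller-saved native register holding a live
		 * eBPF register, except the pair mapped to the destination.
		 */
		emit_push_caller_saved(ctx, dst);

		/* Put the two 64-bit operands in the argument registers
		 * required by the native calling convention.
		 */
		emit_mov_arg_pair(ctx, ARG0_LO, ARG0_HI, dst);
		emit_mov_arg_pair(ctx, ARG1_LO, ARG1_HI, src);

		/* Emit the hidden call. */
		emit_call(ctx, __bpf_div64);

		/* Move the 64-bit result into the destination pair. */
		emit_mov_result_pair(ctx, dst, RET_LO, RET_HI);

		/* Restore the saved registers without clobbering the
		 * result that now lives in the destination pair.
		 */
		emit_pop_caller_saved(ctx, dst);
	}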

The test checks that all other registers retain their values after such
a hidden function call. However, one register will contain the result.
In order to verify that all registers are saved and restored properly,
we must vary the destination register and run the test twice. It is not
the result of the operation that is tested, but the absence of possible
side effects.

I can put a more elaborate description in the comment to explain this.

>
> > +     {
> > +             /*
> > +              * Test 32-bit JITs that implement complex ALU64 operations as
> > +              * function calls R0 = f(R1, R2), and must re-arrange operands.
> > +              */
> > +#define NUMER 0xfedcba9876543210ULL
> > +#define DENOM 0x0123456789abcdefULL
> > +             "ALU64_DIV X: Operand register permutations",
> > +             .u.insns_int = {
> > +                     /* R0 / R2 */
> > +                     BPF_LD_IMM64(R0, NUMER),
> > +                     BPF_LD_IMM64(R2, DENOM),
> > +                     BPF_ALU64_REG(BPF_DIV, R0, R2),
> > +                     BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1),
> > +                     BPF_EXIT_INSN(),
> > +                     /* R1 / R0 */
> > +                     BPF_LD_IMM64(R1, NUMER),
> > +                     BPF_LD_IMM64(R0, DENOM),
> > +                     BPF_ALU64_REG(BPF_DIV, R1, R0),
> > +                     BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
> > +                     BPF_EXIT_INSN(),
> > +                     /* R0 / R1 */
> > +                     BPF_LD_IMM64(R0, NUMER),
> > +                     BPF_LD_IMM64(R1, DENOM),
> > +                     BPF_ALU64_REG(BPF_DIV, R0, R1),
> > +                     BPF_JMP_IMM(BPF_JEQ, R0, NUMER / DENOM, 1),
> > +                     BPF_EXIT_INSN(),
> > +                     /* R2 / R0 */
> > +                     BPF_LD_IMM64(R2, NUMER),
> > +                     BPF_LD_IMM64(R0, DENOM),
> > +                     BPF_ALU64_REG(BPF_DIV, R2, R0),
> > +                     BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1),
> > +                     BPF_EXIT_INSN(),
> > +                     /* R2 / R1 */
> > +                     BPF_LD_IMM64(R2, NUMER),
> > +                     BPF_LD_IMM64(R1, DENOM),
> > +                     BPF_ALU64_REG(BPF_DIV, R2, R1),
> > +                     BPF_JMP_IMM(BPF_JEQ, R2, NUMER / DENOM, 1),
> > +                     BPF_EXIT_INSN(),
> > +                     /* R1 / R2 */
> > +                     BPF_LD_IMM64(R1, NUMER),
> > +                     BPF_LD_IMM64(R2, DENOM),
> > +                     BPF_ALU64_REG(BPF_DIV, R1, R2),
> > +                     BPF_JMP_IMM(BPF_JEQ, R1, NUMER / DENOM, 1),
> > +                     BPF_EXIT_INSN(),
> > +                     BPF_LD_IMM64(R0, 1),
>
> Do we need this BPF_LD_IMM64(R0, 1)?
> First, if we have it, and next "BPF_ALU64_REG(BPF_DIV, R1, R1)"
> generates incorrect value and exit and then you will get
> exit value 1, which will signal the test success.
>
> Second, if you don't have this R0 = 1, R0 will be DENOM
> and you will be fine.

Good catch! No, it should not be there. It is probably left over from
earlier debugging, or a copy-and-paste error. I'll remove it.
