Date: Tue, 5 May 2020 17:03:19 -0700
From: Luke Nelson <lukenels@...washington.edu>
To: bpf@...r.kernel.org
Cc: Luke Nelson <luke.r.nels@...il.com>, Xi Wang <xi.wang@...il.com>,
	Björn Töpel <bjorn.topel@...il.com>,
	Paul Walmsley <paul.walmsley@...ive.com>,
	Palmer Dabbelt <palmer@...belt.com>,
	Albert Ou <aou@...s.berkeley.edu>,
	Alexei Starovoitov <ast@...nel.org>,
	Daniel Borkmann <daniel@...earbox.net>,
	Martin KaFai Lau <kafai@...com>,
	Song Liu <songliubraving@...com>,
	Yonghong Song <yhs@...com>,
	Andrii Nakryiko <andriin@...com>,
	John Fastabend <john.fastabend@...il.com>,
	KP Singh <kpsingh@...omium.org>,
	netdev@...r.kernel.org,
	linux-riscv@...ts.infradead.org,
	linux-kernel@...r.kernel.org
Subject: [PATCH bpf-next 3/4] bpf, riscv: Optimize BPF_JMP BPF_K when imm == 0 on RV64

This patch adds an optimization to BPF_JMP (32- and 64-bit) BPF_K for
when the BPF immediate is zero. When the immediate is zero, the code
can directly use the RISC-V zero register instead of loading a zero
immediate to a temporary register first. (An illustrative sketch of
the emitted code follows the patch.)

Co-developed-by: Xi Wang <xi.wang@...il.com>
Signed-off-by: Xi Wang <xi.wang@...il.com>
Signed-off-by: Luke Nelson <luke.r.nels@...il.com>
---
 arch/riscv/net/bpf_jit_comp64.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index c3ce9a911b66..b07cef952019 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -796,7 +796,13 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 	case BPF_JMP32 | BPF_JSET | BPF_K:
 		rvoff = rv_offset(i, off, ctx);
 		s = ctx->ninsns;
-		emit_imm(RV_REG_T1, imm, ctx);
+		if (imm) {
+			emit_imm(RV_REG_T1, imm, ctx);
+			rs = RV_REG_T1;
+		} else {
+			/* If imm is 0, simply use zero register. */
+			rs = RV_REG_ZERO;
+		}
 		if (!is64) {
 			if (is_signed_bpf_cond(BPF_OP(code)))
 				emit_sext_32_rd(&rd, ctx);
@@ -811,11 +817,10 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		if (BPF_OP(code) == BPF_JSET) {
 			/* Adjust for and */
 			rvoff -= 4;
-			emit(rv_and(RV_REG_T1, rd, RV_REG_T1), ctx);
-			emit_branch(BPF_JNE, RV_REG_T1, RV_REG_ZERO, rvoff,
-				    ctx);
+			emit(rv_and(rs, rd, rs), ctx);
+			emit_branch(BPF_JNE, rs, RV_REG_ZERO, rvoff, ctx);
 		} else {
-			emit_branch(BPF_OP(code), rd, RV_REG_T1, rvoff, ctx);
+			emit_branch(BPF_OP(code), rd, rs, rvoff, ctx);
 		}
 		break;

-- 
2.17.1
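
For concreteness, here is an illustrative sketch (not taken from the
patch itself) of what the two code paths emit for a conditional jump
against a zero immediate, say "if r1 == 0 goto +off" (BPF_JMP | BPF_JEQ
| BPF_K with imm == 0). The assembly in the comments assumes the JIT's
mapping of BPF r1 to the RISC-V argument register a0:

	/* Before this patch: materialize 0 in the temporary t1, then
	 * branch against it.
	 */
	emit_imm(RV_REG_T1, imm, ctx);                     /* addi t1, zero, 0 */
	emit_branch(BPF_JEQ, rd, RV_REG_T1, rvoff, ctx);   /* beq  a0, t1, off */

	/* After this patch: branch directly against the hard-wired zero
	 * register x0; one instruction shorter, and t1 stays untouched.
	 */
	emit_branch(BPF_JEQ, rd, RV_REG_ZERO, rvoff, ctx); /* beq  a0, zero, off */

The BPF_JSET case remains correct as well: with rs == RV_REG_ZERO, the
emitted "and zero, rd, zero" writes to x0 (which RISC-V discards) and
the following "bne zero, zero" is never taken, matching the fact that
rd & 0 != 0 is always false.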