Message-ID: <YtZsgMl64mWbDZUG@worktop.programming.kicks-ass.net>
Date: Tue, 19 Jul 2022 10:34:08 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>, X86 ML <x86@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Josh Poimboeuf <jpoimboe@...nel.org>,
Andrew Cooper <Andrew.Cooper3@...rix.com>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Johannes Wikner <kwikner@...z.ch>,
Alyssa Milburn <alyssa.milburn@...ux.intel.com>,
Jann Horn <jannh@...gle.com>, "H.J. Lu" <hjl.tools@...il.com>,
Joao Moreira <joao.moreira@...el.com>,
Joseph Nuzman <joseph.nuzman@...el.com>,
Steven Rostedt <rostedt@...dmis.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>
Subject: Re: [patch 37/38] x86/bpf: Emit call depth accounting if required
On Mon, Jul 18, 2022 at 10:30:01PM -0700, Alexei Starovoitov wrote:
> On Sat, Jul 16, 2022 at 4:18 PM Thomas Gleixner <tglx@...utronix.de> wrote:
> > @@ -1431,19 +1437,26 @@ st: if (is_imm8(insn->off))
> > break;
> >
> > /* call */
> > - case BPF_JMP | BPF_CALL:
> > + case BPF_JMP | BPF_CALL: {
> > + int offs;
> > +
> > func = (u8 *) __bpf_call_base + imm32;
> > if (tail_call_reachable) {
> > /* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
> > EMIT3_off32(0x48, 0x8B, 0x85,
> > -round_up(bpf_prog->aux->stack_depth, 8) - 8);
> > - if (!imm32 || emit_call(&prog, func, image + addrs[i - 1] + 7))
> > + if (!imm32)
> > return -EINVAL;
> > + offs = 7 + x86_call_depth_emit_accounting(&prog, func);
>
> It's a bit hard to read all the macro magic in patches 28-30,
> but I suspect the asm inside
> callthunk_desc.template
> that will be emitted here before the call
> will do
> some math on %rax
> movq %rax, PER_CPU_VAR(__x86_call_depth).
>
> Only %rax register is scratched by the callthunk_desc, right?
> If so, it's ok for all cases except this one.
> See the comment a few lines above
> after if (tail_call_reachable)
> and commit ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall
> handling in JIT")
> We use %rax to keep the tail_call count.
> The callthunk_desc would need to preserve %rax.
> I guess extra push %rax/pop %rax would do it.
The accounting template is basically:
sarq $5, PER_CPU_VAR(__x86_call_depth)
No registers used (with debugging on it's a few more memops).