Message-ID: <mb61p34r23dqa.fsf@kernel.org>
Date: Tue, 30 Apr 2024 18:30:21 +0000
From: Puranjay Mohan <puranjay@...nel.org>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc: Catalin Marinas <catalin.marinas@....com>, Will Deacon
<will@...nel.org>, Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann
<daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>, Martin KaFai
Lau <martin.lau@...ux.dev>, Eduard Zingerman <eddyz87@...il.com>, Song Liu
<song@...nel.org>, Yonghong Song <yonghong.song@...ux.dev>, John Fastabend
<john.fastabend@...il.com>, KP Singh <kpsingh@...nel.org>, Stanislav
Fomichev <sdf@...gle.com>, Hao Luo <haoluo@...gle.com>, Jiri Olsa
<jolsa@...nel.org>, Zi Shen Lim <zlim.lnx@...il.com>, Xu Kuohai
<xukuohai@...wei.com>, Florent Revest <revest@...omium.org>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
bpf@...r.kernel.org
Subject: Re: [PATCH bpf-next v3 1/2] arm64, bpf: add internal-only MOV
instruction to resolve per-CPU addrs

Andrii Nakryiko <andrii.nakryiko@...il.com> writes:

> On Fri, Apr 26, 2024 at 9:55 AM Puranjay Mohan <puranjay@...nel.org> wrote:
>>
>> Andrii Nakryiko <andrii.nakryiko@...il.com> writes:
>>
>> > On Fri, Apr 26, 2024 at 5:14 AM Puranjay Mohan <puranjay@...nel.org> wrote:
>> >>
>> >> From: Puranjay Mohan <puranjay12@...il.com>
>> >>
>> >> Support an instruction for resolving absolute addresses of per-CPU
>> >> data from their per-CPU offsets. This instruction is internal-only;
>> >> users are not allowed to use it directly. For now, it will only be
>> >> used for internal inlining optimizations between the BPF verifier
>> >> and the BPF JITs.
>> >>
>> >> Since commit 7158627686f0 ("arm64: percpu: implement optimised pcpu
>> >> access using tpidr_el1"), the per-cpu offset for the CPU is stored in
>> >> the tpidr_el1/2 register of that CPU.
>> >>
>> >> To support this BPF instruction in the ARM64 JIT, the following ARM64
>> >> instructions are emitted:
>> >>
>> >> mov dst, src         // Move src to dst, if src != dst
>> >> mrs tmp, tpidr_el1/2 // Read this CPU's per-CPU offset into tmp.
>> >> add dst, dst, tmp    // Add the per-CPU offset to dst.
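>> >>
>> >> A minimal sketch of how the arm64 JIT could lower this in
>> >> build_insn(), assuming the A64_MRS_TPIDR_EL1()/A64_MRS_TPIDR_EL2()
>> >> encoding helpers this patch would add (names illustrative, not
>> >> verified against the final patch):
>> >>
>> >> 	case BPF_ALU64 | BPF_MOV | BPF_X:
>> >> 		/* Internal-only per-CPU address resolution */
>> >> 		if (insn->off == BPF_ADDR_PERCPU) {
>> >> 			if (dst != src)
>> >> 				emit(A64_MOV(1, dst, src), ctx);
>> >> 			/* VHE kernels run at EL2 and use tpidr_el2 */
>> >> 			if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
>> >> 				emit(A64_MRS_TPIDR_EL2(tmp), ctx);
>> >> 			else
>> >> 				emit(A64_MRS_TPIDR_EL1(tmp), ctx);
>> >> 			emit(A64_ADD(1, dst, dst, tmp), ctx);
>> >> 			break;
>> >> 		}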
>> >>
>> >> To measure the performance improvement provided by this change, the
>> >> benchmark in [1] was used:
>> >>
>> >> Before:
>> >> glob-arr-inc : 23.597 ± 0.012M/s
>> >> arr-inc : 23.173 ± 0.019M/s
>> >> hash-inc : 12.186 ± 0.028M/s
>> >>
>> >> After:
>> >> glob-arr-inc : 23.819 ± 0.034M/s
>> >> arr-inc : 23.285 ± 0.017M/s
>> >
>> > I still expected a better improvement (glob-arr-inc's results
>> > improved more than arr-inc's, which is completely different from
>> > x86-64), but it's still a good thing to support this for arm64, of
>> > course.
>> >
>> > ack for generic parts I can understand:
>> >
>> > Acked-by: Andrii Nakryiko <andrii@...nel.org>
>> >
>>
>> I will have to do more research to find out why we don't see a bigger
>> improvement.
>>
>> But this is what is happening here:
>>
>> This was the complete picture before inlining:
>>
>> int cpu = bpf_get_smp_processor_id();
>> mov x10, #0xffffffffffffd4a8
>> movk x10, #0x802c, lsl #16
>> movk x10, #0x8000, lsl #32
>> blr x10 ---------------------------------------> nop
>> nop
>> adrp x0, 0xffff800082128000
>> mrs x1, tpidr_el1
>> add x0, x0, #0x8
>> ldrsw x0, [x0, x1]
>> <----------------------------------------ret
>> add x7, x0, #0x0
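>>
>> (At the BPF level, nothing has been rewritten at this point: the
>> program still carries the plain helper call, roughly what filter.h's
>> BPF_EMIT_CALL() builds:
>>
>> 	BPF_EMIT_CALL(bpf_get_smp_processor_id)  /* BPF_JMP | BPF_CALL */
>>
>> and it is this call insn that the JIT compiles into the mov/movk/blr
>> sequence above.)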
>>
>>
>> Now we have:
>>
>> int cpu = bpf_get_smp_processor_id();
>> mov x7, #0xffff8000ffffffff
>> movk x7, #0x8212, lsl #16
>> movk x7, #0x8008
>> mrs x10, tpidr_el1
>> add x7, x7, x10
>> ldr w7, [x7]
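>>
>> The BPF-level rewrite producing this should be the verifier-side
>> inlining from the other patch of the series; a sketch of the
>> substituted sequence (my reconstruction from the assembly above, not
>> the literal patch; cpu_number is arm64's per-CPU variable holding the
>> CPU id):
>>
>> 	struct bpf_insn insn_buf[] = {
>> 		/* r0 = per-CPU offset-based address of cpu_number */
>> 		BPF_LD_IMM64(BPF_REG_0, (long)&cpu_number),
>> 		/* r0 += this CPU's base offset (the new internal MOV:
>> 		 * the mrs + add above)
>> 		 */
>> 		BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0),
>> 		/* r0 = *(u32 *)r0, i.e. the CPU id (the ldr w7 above) */
>> 		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
>> 	};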
>>
>>
>> So, we have removed multiple instructions, including a branch and a
>> return, but I was expecting to see more improvement. This benchmark
>> was taken in a KVM-based virtual machine; maybe I would see a bigger
>> improvement if I ran it on bare metal?
>
> I see, yeah, I think it might change significantly. I remember back
> when I was benchmarking the BPF ringbuf: I was getting very, very
> different results inside QEMU vs. bare metal, and I don't mean just in
> absolute numbers. QEMU/KVM seems to change a lot of things when it
> comes to contention, atomic instructions, etc. Anyway, for
> benchmarking, always try to use bare metal.
>
I found the solution to this. I am seeing much better performance when
implementing this inlining in the JIT itself, through another method
similar to what I did for riscv; see [1].

[1] https://lore.kernel.org/all/20240430175834.33152-3-puranjay@kernel.org/

I will do the same for ARM64 in v5 of this series.
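
For reference, the riscv patch in [1] intercepts the helper call in the
JIT and loads the CPU number directly from thread_info via the tp
register. A comparable arm64 sketch (hypothetical, not the actual v5
code; sp_el0 holds the current task pointer in the kernel, and
A64_MRS_SP_EL0()/A64_LDR32I() are assumed encoding helpers):

	case BPF_JMP | BPF_CALL:
		/* Inline bpf_get_smp_processor_id(): the CPU id lives in
		 * current's thread_info.cpu, reachable through sp_el0.
		 * r0/tmp are the usual bpf2a64[BPF_REG_0]/bpf2a64[TMP_REG_1].
		 */
		if (insn->src_reg == 0 &&
		    insn->imm == BPF_FUNC_get_smp_processor_id) {
			emit(A64_MRS_SP_EL0(tmp), ctx);
			emit(A64_LDR32I(r0, tmp,
					offsetof(struct thread_info, cpu)), ctx);
			break;
		}
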
Thanks,
Puranjay