Message-ID: <874jl2up49.fsf@all.your.base.are.belong.to.us>
Date:   Mon, 14 Aug 2023 08:14:46 +0200
From:   Björn Töpel <bjorn@...nel.org>
To:     Puranjay Mohan <puranjay12@...il.com>, paul.walmsley@...ive.com,
        palmer@...belt.com, aou@...s.berkeley.edu, pulehui@...wei.com,
        conor.dooley@...rochip.com, ast@...nel.org, daniel@...earbox.net,
        andrii@...nel.org, martin.lau@...ux.dev, song@...nel.org,
        yhs@...com, kpsingh@...nel.org, bpf@...r.kernel.org,
        linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org
Cc:     puranjay12@...il.com
Subject: Re: [PATCH bpf-next 0/2] bpf, riscv: use BPF prog pack allocator in
 BPF JIT

Björn Töpel <bjorn@...nel.org> writes:

> Puranjay Mohan <puranjay12@...il.com> writes:
>
>> BPF programs currently consume a page each on RISCV. For systems with many BPF
>> programs, this adds significant pressure to the instruction TLB, and high iTLB
>> pressure usually slows down the whole system.
>>
>> Song Liu introduced the BPF prog pack allocator[1] to mitigate the above issue.
>> It packs multiple BPF programs into a single huge page. It is currently only
>> enabled for the x86_64 BPF JIT.
>>
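For context, a JIT consumes the pack allocator roughly as follows. This is a
minimal sketch based on the generic helpers in kernel/bpf/core.c (as used by the
x86_64 JIT), not the exact code in this series; the function name is just
illustrative, and it assumes the arch provides bpf_arch_text_copy():

#include <linux/bpf.h>
#include <linux/filter.h>

/* Sketch only: emit into a writable shadow buffer, then let the core
 * helpers copy it into the read-only, shared huge-page "pack". */
static struct bpf_prog *jit_with_prog_pack(struct bpf_prog *prog,
					   unsigned int image_size,
					   bpf_jit_fill_hole_t fill_insns)
{
	struct bpf_binary_header *ro_header, *rw_header;
	u8 *ro_image, *rw_image;

	/* Carve the final image out of a shared huge page and get a
	 * writable shadow buffer to JIT into. */
	ro_header = bpf_jit_binary_pack_alloc(image_size, &ro_image,
					      sizeof(u32), &rw_header,
					      &rw_image, fill_insns);
	if (!ro_header)
		return NULL;

	/* ... emit the JITed instructions into rw_image here ... */

	/* Copies rw_image into the pack via bpf_arch_text_copy() and
	 * frees the shadow buffer. */
	if (bpf_jit_binary_pack_finalize(prog, ro_header, rw_header))
		return NULL;

	prog->bpf_func = (void *)ro_image;
	prog->jited = 1;
	return prog;
}

Since many programs then share a single huge page, the number of executable
pages (and hence iTLB entries) needed for JITed code drops accordingly.
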
>> I enabled this allocator on the ARM64 BPF JIT[2]. It is being reviewed now.
>>
>> This patch series enables the BPF prog pack allocator for the RISCV BPF JIT.
>> This series needs a patch[3] from the ARM64 series to work.
>>
>> ======================================================
>> Performance Analysis of prog pack allocator on RISCV64
>> ======================================================
>>
>> Test setup:
>> ===========
>>
>> Host machine: Debian GNU/Linux 11 (bullseye)
>> Qemu Version: QEMU emulator version 8.0.3 (Debian 1:8.0.3+dfsg-1)
>> u-boot-qemu Version: 2023.07+dfsg-1
>> opensbi Version: 1.3-1
>>
>> To test the performance of the BPF prog pack allocator on RV, a stresser
>> tool[4] linked below was built. This tool loads 8 BPF programs on the system and
>> triggers 5 of them in an infinite loop by doing system calls.
>>
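The triggering side of such a stresser boils down to hammering a cheap syscall
so that the attached programs keep executing. A hypothetical minimal version
(not the actual tool from [4]) looks like:

/* Hypothetical minimal trigger loop (not the actual stresser from [4]).
 * Assumes the BPF programs are attached to a syscall tracepoint/kprobe,
 * so every getpid() call executes them. */
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
	for (;;)
		syscall(SYS_getpid);
	return 0;
}
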
>> The runner script starts 20 instances of the above, which together load
>> 8*20=160 BPF programs on the system, 5*20=100 of which are constantly
>> triggered. The script is passed a command, which it runs in the above
>> environment.
>>
>> The script was run with the following perf command:
>> ./run.sh "perf stat -a \
>>         -e iTLB-load-misses \
>>         -e dTLB-load-misses  \
>>         -e dTLB-store-misses \
>>         -e instructions \
>>         --timeout 60000"
>>
>> The output of the above command is discussed below before and after enabling the
>> BPF prog pack allocator.
>>
>> The tests were run on qemu-system-riscv64 with 8 CPUs and 16G of memory. The
>> rootfs was created using Bjorn's riscv-cross-builder[5] docker container linked
>> below.
>
> Back in the saddle! Sorry for the horribly late reply...
>
> Did you run the test_progs kselftest, and did it pass w/o regressions? I
> ran a test without/with your series (plus the patch from the arm64
> series that you pointed out), and I'm getting regressions with this
> series:
>
> w/o Summary: 318/3114 PASSED, 27 SKIPPED, 60 FAILED
> w/  Summary: 299/3026 PASSED, 33 SKIPPED, 79 FAILED
>
> I did the test on commit 4c75bf7e4a0e ("Merge tag
> 'kbuild-fixes-v6.5-2' of
> git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild").
>
> I'm re-running, and investigating now.

I had a bad environment for the rebuild; a proper rebuild worked. No
regressions. Sorry for the noise!
