Message-Id: <161298984230.3996968.4640881413498941015.b4-ty@chromium.org>
Date: Wed, 10 Feb 2021 12:44:07 -0800
From: Kees Cook <keescook@...omium.org>
To: wanghongzhe <wanghongzhe@...wei.com>, luto@...capital.net
Cc: Kees Cook <keescook@...omium.org>, bpf@...r.kernel.org, yhs@...com,
netdev@...r.kernel.org, ast@...nel.org,
linux-kernel@...r.kernel.org, andrii@...nel.org,
daniel@...earbox.net, songliubraving@...com, kafai@...com,
kpsingh@...nel.org, john.fastabend@...il.com, wad@...omium.org
Subject: Re: [PATCH v2] seccomp: Improve performance by optimizing rmb()
On Fri, 5 Feb 2021 11:34:09 +0800, wanghongzhe wrote:
> Following Kees's suggestion, we started with a patch that simply replaces
> rmb() with smp_rmb() and ran a performance test with UnixBench. The results
> showed about 2.53% overhead for the rmb() case compared to the smp_rmb()
> one, on an x86-64 kernel with CONFIG_SMP enabled running inside a qemu-kvm
> VM. The test is the "syscall" testcase in UnixBench, which executes 5
> syscalls in a loop for a fixed duration (100 seconds in our test) and
> counts the total number of executions of this 5-syscall sequence. We set a
> seccomp filter with allow rules for every syscall used in the test (so
> they take the bitmap path) to make sure the rmb() is actually executed.
> The details of the test:
>
> [...]
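
For illustration only (not part of the patch), here is a minimal userspace
sketch of the kind of allow-all filter the benchmark above describes: a
one-instruction classic BPF program that returns SECCOMP_RET_ALLOW, installed
via prctl(), followed by a cheap syscall loop. The loop body and iteration
count are made up for this sketch; the actual test used UnixBench's
5-syscall sequence.

#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int main(void)
{
	/* One-instruction cBPF program: allow every syscall. */
	struct sock_filter insns[] = {
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
	};
	struct sock_fprog prog = {
		.len = sizeof(insns) / sizeof(insns[0]),
		.filter = insns,
	};

	/* Required so an unprivileged task may install a filter. */
	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
		return 1;
	if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog))
		return 1;

	/* Cheap syscalls in a loop; each one now passes through seccomp. */
	for (long i = 0; i < 1000000; i++)
		getpid();

	printf("done\n");
	return 0;
}

Since the filter's verdict never depends on the syscall arguments, the
constant-action bitmap added in v5.11 marks every syscall as allowed at
attach time, so the per-syscall cost is dominated by the mode check and the
read barrier this patch optimizes.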
Applied to for-next/seccomp, thanks!
[1/1] seccomp: Improve performance by optimizing rmb()
https://git.kernel.org/kees/c/a381b70a1cf8
--
Kees Cook