Date:   Thu, 13 May 2021 14:45:59 -0400
From:   Jamal Hadi Salim <jhs@...atatu.com>
To:     Joe Stringer <joe@...ium.io>, Cong Wang <xiyou.wangcong@...il.com>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        bpf <bpf@...r.kernel.org>,
        Xiongchun Duan <duanxiongchun@...edance.com>,
        Dongdong Wang <wangdongdong.6@...edance.com>,
        Muchun Song <songmuchun@...edance.com>,
        Cong Wang <cong.wang@...edance.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        Pedro Tammela <pctammela@...atatu.com>
Subject: Re: [RFC Patch bpf-next] bpf: introduce bpf timer

On 2021-05-12 6:43 p.m., Jamal Hadi Salim wrote:

> 
> Will run some tests tomorrow to see the effect of batching vs nobatch
> and capture cost of syscalls and cpu.
> 

So here are some numbers:
Processor: Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz
This machine is very similar to the ones a real deployment
would run on.

Hyperthreading was turned off so we could dedicate the core to the
dumping process, and performance mode was on, so no frequency-scaling
meddling.
Tests were run about 3 times each; results were eye-balled to make
sure the deviation was reasonable.
100% of the one core was used just for dumping during each run.

bpftool does linear retrieval whereas our tool does batch dumping.
bpftool prints the dumped results; for our tool we just count
the number of entries retrieved (its cost would have been higher if
we actually printed). In any case, in the real setup there is
a processing cost which is much higher.

Summary: the dumping is problematic cost-wise as the number of
entries increases. While batching does improve things, it doesn't
solve our problem (like I said, we have up to 16M entries and most
of the time we are dumping useless things).

1M entries
----------

root@SUT:/home/jhs/git-trees/ftables/src# time ./ftables show system cache dev enp179s0f1 > /dev/null
real    0m0.320s
user    0m0.004s
sys     0m0.316s

root@SUT:/home/jhs/git-trees/ftables/src# time /home/jhs/git-trees/foobar/XDP/bpftool map dump id 3353 > /dev/null
real    0m5.419s
user    0m4.347s
sys     0m1.072s

4M entries
-----------
root@SUT:/home/jhs/git-trees/ftables/src# time ./ftables show system cache dev enp179s0f1 > /dev/null
real    0m1.331s
user    0m0.004s
sys     0m1.325s

root@SUT:/home/jhs/git-trees/ftables/src# time /home/jhs/git-trees/foobar/XDP/bpftool map dump id 1178 > /dev/null
real    0m21.677s
user    0m17.269s
sys     0m4.408s

8M Entries
------------

root@SUT:/home/jhs/git-trees/ftables/src# time ./ftables show system cache dev enp179s0f1 > /dev/null
real    0m2.678s
user    0m0.004s
sys     0m2.672s

root@SUT:/home/jhs/git-trees/ftables/src# time /home/jhs/git-trees/foobar/XDP/bpftool map dump id 2636 > /dev/null
real    0m43.267s
user    0m34.450s
sys     0m8.817s

16M entries
------------
root@SUT:/home/jhs/git-trees/ftables/src# time ./ftables show system cache dev enp179s0f1 > /dev/null
real    0m5.396s
user    0m0.004s
sys     0m5.389s

root@SUT:/home/jhs/git-trees/ftables/src# time /home/jhs/git-trees/foobar/XDP/bpftool map dump id 1919 > /dev/null
real    1m27.039s
user    1m8.371s
sys     0m18.668s



cheers,
jamal
