Date: Thu, 14 Jan 2021 10:22:27 -0500
From: Paul Moore <paul@...l-moore.com>
To: yang.yang29@....com.cn
Cc: Eric Paris <eparis@...hat.com>, linux-audit@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: Fw:Re:[RFC,v1,1/1] audit: speed up syscall rule match while exiting syscall

On Thu, Jan 14, 2021 at 8:25 AM <yang.yang29@....com.cn> wrote:
>
> Performance measurements:
>
> 1. Environment
> CPU: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
> Linux kernel version: 5.11-rc3
> Audit version: 2.8.4
>
> 2. Results
>
> 2.1 Syscall invocations
> Test method:
> Run the command "top" with no load.
> Add rules like "auditctl -a always,exit -F arch=b64 -S chmod -F auid=[number]" that never match.
> Use "perf record -Rg -t [top's pid] sleep 900" to measure audit_filter_syscall()'s share of execution time.

Thanks for providing some performance numbers so quickly, a few
comments and thoughts below ...

> audit_filter_syscall() ratio with 100 rules:
> before this patch: 15.29%
> after this patch: 0.88%, a reduction of 14.41 percentage points.
> audit_filter_syscall() ratio with CIS[1] rules:
> before this patch: 2.25%
> after this patch: 1.93%, a reduction of 0.32 percentage points.
> audit_filter_syscall() ratio with 10 rules:
> before this patch: 0.94%
> after this patch: 1.02%, an increase of 0.08 percentage points.
> audit_filter_syscall() ratio with 1 rule:
> before this patch: 0.20%
> after this patch: 0.88%, an increase of 0.68 percentage points.

If we assume the CIS rules to be a reasonable common case (I'm not
sure if that is correct or not, but we'll skip that discussion for
now), we see a performance improvement of 0.32%, yes?  We also see a
performance regression with a small number of syscall rules that
equalizes above ten rules, yes?

On your system can you provide some absolute numbers?  For example,
what does 0.32% equate to in terms of wall clock time for a given
syscall invocation?

> Analysis:
> With 1 rule, performance is worse after this patch because of the
> mutex_lock()/mutex_unlock() calls, but adding only one rule seems
> unusual.
> With more rules, performance improves after this patch; a typical
> case is the CIS benchmark.
>
> 2.2 Rule changes
> Test method:
> Call ktime_get_real_ts64() before and after audit_add_rule()/audit_del_rule() to measure elapsed time.
> Add/delete rules with the "auditctl" command. Each test was run 10 times and the results averaged.

In this case I'm less concerned about micro benchmarks, and more
interested in the wall clock time difference when running auditctl to
add/remove rules.  The difference here in the micro benchmark is not
trivial, but with a delta of 4~5us it is possible that it is a
small(er) percentage when compared to the total time spent executing
auditctl.

> audit_add_rule() time:
> before this patch: 3120ns
> after this patch: 7783ns, an increase of 149%.
> audit_del_rule() time:
> before this patch: 3510ns
> after this patch: 8519ns, an increase of 143%.
>
> Analysis:
> After this patch, rule change time increases noticeably, but rule
> changes should not happen very often.
>
> [1] CIS publishes Linux benchmarks for security hardening.
> https://www.cisecurity.org/benchmark/distribution_independent_linux/

--
paul moore
www.paul-moore.com
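[Editorial note for context: the rule-change micro-benchmark method described above, ktime_get_real_ts64() sampled around audit_add_rule(), corresponds to a kernel-side pattern roughly like the following. This is an illustrative fragment only, not the actual instrumentation from the patch; the variable names are assumptions.]

```c
/* Illustrative only: timestamp the rule insertion path as described
 * in the mail.  Not standalone-runnable; kernel context assumed. */
struct timespec64 before, after;
s64 delta_ns;
int err;

ktime_get_real_ts64(&before);
err = audit_add_rule(entry);    /* takes audit_filter_mutex internally */
ktime_get_real_ts64(&after);

delta_ns = timespec64_to_ns(&after) - timespec64_to_ns(&before);
pr_info("audit_add_rule: %lld ns (err=%d)\n", delta_ns, err);
```

The reported 3120ns-vs-7783ns figures are averages of ten such samples taken while driving auditctl from userspace.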