Date:   Mon, 13 Feb 2017 06:45:07 +0100
From:   Mike Galbraith <efault@....de>
To:     Tejun Heo <tj@...nel.org>, Peter Zijlstra <peterz@...radead.org>
Cc:     lizefan@...wei.com, hannes@...xchg.org, mingo@...hat.com,
        pjt@...gle.com, luto@...capital.net, cgroups@...r.kernel.org,
        linux-kernel@...r.kernel.org, kernel-team@...com,
        lvenanci@...hat.com,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCHSET for-4.11] cgroup: implement cgroup v2 thread mode

On Sun, 2017-02-12 at 07:59 +0100, Mike Galbraith wrote:
> On Sun, 2017-02-12 at 14:05 +0900, Tejun Heo wrote:
> 
> > > I think cgroup tree depth is a more significant issue; because of
> > > hierarchy we often do tree walks (up-to-root or down-to-task).
> > > 
> > > So creating elaborate trees is something I try not to do.
> > 
> > So, as long as the depth stays reasonable (single digit or lower),
> > what we try to do is keep tree traversal operations aggregated or
> > on slow paths.  There are still places where this overhead shows up
> > (e.g. the block controllers aren't too optimized), but it isn't
> > particularly difficult to make a handful of layers not matter at
> > all.
> 
> A handful of cpu bean counting layers stings considerably.

BTW, that overhead is also why merging cpu/cpuacct is not as wonderful
as it may seem on paper.  If all you want is accounting, you may have
nothing to gain from group scheduling (in fact it may wreck
performance), yet you'll pay for it regardless.
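
FWIW, the per-layer cost is visible right in the fair class: with
CONFIG_FAIR_GROUP_SCHED, every enqueue (so every wakeup in the
pipe-test runs below) walks the sched_entity chain toward the root,
one step per cgroup layer.  Trimmed to the bone (simplified from
kernel/sched/fair.c, details elided):

#define for_each_sched_entity(se) \
        for (; se; se = se->parent)

static void
enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
{
        struct sched_entity *se = &p->se;

        /* one iteration per cgroup layer between task and root */
        for_each_sched_entity(se) {
                if (se->on_rq)
                        break;
                enqueue_entity(cfs_rq_of(se), se, flags);
        }
}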
 
> homer:/abuild # pipe-test 1                          
> 2.010057 usecs/loop -- avg 2.010057 995.0 KHz
> 2.006630 usecs/loop -- avg 2.009714 995.2 KHz
> 2.127118 usecs/loop -- avg 2.021455 989.4 KHz
> 2.256244 usecs/loop -- avg 2.044934 978.0 KHz
> 1.993693 usecs/loop -- avg 2.039810 980.5 KHz
> ^C
> homer:/abuild # cgexec -g cpu:hurt pipe-test 1
> 2.771641 usecs/loop -- avg 2.771641 721.6 KHz
> 2.432333 usecs/loop -- avg 2.737710 730.5 KHz
> 2.750493 usecs/loop -- avg 2.738988 730.2 KHz
> 2.663203 usecs/loop -- avg 2.731410 732.2 KHz
> 2.762564 usecs/loop -- avg 2.734525 731.4 KHz
> ^C
> homer:/abuild # cgexec -g cpu:hurt/pain pipe-test 1
> 2.967201 usecs/loop -- avg 2.967201 674.0 KHz
> 3.049012 usecs/loop -- avg 2.975382 672.2 KHz
> 3.031226 usecs/loop -- avg 2.980966 670.9 KHz
> 2.954259 usecs/loop -- avg 2.978296 671.5 KHz
> 2.933432 usecs/loop -- avg 2.973809 672.5 KHz
> ^C
> ...
> homer:/abuild # cgexec -g cpu:hurt/pain/ouch/moan/groan pipe-test 1
> 4.417044 usecs/loop -- avg 4.417044 452.8 KHz
> 4.494913 usecs/loop -- avg 4.424831 452.0 KHz
> 4.253861 usecs/loop -- avg 4.407734 453.7 KHz
> 4.378059 usecs/loop -- avg 4.404766 454.1 KHz
> 4.179895 usecs/loop -- avg 4.382279 456.4 KHz
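
For anyone without the tool: a minimal stand-in for pipe-test
(assuming nothing fancier than the classic two-task byte ping-pong
over a pipe pair; this is a sketch, not the actual tool) could be:

/* Minimal stand-in for pipe-test: parent and child ping-pong one
 * byte over a pipe pair, reporting the average round-trip cost. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>

int main(void)
{
        int ab[2], ba[2], i, loops = 1000000;
        char c = 0;
        struct timeval t0, t1;
        double us;

        if (pipe(ab) || pipe(ba))
                exit(1);

        if (fork() == 0) {
                /* child: echo every byte straight back */
                while (read(ab[0], &c, 1) == 1)
                        write(ba[1], &c, 1);
                exit(0);
        }

        gettimeofday(&t0, NULL);
        for (i = 0; i < loops; i++) {
                /* parent: ping, then block until the pong arrives */
                write(ab[1], &c, 1);
                read(ba[0], &c, 1);
        }
        gettimeofday(&t1, NULL);

        us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
        printf("%f usecs/loop\n", us / loops);
        return 0;
}

Run it bare, then under cgexec -g cpu:hurt/pain/ouch/moan/groan as
above (groups created beforehand, e.g. mkdir -p under the cpu
controller mount on a v1 setup), and the depth tax should reproduce.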
