Date:   Mon, 13 Mar 2017 15:26:21 -0400
From:   Tejun Heo <tj@...nel.org>
To:     Mike Galbraith <efault@....de>
Cc:     Peter Zijlstra <peterz@...radead.org>, lizefan@...wei.com,
        hannes@...xchg.org, mingo@...hat.com, pjt@...gle.com,
        luto@...capital.net, cgroups@...r.kernel.org,
        linux-kernel@...r.kernel.org, kernel-team@...com,
        lvenanci@...hat.com,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCHSET for-4.11] cgroup: implement cgroup v2 thread mode

Hello, Mike.

Sorry about the long delay.

On Mon, Feb 13, 2017 at 06:45:07AM +0100, Mike Galbraith wrote:
> > > So, as long as the depth stays reasonable (single digit or lower),
> > > what we try to do is keeping tree traversal operations aggregated or
> > > located on slow paths.  There still are places that this overhead
> > > shows up (e.g. the block controllers aren't too optimized) but it
> > > isn't particularly difficult to make a handful of layers not matter at
> > > all.
> > 
> > A handful of cpu bean counting layers stings considerably.

Hmm... yeah, I was trying to think about ways to avoid the full scheduling
overhead at each layer (the scheduler does a fair amount of work per layer
of the hierarchy) but I don't think it's possible to circumvent that
without introducing a whole lot of scheduling artifacts.

In a lot of workloads, the added overhead from several layers of CPU
controller doesn't seem to get in the way too much (most threads do
something other than scheduling, after all).  The only major issue we're
seeing in the fleet is the cgroup iteration in the idle rebalancing code
pushing up scheduling latency too much, but that's a different issue.
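
As a rough sketch of where the per-layer cost comes from (this is not the
actual fair-class code; the structs and helpers below are hypothetical and
only for illustration): each enqueue and dequeue has to walk and update
every ancestor group, so the per-operation work grows with the nesting
depth.

/*
 * Simplified, hypothetical illustration of hierarchical group
 * scheduling cost.  Not the kernel's implementation.
 */
struct group_rq {
	struct group_rq *parent;	/* NULL at the root runqueue */
	unsigned long load;		/* aggregate load of this group */
};

struct task {
	struct group_rq *grq;		/* group the task runs in */
	unsigned long load;
};

/*
 * Enqueueing one task touches every level of the hierarchy: update
 * each ancestor's load and redo its share bookkeeping.  A similar
 * O(depth) walk happens on dequeue and when picking the next task,
 * which is why a handful of cpu layers is not free.
 */
static void enqueue_task_hier(struct task *t)
{
	struct group_rq *grq;

	for (grq = t->grq; grq; grq = grq->parent) {
		grq->load += t->load;
		/* recompute_shares(grq);  (per-layer bookkeeping) */
	}
}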

Anyways, I understand that there are cases where people would want to
avoid any extra layers.  I'll continue on PeterZ's message.

> BTW, that overhead is also why merging cpu/cpuacct is not really as
> wonderful as it may seem on paper.  If you only want to account, you
> may not have anything to gain from group scheduling (in fact it may
> wreck performance), but you'll pay for it.

There's another reason why we would want accounting kept separate: the
weight-based controllers (cpu and io, currently) can't be enabled without
affecting scheduling behavior.  Accounting, however, is different in that
all the heavy parts of the operation can be shifted to the readers (we
just need to do per-cpu updates from the hot paths), so we might as well
publish those stats by default on the v2 hierarchy.  We couldn't do the
same in v1 because the number of hierarchies wasn't limited.
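
As a rough sketch of what shifting the heavy part to the readers looks
like (hypothetical names, not the actual cgroup accounting code): the hot
paths only bump a per-cpu counter, and the expensive summing happens only
when somebody actually reads the stat file.

/*
 * Hypothetical sketch of reader-side aggregation.  Not the actual
 * cgroup accounting code.
 */
#define NR_CPUS 64

struct acct_group {
	unsigned long pcpu_usage[NR_CPUS];	/* written from hot paths */
};

/* Hot path: a single per-cpu add, no locks, no hierarchy walk. */
static inline void acct_charge(struct acct_group *g, int cpu,
			       unsigned long delta)
{
	g->pcpu_usage[cpu] += delta;
}

/*
 * Read side: pay the cost of summing all CPUs (and, in the real
 * thing, flushing descendants up the hierarchy) only when userspace
 * reads the stat file.
 */
static unsigned long acct_read(struct acct_group *g)
{
	unsigned long sum = 0;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		sum += g->pcpu_usage[cpu];
	return sum;
}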

Thanks.

-- 
tejun
