Date:   Fri, 25 Nov 2016 14:02:53 +0100
From:   Mike Galbraith <efault@....de>
To:     "Michael Kerrisk (man-pages)" <mtk.manpages@...il.com>
Cc:     Peter Zijlstra <a.p.zijlstra@...llo.nl>,
        Ingo Molnar <mingo@...nel.org>,
        linux-man <linux-man@...r.kernel.org>,
        lkml <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: Re: RFC: documentation of the autogroup feature [v2]

On Thu, 2016-11-24 at 22:41 +0100, Michael Kerrisk (man-pages) wrote:

>        Suppose  that  there  are two autogroups competing for the same
>        CPU.  The first group contains ten CPU-bound processes  from  a
>        kernel build started with make -j10.  The other contains a sin‐
>        gle CPU-bound process: a video player.   The  effect  of  auto‐
>        grouping  is  that the two groups will each receive half of the
>        CPU cycles.  That is, the video player will receive 50% of  the
>        CPU  cycles,  rather than just 9% of the cycles, which would
>        lead to degraded video playback.  Or to put things another way:
>        an  autogroup  that  contains  a large number of CPU-bound pro‐
>        cesses does not end up overwhelming the CPU at the  expense  of
>        the other jobs on the system.

I'd say something more wishy-washy here, like "cycles are distributed
fairly across groups", and leave it at that, as your detailed example
is incorrect due to SMP fairness (which I don't like much, because a
[very unlikely] worst case scenario renders a box-sized group incapable
of utilizing more than a single CPU total).  For example, if a group of
NR_CPUS size competes with a singleton, load balancing will try to give
the singleton a full CPU of its very own.  If the groups intersect for
whatever reason on, say, my quad lappy, distribution is 80/20 in favor
of the singleton.
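
A quick way to see that for yourself (rough sketch; assumes
CONFIG_SCHED_AUTOGROUP=y and kernel.sched_autogroup_enabled=1): park
several CPU hogs in one fresh session and a lone hog in another, then
watch the split in top(1).

/* autogroup-demo.c: two sessions => two autogroups, NTASKS hogs vs 1 */
#include <unistd.h>

#define NTASKS 4                /* hypothetical group size; use your CPU count */

static void burn(void)          /* plain CPU-bound spin */
{
        for (;;)
                ;
}

static void spawn_session(int nhogs)
{
        if (fork() == 0) {
                setsid();       /* new session => new autogroup */
                for (int i = 1; i < nhogs; i++)
                        if (fork() == 0)
                                burn();
                burn();
        }
}

int main(void)
{
        spawn_session(NTASKS);  /* the make -j style group */
        spawn_session(1);       /* the singleton */
        pause();                /* hogs keep running; kill them manually */
        return 0;
}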

>        ┌─────────────────────────────────────────────────────┐
>        │FIXME                                                │
>        ├─────────────────────────────────────────────────────┤
>        │How do the nice value of  a  process  and  the  nice │
>        │value of an autogroup interact? Which has priority?  │
>        │                                                     │
>        │It  *appears*  that the autogroup nice value is used │
>        │for CPU distribution between task groups,  and  that │
>        │the  process nice value has no effect there.  (I.e., │
>        │suppose two  autogroups  each  contain  a  CPU-bound │
>        │process,  with  one  process  having nice==0 and the │
>        │other having nice==19.  It appears  that  they  each │
>        │get  50%  of  the CPU.)  It appears that the process │
>        │nice value has effect only with respect to  schedul‐ │
>        │ing  relative to other processes in the *same* auto‐ │
>        │group.  Is this correct?                             │
>        └─────────────────────────────────────────────────────┘

Yup, entity nice level affects distribution among peer entities.
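
For what it's worth, the group's nice knob is also reachable from
userspace via /proc/<pid>/autogroup (quick sketch, error handling
mostly omitted): reading it returns something like
"/autogroup-123 nice 0", and writing a number in the -20..19 range
changes the *group* nice, independent of the task's own nice value.

/* ag-nice.c: show, bump, and re-show this task's autogroup nice */
#include <stdio.h>

int main(void)
{
        char buf[128];
        FILE *f = fopen("/proc/self/autogroup", "r");

        if (!f) {
                perror("/proc/self/autogroup"); /* autogrouping disabled? */
                return 1;
        }
        if (fgets(buf, sizeof(buf), f))
                printf("before: %s", buf);      /* e.g. "/autogroup-123 nice 0" */
        fclose(f);

        f = fopen("/proc/self/autogroup", "w");
        if (f) {
                fprintf(f, "10");               /* raise the *group's* nice to 10 */
                fclose(f);
        }

        f = fopen("/proc/self/autogroup", "r");
        if (f) {
                if (fgets(buf, sizeof(buf), f))
                        printf("after:  %s", buf);
                fclose(f);
        }
        return 0;
}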

	-Mike
