Date:   Mon, 28 Nov 2016 16:13:09 -0500
From:   Dhaval Giani <dhaval.giani@...il.com>
To:     Marat Khalili <mkh@....ru>, Peter Zijlstra <peterz@...radead.org>,
        Mike Galbraith <efault@....de>,
        LKML <linux-kernel@...r.kernel.org>
Cc:     cgroups@...r.kernel.org
Subject: Re: cgroups and nice

[Resending because gmail doesn't understand when to go plaintext :-) ]
[Added a few other folks who might have something to say about it]

On Fri, Nov 25, 2016 at 9:34 AM, Marat Khalili <mkh@....ru> wrote:
> I have a question as a cgroup cpu limits user: how does it interact with
> nice? The documentation creates the impression that, as long as the number
> of processes demanding CPU time exceeds the number of available cores, the
> time allocated will be proportional to the configured cpu.shares. However,
> in practice I observe that a group with niced processes significantly
> underperforms.
>
> For example, suppose on a 6-core box /cgroup/cpu/group1/cpu.shares is 400,
> and /cgroup/cpu/group2/cpu.shares is 200.
> 1) If I run `stress -c 6` in both groups, I should see approximately 400% of
> cpu time in group1 and 200% in group2 in top output, regardless of their
> relative nice value.
> 2) If I run `nice -n 19 stress -c 1` in group1 and `stress -c 24` in
> group2, I should see at least 100% of cpu time in group1.
>
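
For anyone who wants to reproduce this, a minimal sketch of the two
scenarios, assuming a cgroup v1 cpu controller mounted at /cgroup/cpu as in
the paths above and libcgroup's cgexec available (echoing the shell's PID
into the group's tasks file before launching works just as well):

  # create the two groups and set the weights from the example
  mkdir -p /cgroup/cpu/group1 /cgroup/cpu/group2
  echo 400 > /cgroup/cpu/group1/cpu.shares
  echo 200 > /cgroup/cpu/group2/cpu.shares

  # scenario 1: both groups saturated; with 6 cores the expected split is
  # 400/(400+200) and 200/(400+200) of 600%, i.e. ~400% vs ~200% in top
  cgexec -g cpu:group1 stress -c 6 &
  cgexec -g cpu:group2 stress -c 6 &

  # scenario 2 (run separately): one nice-19 hog vs 24 hogs
  cgexec -g cpu:group1 nice -n 19 stress -c 1 &
  cgexec -g cpu:group2 stress -c 24 &
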
> What I see is significantly less cpu time in group1 if group1's processes
> happen to have a greater nice value, and especially if group2 has a
> greater number of processes involved: the cpu load of group1 in example 2
> can be as low as 20%. This may create tensions among users in my case; how
> can it be avoided other than by renicing all processes to the same value?
>
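
On the workaround: if renicing really is the only lever, it can at least be
scripted per group by walking the group's tasks file (a sketch under the
same /cgroup/cpu mount assumption; needs root to renice other users'
processes):

  # reset every task currently in group1 to nice 0
  for pid in $(cat /cgroup/cpu/group1/tasks); do
      renice -n 0 -p "$pid"
  done
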
>> $ uname -a
>> Linux redacted 2.6.32-642.11.1.el6.x86_64 #1 SMP Fri Nov 18 19:25:05 UTC
>> 2016 x86_64 x86_64 x86_64 GNU/Linux
>

This is an old kernel. Do you see the same behavior on a newer version?
(4.8 is the latest stable kernel.)
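
One note on the interaction itself, from the CFS weight table (worth
re-checking against a current tree): nice changes a task's load weight only
relative to its siblings inside the same group, while cpu.shares sets the
group's weight against other groups. The weights span a wide range:

  nice  0 -> weight 1024
  nice 19 -> weight   15      (1024 / 15 ~= 68:1)

So on a single runqueue the group split should be independent of nice, but
with per-CPU runqueues and load balancing a weight-15 task can lose out in
ways the plain shares arithmetic doesn't capture.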

>
>> $ lsb_release -a
>> LSB Version:
>> :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
>> Distributor ID: CentOS
>> Description:    CentOS release 6.8 (Final)
>> Release:        6.8
>> Codename:       Final
>
>
> (My apologies if I'm posting to the wrong list.)
>
> --
>
> With Best Regards,
> Marat Khalili
> --

Thanks,
Dhaval
