Message-ID: <CALCETrX78_twPUkNTZbtXQv9yBnW+-c9RhetassdrQKicOiDFg@mail.gmail.com>
Date:	Thu, 20 Sep 2012 11:39:46 -0700
From:	Andy Lutomirski <luto@...capital.net>
To:	Tejun Heo <tj@...nel.org>
Cc:	containers@...ts.linux-foundation.org, cgroups@...r.kernel.org,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Neil Horman <nhorman@...driver.org>,
	Michal Hocko <mhocko@...e.cz>,
	Paul Mackerras <paulus@...ba.org>,
	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
	Arnaldo Carvalho de Melo <acme@...stprotocols.net>,
	Johannes Weiner <hannes@...xchg.org>,
	Thomas Graf <tgraf@...g.ch>, Paul Turner <pjt@...gle.com>,
	Ingo Molnar <mingo@...e.hu>, serge.hallyn@...onical.com
Subject: Re: [RFC] cgroup TODOs

On Thu, Sep 20, 2012 at 11:26 AM, Tejun Heo <tj@...nel.org> wrote:
> Hello,
>
> On Wed, Sep 19, 2012 at 06:33:15PM -0700, Andy Lutomirski wrote:
>> [grr.  why does gmane scramble addresses?]
>
> You can append /raw to the message URL and see the raw message.
>
>   http://article.gmane.org/gmane.linux.kernel.containers/23802/raw

Thanks!

>
>> >   I think this level of flexibility should be enough for most use
>> >   cases.  If someone disagrees, please voice your objections now.
>>
>> OK, I'll bite.
>>
>> I have a server that has a whole bunch of cores.  A small fraction of
>> those cores are general purpose and run whatever they like.  The rest
>> are tightly controlled.
>>
>> For simplicity, we use two cpusets.  The root allows all cpus.  The
>> other allows only the general-purpose cpus.  We shove
>> everything into the general-purpose-only cpuset, and then we move
>> special stuff back to root.  (We also shove some kernel threads into a
>> non-root cpuset using the 'cset' tool.)
>
> Using root for special stuff probably isn't a good idea and moving
> bound kthreads into !root cgroups is already disallowed.

Agreed.  I do it this way because it's easy and it works.  I can
change it in the future if needed.
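
For the record, the whole arrangement is just a handful of cgroupfs
writes.  A rough sketch in Python (the mount point and cpu/node ranges
are made up, and this assumes the v1 cpuset hierarchy):

  import os

  CPUSET = "/sys/fs/cgroup/cpuset"           # assumed mount point
  GENERAL = os.path.join(CPUSET, "general")  # the general-purpose cpuset

  os.makedirs(GENERAL, exist_ok=True)

  # The child cpuset gets only the general-purpose cpus; the root
  # cpuset already allows everything.  Ranges are illustrative.
  with open(os.path.join(GENERAL, "cpuset.cpus"), "w") as f:
      f.write("0-3")
  with open(os.path.join(GENERAL, "cpuset.mems"), "w") as f:
      f.write("0")

  # Shove every existing task into the general-purpose cpuset
  # (one pid per write, per v1 semantics) ...
  with open(os.path.join(CPUSET, "tasks")) as f:
      pids = f.read().split()
  for pid in pids:
      try:
          with open(os.path.join(GENERAL, "tasks"), "w") as f:
              f.write(pid)
      except OSError:
          pass  # e.g. bound kernel threads that can't be moved

  # ... and move the special stuff back to the root cpuset.
  def make_special(pid):
      with open(os.path.join(CPUSET, "tasks"), "w") as f:
          f.write(str(pid))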

>
>> Enter systemd, which wants a hierarchy corresponding to services.  If we
>> were to use it, we might end up violating its hierarchy.
>>
>> Alternatively, if we started using memcg, then we might want some tasks
>> to have more restrictive memory usage but less restrictive cpu usage.
>>
>> As long as we can still pull this off, I'm happy.
>
> IIUC, you basically want just two groups w/ cpuset and use it for
> loose cpu isolation for high priority jobs.  Structure-wise, I don't
> think it's gonna be a problem although using root for special stuff
> would need to change.

Right.

But what happens when multiple hierarchies go away and I lose control
of the structure?  If systemd or whatever sticks my whole session or
my service (or however I organize it) into cgroup /whatever, then
either I can put my use-all-cpus tasks into /whatever/everything or I
can step outside the hierarchy and put them into /everything.  The
former doesn't work, because

<quote>
The following rules apply to each cpuset:

 - Its CPUs and Memory Nodes must be a subset of its parents.
</quote>

The latter might confuse systemd.
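
Concretely, if I'm reading the cpuset code right, any attempt to give a
child cpus its parent doesn't have just fails with EINVAL, so
/whatever/everything can never be wider than /whatever.  A quick
illustration (hypothetical paths; Python):

  import os

  WHATEVER = "/sys/fs/cgroup/cpuset/whatever"      # wherever systemd put us
  EVERYTHING = os.path.join(WHATEVER, "everything")

  os.makedirs(EVERYTHING, exist_ok=True)

  # Suppose the parent was restricted to the general-purpose cpus.
  with open(os.path.join(WHATEVER, "cpuset.cpus"), "w") as f:
      f.write("0-3")

  # Trying to widen the child past the parent is rejected outright.
  try:
      with open(os.path.join(EVERYTHING, "cpuset.cpus"), "w") as f:
          f.write("0-63")          # superset of the parent's 0-3
  except OSError as e:
      print("kernel said no:", e)  # EINVAL: not a subset of the parent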

My real objection might be to the requirement that a cpuset can't be
less restrictive than its parent.  Currently I can arrange for a task to
simultaneously have a less restrictive cpuset and a more restrictive
memory limit (or to stick it into a container or whatever).  If the
hierarchies have to correspond, this stops working.
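
For comparison, here's the kind of thing that works today with separate
hierarchies (a sketch; the mount points, pid, and limit are all
illustrative):

  import os

  CPUSET = "/sys/fs/cgroup/cpuset"   # cpuset mounted as its own hierarchy
  MEMCG = "/sys/fs/cgroup/memory"    # memory mounted separately

  pid = 1234  # some worker task (made-up pid)

  # Wide-open cpus: leave the task in the root cpuset ...
  with open(os.path.join(CPUSET, "tasks"), "w") as f:
      f.write(str(pid))

  # ... but give it a tight memory limit in its own memcg.
  limited = os.path.join(MEMCG, "limited")
  os.makedirs(limited, exist_ok=True)
  with open(os.path.join(limited, "memory.limit_in_bytes"), "w") as f:
      f.write(str(256 * 1024 * 1024))  # 256 MB, illustrative
  with open(os.path.join(limited, "tasks"), "w") as f:
      f.write(str(pid))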

--Andy
