Message-Id: <1234281602.23438.96.camel@twins>
Date:	Tue, 10 Feb 2009 17:00:02 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Rolando Martins <rolando.martins@...il.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: cgroup, RT reservation per core(s)?

On Tue, 2009-02-10 at 14:46 +0000, Rolando Martins wrote:
> 
> >  I've never actually tried anything like this, let me know if it
> >  works ;-)
> >
> 
> Thanks Peter, it works!

> I am thinking of different strategies to use in my rt middleware
> project, and I think there is a limitation.
> If I wanted to have some RT on the B cpuset, I couldn't, because I
> assigned A.cpu.rt_runtime_ns = root.cpu.rt_runtime_ns (and then
> subdivided the A cpuset into 2, 3 and 4, each one getting
> A.cpu.rt_runtime_ns/3).

Try it, you can run RT proglets in B.

You get n*utilization per scheduling domain, where n is the number of
cpus in it.

So you still have 200% left in B, even if you use 200% of A.
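To make the layout under discussion concrete, here is a minimal shell sketch. The mount point, group names and budget values are illustrative assumptions, not taken from the thread; note that in mainline the per-group knob is spelled cpu.rt_runtime_us (microseconds), even though the thread writes rt_runtime_ns:

```shell
# Illustrative sketch only: mount point, names and values are assumptions.
mount -t cgroup -o cpu none /cgroup
mkdir /cgroup/A /cgroup/B

# Hand A an RT budget and split it three ways among its children.
echo 450000 > /cgroup/A/cpu.rt_runtime_us
mkdir /cgroup/A/2 /cgroup/A/3 /cgroup/A/4
for g in 2 3 4; do
    echo 150000 > /cgroup/A/$g/cpu.rt_runtime_us   # 450000 / 3
done

# Per the reply above, bandwidth is per scheduling domain times the
# number of cpus in it, so RT tasks placed in B can still get a
# budget of their own even with A's budget fully subdivided.
echo 450000 > /cgroup/B/cpu.rt_runtime_us
```

The kernel may reject writes whose aggregate exceeds the parent's budget, so the exact values that are accepted depend on the machine's cpu count and the global cap.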

> This happens because there is a
> global /proc/sys/kernel/sched_rt_runtime_us and
> /proc/sys/kernel/sched_rt_period_us.

These globals don't actually do much (except provide a global cap) in
the cgroup case.
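The globals referred to here are sysctls; a quick sketch of inspecting and adjusting them (the defaults noted in the comments are the usual mainline defaults, not values stated in the thread):

```shell
# RT throttling cap: runtime microseconds allowed per period.
cat /proc/sys/kernel/sched_rt_period_us    # usually 1000000 (1 s)
cat /proc/sys/kernel/sched_rt_runtime_us   # usually 950000 (0.95 s)

# Raising the cap (needs root). With group scheduling enabled this
# mainly bounds what the cgroup tree as a whole may allocate.
echo 980000 > /proc/sys/kernel/sched_rt_runtime_us
```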

> What do you think about adding a separate tuple (runtime,period) for
> each core/cpu?

> Does this make sense? ;)

That's going to give me a horrible head-ache trying to load-balance
stuff.
