Message-ID: <b6a2d2e20902100932i475c1ee8va99de6e433a6d89a@mail.gmail.com>
Date:	Tue, 10 Feb 2009 17:32:34 +0000
From:	Rolando Martins <rolando.martins@...il.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: cgroup, RT reservation per core(s)?

On 2/10/09, Peter Zijlstra <peterz@...radead.org> wrote:
> On Tue, 2009-02-10 at 14:46 +0000, Rolando Martins wrote:
>  >
>  > >  I've never actually tried anything like this, let me know if it
>  > >  works ;-)
>  > >
>  >
>  > Thanks Peter, it works!
>
>  > I am thinking of different strategies to use in my RT middleware
>  > project, and I think there is a limitation.
>  > If I wanted to run some RT tasks in the B cpuset, I couldn't, because I
>  > assigned A.cpu.rt_runtime_ns = root.cpu.rt_runtime_ns (and then subdivided
>  > the A cpuset into 2, 3 and 4, each having A.cpu.rt_runtime_ns/3).
>
>
> Try it, you can run RT proglets in B.
>
>  You get n*utilization per schedule domain, where n is the number of cpus
>  in it.
>
>  So you still have 200% left in B, even if you use 200% of A.
>
>
>  > This happens because there is a
>  > global /proc/sys/kernel/sched_rt_runtime_us and
>  > /proc/sys/kernel/sched_rt_period_us.
>
>
> These globals don't actually do much (except provide a global cap) in
>  the cgroup case.
>
>
>  > What do you think about adding a separate tuple (runtime,period) for
>  > each core/cpu?
>
>
> > Does this make sense? ;)
>
>  That's going to give me a horrible head-ache trying to load-balance
>  stuff.
>
Sorry Peter, I didn't think before typing ;)
I was thinking of cgroups as a more integrated (rigid ;)) infrastructure,
and therefore expected to use a single mount point for all the operations... :x

Now I got everything working properly! Thanks for the support.
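
To spell out the arithmetic behind Peter's point about per-domain utilization
(a sketch, assuming the default kernel values; check your own):

cat /proc/sys/kernel/sched_rt_period_us    # default 1000000 us
cat /proc/sys/kernel/sched_rt_runtime_us   # default 950000 us, the global cap
# each sched domain gets n_cpus * runtime/period of RT bandwidth, so with
# cpus 2-3 in B there is still 2 * 950000/1000000 = 190% of CPU available
# for RT tasks in B, regardless of how A's budget has been carved up.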

For the benefit of anyone else trying this:

mkdir /dev/cpuset
mount -t cgroup -o cpuset none /dev/cpuset
cd /dev/cpuset
echo 0 > cpuset.sched_load_balance   # stop balancing at the root, so each
                                     # top-level cpuset becomes its own sched domain
mkdir A
echo 0-1 > A/cpuset.cpus             # cpuset A: cpus 0 and 1
echo 0 > A/cpuset.mems
mkdir B
echo 2-3 > B/cpuset.cpus             # cpuset B: cpus 2 and 3
echo 0 > B/cpuset.mems
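
An optional read-back to confirm the partition:

cat cpuset.sched_load_balance     # 0: the root no longer balances across all cpus
cat A/cpuset.cpus B/cpuset.cpus   # 0-1 and 2-3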


mkdir /dev/sched_domain
mount -t cgroup -o cpu none /dev/sched_domain
cd /dev/sched_domain
mkdir 1
cat cpu.rt_runtime_ns > 1/cpu.rt_runtime_ns   # give group 1 the root group's full RT budget
mkdir 1/2
echo 33333 > 1/2/cpu.rt_runtime_ns            # then hand a slice to each child group
mkdir 1/3
echo 33333 > 1/3/cpu.rt_runtime_ns
mkdir 1/4
echo 33333 > 1/4/cpu.rt_runtime_ns
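
A quick check that the three children fit inside group 1's budget (the kernel
should refuse a child runtime that the parent cannot cover):

cat 1/cpu.rt_runtime_ns                                                  # the parent's budget
cat 1/2/cpu.rt_runtime_ns 1/3/cpu.rt_runtime_ns 1/4/cpu.rt_runtime_ns    # 3 x 33333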

For example, to attach the current shell to a specific cpuset (A) and RT group (1/2):

echo $$ > /dev/cpuset/A/tasks            # move this shell into cpuset A
echo $$ > /dev/sched_domain/1/2/tasks    # and into RT group 1/2
# anything started from this shell inherits both memberships
"execute program"


Peter, can you confirm this code? ;)

Thanks!
Rol
