Date:	Tue, 29 Jan 2008 12:50:00 +0100
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Paul Jackson <pj@....com>
Cc:	linux-kernel@...r.kernel.org, mingo@...e.hu,
	vatsa@...ux.vnet.ibm.com, dhaval@...ux.vnet.ibm.com,
	nickpiggin@...oo.com.au, ebiederm@...ssion.com,
	akpm@...ux-foundation.org, sgrubb@...hat.com, rostedt@...dmis.org,
	ghaskins@...ell.com, dmitry.adamushko@...il.com,
	tong.n.li@...el.com, tglx@...utronix.de, menage@...gle.com,
	rientjes@...gle.com
Subject: Re: scheduler scalability - cgroups, cpusets and load-balancing


On Tue, 2008-01-29 at 05:30 -0600, Paul Jackson wrote:
> Peter wrote, in reply to Peter ;):
> > > [ It looks to me it balances a group over the largest SD the current cpu
> > >   has access to, even though that might be larger than the SD associated
> > >   with the cpuset of that particular cgroup. ]
> > 
> > Hmm, with a bit more thought I think that does indeed DTRT. Because, if
> > the cpu belongs to a disjoint cpuset, the highest sd (with
> > load-balancing enabled) would be that. Right?
> 
> The code that defines sched domains, kernel/sched.c partition_sched_domains(),
> as called from the cpuset code in kernel/cpuset.c rebuild_sched_domains(),
> does not make use of the full range of sched_domain possibilities.
> 
> In particular, it only sets up some non-overlapping set of sched domains.
> Every CPU ends up in at most a single sched domain.

Ah, good to know. I thought it would reflect the hierarchy of the sets
themselves.

> The original reason that one can't define overlapping sched domains via
> this cpuset interface (based off the cpuset 'sched_load_balance' flag)
> is that I didn't realize it was even possible to overlap sched domains
> when I wrote the cpuset code defining sched domains.  And then when I
> later realized one could overlap sched domains, I (a) didn't see a need
> to do so, and (b) couldn't see how to do so via the cpuset interface
> without causing my brain to explode.

Good reason :-), this code needs every reason it can get not to grow
more complexity.

> Now, back to Peter's question: being a bit pedantic, CPUs don't belong
> to disjoint cpusets, except in the most minimal situation where there is
> only one cpuset covering all CPUs.
> 
> Rather what happens, when you have need for some realtime CPUs, is that:
>  1) you turn off sched_load_balance on the top cpuset,
>  2) you setup your realtime cpuset as a child cpuset of the top cpuset
>     such that its CPUs don't overlap those of any of its siblings, and
>  3) you turn off sched_load_balance in that realtime cpuset.

Ah, I don't think 3 is needed. Quite to the contrary, there is quite a
large body of research work covering the scheduling of (hard and soft)
realtime tasks on multiple cpus.
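For reference, the three steps Paul lists look roughly like this through the cpuset filesystem interface (the mount point, CPU range, and memory node below are illustrative; and per the above, step 3 may not be needed):

```shell
# mount the cpuset filesystem if it isn't already
mount -t cpuset none /dev/cpuset

# 1) turn off load balancing in the top cpuset
echo 0 > /dev/cpuset/sched_load_balance

# 2) create a child cpuset whose CPUs overlap no sibling's
mkdir /dev/cpuset/rt
echo 2-3 > /dev/cpuset/rt/cpus      # illustrative CPU range
echo 0 > /dev/cpuset/rt/mems        # illustrative memory node

# 3) turn off load balancing inside the realtime cpuset
echo 0 > /dev/cpuset/rt/sched_load_balance
```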

> At that point, sched domains are rebuilt, including providing a
> sched domain that just contains the CPUs in that realtime cpuset, and
> normal scheduler load balancing ceases on the CPUs in that realtime
> cpuset.

Right, which would also disable the realtime load-balancing we do want.
Hence my suggestion to stick the rt balance data in this sched domain.

> > [ Just a bit of a shame we have all cgroups represented on each cpu. ]
> 
> Could you restate this -- I suspect it's obvious, but I'm oblivious ;).

Ah, sure. struct task_group creates cfs_rq/rt_rq entities for each cpu's
runqueue. So an iteration like for_each_leaf_{cfs,rt}_rq() will touch
all task_groups/cgroups, not only those that are actually schedulable on
that cpu.

Now, I think that could be easily solved by adding/removing
{cfs,rt}_rq->leaf_{cfs,rt}_rq_list to/from rq->leaf_{cfs,rt}_rq_list on
enqueue of the first/dequeue of the last entity of its tg on that rq.



