Date:	Sat, 21 Oct 2006 13:55:16 -0700
From:	Paul Jackson <pj@....com>
To:	"Paul Menage" <menage@...gle.com>
Cc:	mbligh@...gle.com, nickpiggin@...oo.com.au, akpm@...l.org,
	Simon.Derr@...l.net, linux-kernel@...r.kernel.org, dino@...ibm.com,
	rohitseth@...gle.com, holt@....com, dipankar@...ibm.com,
	suresh.b.siddha@...el.com
Subject: Re: [RFC] cpuset: remove sched domain hooks from cpusets

> I'm not very familiar
> with the sched domains code but I guess it doesn't handle overlapping
> cpu masks very well?

As best as I can tell, the two motivations for explicitly setting
sched domain partitions are:
 1) isolating cpus for real-time uses that are very sensitive to any
    interference,
 2) handling load balancing on huge CPU counts, where the worse-than-linear
    algorithms start to hurt.

The load balancing algorithms apparently should be close to linear, but
in the presence of many disallowed cpus (0 bits in a task's cpus_allowed),
I guess they have to work harder.
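
To make that concrete, here is a rough user-space sketch, not the actual
kernel/sched.c code: struct task, pick_target_cpu() and nr_cpus are made
up for the example, and it uses the glibc cpu_set_t API rather than the
kernel's cpumask types.  It just shows how the 0 bits force a balancing
pass to examine and skip candidate cpus before it finds one the task may
actually run on:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

struct task {
	cpu_set_t cpus_allowed;		/* which cpus this task may run on */
};

/* Walk every cpu; skip the ones the task is not allowed on. */
static int pick_target_cpu(struct task *t, int nr_cpus)
{
	int cpu, examined = 0;

	for (cpu = 0; cpu < nr_cpus; cpu++) {
		examined++;
		if (!CPU_ISSET(cpu, &t->cpus_allowed))
			continue;	/* disallowed bit: wasted scanning */
		printf("examined %d cpus before settling on cpu %d\n",
		       examined, cpu);
		return cpu;
	}
	return -1;			/* nowhere to put it */
}

int main(void)
{
	struct task t;

	CPU_ZERO(&t.cpus_allowed);
	CPU_SET(6, &t.cpus_allowed);	/* allowed on cpu 6 only */
	pick_target_cpu(&t, 8);
	return 0;
}

The more cpus are disallowed, the more of the scan is wasted, which is
my loose reading of why many 0 bits make the balancer work harder.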

I still have little confidence that I understand this.  Maybe if I say
enough stupid things about the scheduler domains and load balancing,
someone will get annoyed and try to educate me ;).  Best of luck to
them.

It doesn't sound to me like your situation is a real-time application
that is very sensitive to latency or jitter.

How many CPUs are you juggling?  My utterly naive expectation would be
that dozens of CPUs should not need explicit sched domain partitioning,
but that hundreds of them would benefit from reduced time spent in
kernel/sched.c code if the sched domains could be partitioned down to a
significantly smaller size.

The only problem I can see that overlapping cpus_allowed masks present
here is that they inhibit partitioning down to smaller sched domains.
Apparently these partitions are system-wide hard partitions, such that
no load balancing occurs across partitions, so we should avoid creating
a partition that cuts across some task's cpus_allowed mask.
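
A rough user-space sketch of that constraint (again not kernel code;
mask_is_subset(), partition_ok() and the partition layout are invented
for the example, using the glibc cpu_set_t API): a task's cpus_allowed
is compatible with a proposed partitioning only if one partition
contains it whole:

#define _GNU_SOURCE
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>

/* true if every cpu set in 'a' is also set in 'b' */
static bool mask_is_subset(cpu_set_t *a, cpu_set_t *b, int nr_cpus)
{
	int cpu;

	for (cpu = 0; cpu < nr_cpus; cpu++)
		if (CPU_ISSET(cpu, a) && !CPU_ISSET(cpu, b))
			return false;
	return true;
}

/* A task's mask survives only if one partition holds it whole. */
static bool partition_ok(cpu_set_t *allowed, cpu_set_t parts[],
			 int nr_parts, int nr_cpus)
{
	int i;

	for (i = 0; i < nr_parts; i++)
		if (mask_is_subset(allowed, &parts[i], nr_cpus))
			return true;
	return false;	/* this partitioning cuts across the task's mask */
}

int main(void)
{
	cpu_set_t parts[2], allowed;
	int cpu;

	CPU_ZERO(&parts[0]);
	CPU_ZERO(&parts[1]);
	for (cpu = 0; cpu < 4; cpu++)
		CPU_SET(cpu, &parts[0]);	/* partition 0: cpus 0-3 */
	for (cpu = 4; cpu < 8; cpu++)
		CPU_SET(cpu, &parts[1]);	/* partition 1: cpus 4-7 */

	CPU_ZERO(&allowed);
	CPU_SET(3, &allowed);			/* this task straddles the cut */
	CPU_SET(4, &allowed);

	printf("partition ok: %s\n",
	       partition_ok(&allowed, parts, 2, 8) ? "yes" : "no");
	return 0;
}

With those example masks (a task allowed on cpus 3 and 4, partitions
covering cpus 0-3 and 4-7), it prints "no", which is exactly the kind of
cut we would want to refuse, since no load balancing happens across the
partition boundary.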

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@....com> 1.925.600.0401