Message-Id: <20061022234152.baaf4624.pj@sgi.com>
Date:	Sun, 22 Oct 2006 23:41:52 -0700
From:	Paul Jackson <pj@....com>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	dino@...ibm.com, akpm@...l.org, mbligh@...gle.com,
	menage@...gle.com, Simon.Derr@...l.net,
	linux-kernel@...r.kernel.org, rohitseth@...gle.com, holt@....com,
	dipankar@...ibm.com, suresh.b.siddha@...el.com
Subject: Re: [RFC] cpuset: add interface to isolated cpus

Nick wrote:
> These are both part of the same larger solution, which is to
> partition domains. isolated CPUs are just the case of 1 CPU in
> its own domain (and that's how they are implemented now).

and later, he also wrote:
> I think this is much more of an automatic behind your back thing.

I got confused there.

I agree that, if we can do a -good- job of it, an implicit, automatic
solution is a better way to reduce sched domain partition sizes on
large systems than yet another manual knob.

But I thought that it was a good idea, with general agreement, to
provide explicit control of isolated cpus for the real-time folks,
even if under the covers it uses sched domain partitions of size 1 to
implement it.
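
To make that concrete, here is a minimal userspace sketch (not part of
this RFC; the CPU number and priority are arbitrary assumptions) of how
a real-time task might use a CPU that the administrator has already
isolated, e.g. via the isolcpus= boot option or a cpuset partitioned
down to a single CPU.  The task pins itself to that CPU with
sched_setaffinity() and switches to SCHED_FIFO, relying on that CPU
sitting in its own sched domain partition so load balancing leaves it
alone:

/*
 * Hypothetical example, not from the patch under discussion.
 * Assumes the administrator has already isolated CPU 3
 * (e.g. isolcpus=3, or a single-CPU cpuset partition).
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	cpu_set_t set;
	struct sched_param sp = { .sched_priority = 50 };

	/* Pin this task to the (assumed) isolated CPU 3. */
	CPU_ZERO(&set);
	CPU_SET(3, &set);
	if (sched_setaffinity(0, sizeof(set), &set) < 0) {
		perror("sched_setaffinity");
		exit(EXIT_FAILURE);
	}

	/* Run with a real-time FIFO policy on that CPU. */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0) {
		perror("sched_setscheduler");
		exit(EXIT_FAILURE);
	}

	/*
	 * Latency-sensitive work runs here, undisturbed by load
	 * balancing, because the CPU is in its own sched domain
	 * partition of size 1.
	 */
	return 0;
}

Build with gcc and run it as root (or with CAP_SYS_NICE) so the
SCHED_FIFO request is allowed.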

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@....com> 1.925.600.0401
