Date:	Mon, 25 Feb 2008 20:05:38 -0600
From:	Paul Jackson <pj@....com>
To:	Max Krasnyansky <maxk@...lcomm.com>
Cc:	kosaki.motohiro@...fujitsu.com, alan@...rguk.ukuu.org.uk,
	adilger@....com, akpm@...ux-foundation.org, daniel.spang@...il.com,
	mingo@...e.hu, jonathan@...masters.org, marcelo@...ck.org,
	menage@...gle.com, pavel@....cz, a.p.zijlstra@...llo.nl,
	riel@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: Tiny cpusets -- cpusets for small systems?

> So. I see cpusets as a higher-level API/mechanism and cpu_isolated_map as a
> lower-level mechanism that actually makes the kernel aware of what's isolated
> and what's not. Kind of like the sched domain/cpuset relationship, i.e.
> cpusets affect sched domains but the scheduler does not use cpusets directly.

One could use cpusets to control the setting of cpu_isolated_map,
separately from the code, such as your select_irq_affinity(), that
uses it.
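
For illustration only, one could imagine a per-cpuset flag file that,
when written, adds that cpuset's CPUs to cpu_isolated_map.  The
"cpu_isolated" file below is hypothetical, not an existing interface;
the rest is the usual cpuset filesystem:

    mount -t cpuset cpuset /dev/cpuset
    mkdir /dev/cpuset/rt_engine
    echo 1 > /dev/cpuset/rt_engine/cpus          # this cpuset owns cpu 1
    echo 0 > /dev/cpuset/rt_engine/mems
    echo 1 > /dev/cpuset/rt_engine/cpu_isolated  # hypothetical flag: push cpu 1
                                                 # into cpu_isolated_map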


> In the foreseeable future, 2-8 cores will be the most common configuration.
> Do you think that cpusets are needed/useful for those machines?
> The reason I'm asking is that, given the restrictions you mentioned
> above, it seems you might as well just do
> 	taskset -c 1,2,3 app1
> 	taskset -c 3,4,5 app2 

People tend to manage the CPU and memory placement of the threads
and processes within a single co-operating job using taskset
(sched_setaffinity) and numactl (mbind, set_mempolicy).
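
For example, within one job (the job name "app1" and the cpu/node
numbers are just for illustration):

    taskset -c 1-3 ./app1                         # sched_setaffinity: cpus 1-3
    numactl --physcpubind=1-3 --membind=0 ./app1  # bind cpus 1-3, memory from node 0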

They tend to manage the placement of multiple unrelated jobs onto
a single system, whether on separate or shared CPUs and nodes,
using cpusets.
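
With the cpuset filesystem that looks roughly like the following
(job names, pids and cpu/node numbers are made up; /dev/cpuset
mounted as above):

    mkdir /dev/cpuset/batch /dev/cpuset/interactive
    echo 0-1 > /dev/cpuset/batch/cpus
    echo 0   > /dev/cpuset/batch/mems
    echo 2-3 > /dev/cpuset/interactive/cpus
    echo 0   > /dev/cpuset/interactive/mems
    echo $batch_pid       > /dev/cpuset/batch/tasks        # one pid per write
    echo $interactive_pid > /dev/cpuset/interactive/tasks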

Something like cpu_isolated_map looks to me like a system-wide
mechanism, which should, like sched_domains, be managed system-wide.
Managing it with a mechanism that encourages each thread to update
it directly, as if that thread owned the system, will break down as
multiple, insufficiently co-operating threads issue conflicting
settings.


> Stuff that I'm working on these days (wireless basestations) is designed
> with the following model:
> 	cpuN - runs soft-RT networking and management code
> 	cpuN+1 to cpuN+x - are used as dedicated engines
> i.e. the simplest example would be
> 	cpu0 - runs IP, L2 and control plane
> 	cpu1 - runs hard-RT MAC 
> 
> So if CPU isolation is implemented on top of cpusets, what kind of API do
> you envision for such an app?

That depends on what more API is needed.  Do we need to place
irqs better ... cpusets might not be a natural fit for that use.
Aren't irqs directed to specific CPUs, not to hierarchically
nested subsets of CPUs?
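
Today irq placement is a flat per-irq cpu mask, e.g. (the irq
number 42 is made up):

    echo 4 > /proc/irq/42/smp_affinity   # hex cpu mask 0x4: route irq 42 to cpu 2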

Separate question:
  Is it desired that the dedicated CPUs cpuN+1 ... cpuN+x even appear
  as general purpose systems running a Linux kernel in your systems?
  These dedicated engines seem more like intelligent devices to me,
  such as disk controllers, which the kernel controls via device
  drivers, not by loading itself on them too.

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@....com> 1.940.382.4214