Message-ID: <20061106124948.GA3027@in.ibm.com>
Date:	Mon, 6 Nov 2006 18:19:48 +0530
From:	Srivatsa Vaddagiri <vatsa@...ibm.com>
To:	"Paul Menage" <menage@...gle.com>
Cc:	"Paul Jackson" <pj@....com>, dev@...nvz.org, sekharan@...ibm.com,
	ckrm-tech@...ts.sourceforge.net, balbir@...ibm.com,
	haveblue@...ibm.com, linux-kernel@...r.kernel.org,
	matthltc@...ibm.com, dipankar@...ibm.com, rohitseth@...gle.com
Subject: Re: [ckrm-tech] [RFC] Resource Management - Infrastructure choices

On Wed, Nov 01, 2006 at 03:37:12PM -0800, Paul Menage wrote:
> I saw your example, but can you give a concrete example of a situation
> when you might want to do that?

Paul,
	Firstly, after some more thought on this, we can use your current
proposal, if it makes the implementation simpler.

Secondly, regarding how separate grouping per resource *may be* useful,
consider this scenario.

A large university server has various users - students, professors,
system tasks etc. The resource planning for this server could be along these lines:

	CPU : 		Top cpuset 
			/	\   
		CPUSet1 	CPUSet2
		   |		  |
		(Profs)		(Students)

		In addition (system tasks) are attached to topcpuset (so
		that they can run anywhere) with a limit of 20%

	Memory : Professors (50%), students (30%), system (20%)

	Disk : Prof (50%), students (30%), system (20%)

	Network : WWW browsing (20%), Network File System (60%), others (20%)
				/ \
 			Prof (15%) students (5%)
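
	For the CPU portion of this plan, a rough sketch using the cpuset
	filesystem that already exists might look as follows (the mount
	point and the cpu/memory-node numbers are purely illustrative):

```shell
# Rough sketch of the CPU partitioning above using the existing
# cpuset filesystem; mount point and cpu/node numbers are illustrative.
mount -t cpuset none /dev/cpuset
cd /dev/cpuset
mkdir cpuset1 cpuset2
echo 0-3 > cpuset1/cpus    # CPUSet1 (Profs)
echo 4-7 > cpuset2/cpus    # CPUSet2 (Students)
echo 0 > cpuset1/mems
echo 0 > cpuset2/mems
# System tasks stay attached to the top cpuset and can run anywhere;
# the 20% limit would come from a separate CPU controller, not cpusets.
```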

Browsers like firefox/lynx go into the WWW network class, while (k)nfsd goes
into the NFS network class.

At the same time, firefox/lynx will share an appropriate CPU/memory class
depending on who launched it (prof/student).

If we had the ability to write pids directly to these resource classes,
then the admin could easily set up a script which receives exec
notifications and, depending on who is launching the browser, does:

	# echo browser_pid > approp_resource_class
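
	Such a script might be sketched as below. This is purely an
	illustration: the class paths, the "members" file name and the
	uid cutoff are my assumptions, not an existing kernel interface.

```shell
#!/bin/sh
# Hypothetical sketch only: class paths, the "members" file name and
# the uid cutoff are assumptions, not an existing kernel interface.
classify_browser() {
	uid=$1
	if [ "$uid" -lt 1000 ]; then
		# assumption: professors have uids below 1000
		echo "/config/network/www/prof"
	else
		echo "/config/network/www/students"
	fi
}

# An exec-notification handler would then do something like:
#   echo "$browser_pid" > "$(classify_browser "$launcher_uid")/members"
```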

With your proposal, he would now have to create a separate container for
every browser launched and associate it with the appropriate network and
other resource classes. This may lead to a proliferation of such containers.

Also, let's say that the administrator would like to temporarily give
enhanced network access to a student's browser (since it is night and the
user wants to do online gaming :), OR give one of the students' simulation
apps enhanced CPU power.

With the ability to write pids directly to resource classes, it's just a
matter of:

	# echo pid > new_cpu/network_class
	(after some time)
	# echo pid > old_cpu/network_class
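
	Wrapped up as a helper, the two steps above might look like this
	(again hypothetical: the "members" file name is an assumption):

```shell
#!/bin/sh
# Hypothetical sketch: temporarily move a task into a faster resource
# class, then put it back.  The "members" file name is an assumption.
boost() {
	pid=$1 fast=$2 slow=$3 secs=${4:-3600}
	echo "$pid" > "$fast/members"   # move the task into the faster class
	sleep "$secs"                   # ... after some time ...
	echo "$pid" > "$slow/members"   # move it back
}
```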

Without this ability, he will have to split those processes out into a
separate container and then associate that container with the new resource
classes.

So yes, the end result is perhaps achievable either way; the big
difference I see is the ease of use.

> For simplicity combined with flexibility, I think I still favour the
> following model:
> 
> - all processes are a member of one container
> - for each resource type, each container is either in the same
> resource node as its parent or a freshly child node of the parent
> resource node (determined at container creation time)
> 
> This is a subset of my more complex model, but it's pretty easy to
> understand from userspace and to implement in the kernel.

If this model makes the implementation simpler, then I am for it, until
we have gained better insight on its use.

> What objections do you have to David's suggestion that if you want some
> processes in a container to be in one resource node and others in
> another resource node, then you should just subdivide into two
> containers, such that all processes in a container are in the same set
> of resource nodes?

One observation is the ease of use (as some of the examples above
point out). The other is that it could lead to more containers than
necessary.

-- 
Regards,
vatsa
