Date:	Thu, 10 May 2007 09:39:55 +0530
From:	Balbir Singh <balbir@...ux.vnet.ibm.com>
To:	Paul Jackson <pj@....com>
CC:	menage@...gle.com, vatsa@...ibm.com,
	ckrm-tech@...ts.sourceforge.net, balbir@...ibm.com,
	haveblue@...ibm.com, xemul@...ru, dev@...ru,
	containers@...ts.osdl.org, devel@...nvz.org, ebiederm@...ssion.com,
	mbligh@...gle.com, rohitseth@...gle.com, serue@...ibm.com,
	akpm@...ux-foundation.org, svaidy@...ux.vnet.ibm.com,
	linux-kernel@...r.kernel.org
Subject: Re: [ckrm-tech] [PATCH 1/9] Containers (V9): Basic container framework

Paul Jackson wrote:
> Balbir wrote:

> 
> 1) Testing batch schedulers against cpusets:
> 
>     I doubt that the batch scheduler developers would be able to
>     extract a cpuset test from their tests, or be able to share it if
>     they did.  Their tests tend to be large tests of batch schedulers,
>     and only incidentally test cpusets -- if we break cpusets,
>     sometimes even in subtle ways that they happen to depend on,
>     we break them.
> 
>     Sometimes there is no way to guess exactly what sorts of changes
>     will break their code; we'll just have to schedule at least one
>     run through one or more of the batch schedulers that rely heavily
>     on cpusets before a change as big as rebasing cpusets on
>     containers is reasonably safe.  This test cycle won't be all that
>     easy, so I'd wait until we are pretty close to what we think
>     should be taken into the mainline kernel.
> 
>     I suppose I will have to be the one coordinating this test,
>     as I am the only one I know with a presence in both camps.
> 
>     Once this test is done, from then forward, if we break them,
>     we'll just have to deal with it as we do now, when the breakage
>     shows up well downstream from the main kernel tree, at the point
>     that a major batch scheduler release runs into a major distribution
>     release containing the breakage.  There is no practical way that I
>     can see, as an ongoing basis, to continue testing for such breakage
>     with every minor change to cpuset-related code in the kernel.  Any
>     breakage found this way is dealt with by changes in user-level code.
> 
>     Once again, I have bcc'd one or more developers of batch schedulers,
>     so they can see what nonsense I am spouting about them now ;).
> 

That sounds reasonable to me.

> 2) Testing cpusets with a specific test:
> 
>     There I can do better.  Attached is the cpuset regression test I
>     use.  It requires at least 4 CPUs and 2 memory nodes to do
>     anything useful.  It is copyrighted by SGI and released under the
>     GPL.
> 
>     This regression test is the primary cpuset test on which I
>     relied during the development of cpusets, and on which I continue
>     to rely.  Except for one subtle race condition in the test
>     itself, it has not changed in the last two to three years.
> 
>     This test requires no user-level code not found in an ordinary
>     distro.  It does require the taskset and numactl commands, for
>     the purposes of testing certain interactions with them.  It
>     assumes that no other cpusets currently set up on the system
>     conflict with the ones it creates.
> 
>     See further comments within the test script itself.
> 

Thanks for the script. Would you like to contribute it to LTP (the
Linux Test Project) for wider availability and testing?
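
For anyone following along who has not poked at the cpuset filesystem,
here is a minimal C sketch (mine, not taken from the attached script)
of the kind of setup such a test performs: create a scratch cpuset,
give it CPUs and a memory node, and move the current task into it. It
assumes the cpuset pseudo-filesystem is already mounted at /dev/cpuset
(e.g. "mount -t cpuset none /dev/cpuset"); the cpuset name "cs_demo"
and the CPU/node values are made up for illustration, and the real
script of course does much more.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/stat.h>

/* Write a short string to a cpuset control file; exit on failure.
 * Errors from the kernel may only surface at fclose(), since stdio
 * buffers the write. */
static void write_file(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		exit(1);
	}
	if (fputs(val, f) == EOF || fclose(f) == EOF) {
		perror(path);
		exit(1);
	}
}

int main(void)
{
	char pid[32];

	/* A cpuset is created by making a directory in the cpuset fs. */
	if (mkdir("/dev/cpuset/cs_demo", 0755) && errno != EEXIST) {
		perror("mkdir /dev/cpuset/cs_demo");
		exit(1);
	}

	/* Confine the new cpuset to CPUs 0-1 and memory node 0. */
	write_file("/dev/cpuset/cs_demo/cpus", "0-1");
	write_file("/dev/cpuset/cs_demo/mems", "0");

	/* Attach the current task by writing its pid to "tasks". */
	snprintf(pid, sizeof(pid), "%d", getpid());
	write_file("/dev/cpuset/cs_demo/tasks", pid);

	printf("now confined to cpuset /cs_demo\n");
	return 0;
}

Run it as root; while it runs, its /proc/<pid>/cpuset reads /cs_demo,
and any sched_setaffinity() call it makes (which is what taskset uses)
is then constrained to the cpuset's cpus -- which I take to be the
sort of interaction your script exercises.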

-- 
	Warm Regards,
	Balbir Singh
	Linux Technology Center
	IBM, ISTL
