Message-ID: <20090701131146.GR6760@one.firstfloor.org>
Date:	Wed, 1 Jul 2009 15:11:46 +0200
From:	Andi Kleen <andi@...stfloor.org>
To:	Tejun Heo <tj@...nel.org>
Cc:	Andi Kleen <andi@...stfloor.org>,
	Christoph Lameter <cl@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, x86@...nel.org,
	linux-arch@...r.kernel.org, hpa@...or.com, tglx@...utronix.de
Subject: Re: [PATCHSET] percpu: generalize first chunk allocators and improve lpage NUMA support

On Wed, Jul 01, 2009 at 09:53:06PM +0900, Tejun Heo wrote:
> It would be nice to have something to test cpu on/offlining
> automatically.  Something which keeps bringing cpus up and down as the
> system goes through stress testing.

That's a trivial shell script echoing into sysfs files. It doesn't
seem to be widely done (the last time I tried it I promptly ran
into some RCU bug). You need a large enough machine for it.
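The script Andi has in mind could be sketched like this. The sysfs paths are the standard CPU hotplug interface; the command-printing wrapper and the `hotplug_cycle` name are my additions so the sketch is safe to run unprivileged (it only emits the writes it would perform):

```shell
#!/bin/sh
# Sketch of the "trivial shell script" for CPU on/offline stress testing:
# emit the sysfs writes that offline and re-online each hotpluggable CPU.
# Printed as commands rather than executed, so it can be inspected safely;
# pipe the output to sh as root on a machine whose CPUs you want to bounce.
SYSFS_CPU=${SYSFS_CPU:-/sys/devices/system/cpu}

hotplug_cycle() {
    for f in "$SYSFS_CPU"/cpu[0-9]*/online; do
        [ -e "$f" ] || continue     # cpu0 usually has no 'online' file
        echo "echo 0 > $f"          # offline the CPU
        echo "echo 1 > $f"          # bring it back up
    done
}

hotplug_cycle
```

Running `while true; do hotplug_cycle | sh; done` as root keeps CPUs bouncing for the duration of a stress run.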

But in my experience most stress testing does not actually have good
code coverage. It just exercises the same small set of core code over
and over (you can check now that kernel gcov is finally in).
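Checking that claim with the in-kernel gcov support could look roughly like this. It assumes a kernel built with CONFIG_GCOV_KERNEL=y and debugfs mounted at the usual place; `run-stress-tests` is a hypothetical stand-in for whatever workload is being measured:

```shell
#!/bin/sh
# Sketch: snapshot kernel coverage counters around a stress run, using the
# debugfs gcov interface (a 'reset' file plus a tree of .gcda/.gcno data).
snapshot_coverage() {
    gcov_dir=${1:-/sys/kernel/debug/gcov}
    if [ -d "$gcov_dir" ]; then
        echo 1 > "$gcov_dir/reset"    # zero all coverage counters
        run-stress-tests              # hypothetical workload under test
        dest=$(mktemp -d)
        cp -a "$gcov_dir/." "$dest"   # snapshot .gcda data for gcov/lcov
        echo "coverage data in $dest"
    else
        echo "no kernel gcov data at $gcov_dir"
    fi
}

snapshot_coverage
```

Post-processing the snapshot with lcov then shows which kernel code the stress run actually touched.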

The tricky part is to actually test the code you want to test.

> > ending up with lots of badly tested code is to:
> 
> But I don't think it would be that drastic.  Most users are quite
> simple.

But how do you test them properly? And how do you educate
the driver writers? Also, it would likely increase code size drastically.

Philosophically, I think code like that should be a simple
operation, and turning all the per-CPU init code into
callbacks is not simple. That makes everything more error-prone.

And it's IMHO unclear whether all that is worth it just to avoid
wasting some memory in the "256 possible CPUs" case (which
I doubt is particularly realistic anyway; at least I don't
know of any hypervisor today that scales to 256 CPUs).

-Andi

-- 
ak@...ux.intel.com -- Speaking for myself only.
