Date:	Mon, 9 Apr 2007 11:44:14 -0700
From:	William Lee Irwin III <wli@...omorphy.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Mike Galbraith <efault@....de>,
	Gene Heskett <gene.heskett@...il.com>,
	linux-kernel@...r.kernel.org, Con Kolivas <kernel@...ivas.org>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: Ten percent test

* William Lee Irwin III <wli@...omorphy.com> wrote:
>> I strongly suggest assembling a battery of cleanly and properly 
>> written, configurable testcases, and scripting a series of regression 
>> tests as opposed to just randomly running kernel compiles and relying 
>> on Braille.

On Mon, Apr 09, 2007 at 08:03:56PM +0200, Ingo Molnar wrote:
> there's interbench, written by Con (with the purpose of improving 
> RSDL/SD), which does exactly that, but vanilla and SD perform much the 
> same in those tests.
> it's quite hard to test interactivity, because it's both subjective and 
> because even for objective workloads, things depend so much on exact 
> circumstances. So the best way is to wait for actual complaints, and/or 
> actual testcases that trigger badness, and victims^H^H^H^H^H testers.
> (also note that it often takes _that precise_ workload to trigger some 
> badness. For example, make -j depends on the kind of X terminal that 
> is used - gterm behaves differently from xterm, etc.)

Interactivity will probably have to stay squishy. The DoS affairs like
fiftyp.c, tenp.c, etc. are more of what I had in mind. There are also
a number of instances where CPU bandwidth distributions in
noninteractive tests are gauged by top(1); that is exactly where the
scriptable testcases should come into play.
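
Something along these lines is what I mean by scriptable (a rough,
untested sketch; the file name and the task count and runtime
parameters are made up for illustration): fork a set of pure CPU
hogs, let them compete for a fixed wall-clock interval, then report
each task's share of CPU time from wait4()'s rusage instead of
reading it off top(1).

/* cpuhog-share.c: report the CPU bandwidth split among N competing hogs. */
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
	int ntasks = argc > 1 ? atoi(argv[1]) : 4;   /* competing hogs */
	int runtime = argc > 2 ? atoi(argv[2]) : 10; /* seconds */
	pid_t pids[ntasks];
	double secs[ntasks], total = 0.0;
	int i;

	for (i = 0; i < ntasks; i++) {
		pids[i] = fork();
		if (pids[i] == 0)	/* child: burn CPU until killed */
			for (;;)
				;
	}
	sleep(runtime);
	for (i = 0; i < ntasks; i++)
		kill(pids[i], SIGKILL);
	for (i = 0; i < ntasks; i++) {
		struct rusage ru;

		wait4(pids[i], NULL, 0, &ru);
		secs[i] = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
		total += secs[i];
	}
	for (i = 0; i < ntasks; i++)
		printf("task %d: %6.2fs (%5.1f%%)\n",
		       i, secs[i], 100.0 * secs[i] / total);
	return 0;
}

Run it before and after a scheduler change and diff the
distributions; that is the regression test, no Braille required.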

Other relatively obvious testcases for basic functionality are missing,
too. For instance, where is the testcase to prove that nice
levels have the intended effect upon CPU bandwidth distribution between
sets of CPU-bound tasks? Or one that gauges the CPU bandwidth
distribution between a task that sleeps some (command-line configurable)
percentage of the time and some (command-line configurable) number of
competing CPU-bound tasks? Or one that gauges the CPU bandwidth
distribution between sets of cooperating processes competing with
ordinary CPU-bound processes? Can it be proven that any of this is
staying constant across interactivity or other changes? Is any of it
being changed as an unintended side-effect? Are the CPU bandwidth
distributions among such sets of competing tasks even consciously decided?
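
For the nice level question in particular, roughly this (again an
untested sketch with illustrative parameters; a real testcase would
want repeated runs and statistics): spawn two sets of hogs at
different nice levels, run them against each other for a fixed
interval, and report the aggregate bandwidth each set obtained, so
the nice-to-bandwidth mapping can be compared run after run and
across scheduler changes.

/* nice-share.c: aggregate CPU bandwidth of two sets of hogs running
 * at different nice levels. */
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>
#include <sys/resource.h>

#define SET_SIZE 2	/* hogs per nice level, illustrative */

static void spawn_set(pid_t *pids, int n, int niceval)
{
	int i;

	for (i = 0; i < n; i++) {
		pids[i] = fork();
		if (pids[i] == 0) {	/* child: renice, then burn CPU */
			setpriority(PRIO_PROCESS, 0, niceval);
			for (;;)
				;
		}
	}
}

static double reap_set(pid_t *pids, int n)
{
	double secs = 0.0;
	int i;

	for (i = 0; i < n; i++) {
		struct rusage ru;

		kill(pids[i], SIGKILL);
		wait4(pids[i], NULL, 0, &ru);
		secs += ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
	}
	return secs;
}

int main(int argc, char **argv)
{
	int nice_a = argc > 1 ? atoi(argv[1]) : 0;
	int nice_b = argc > 2 ? atoi(argv[2]) : 10;
	int runtime = argc > 3 ? atoi(argv[3]) : 10;	/* seconds */
	pid_t set_a[SET_SIZE], set_b[SET_SIZE];
	double a, b;

	spawn_set(set_a, SET_SIZE, nice_a);
	spawn_set(set_b, SET_SIZE, nice_b);
	sleep(runtime);
	a = reap_set(set_a, SET_SIZE);
	b = reap_set(set_b, SET_SIZE);
	printf("nice %d set: %6.2fs, nice %d set: %6.2fs, ratio %.2f\n",
	       nice_a, a, nice_b, b, b > 0.0 ? a / b : 0.0);
	return 0;
}

The sleeper variant is the same skeleton with one set replaced by
tasks that usleep() some command-line configurable fraction of each
interval. Whether the ratios these print out are the _intended_ ones
is precisely the question that needs a documented answer.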

There should be readily available answers to these questions, but there
are none.


-- wli
