Message-ID:  <46536D52.9010707@tmr.com>
Date:	Tue, 22 May 2007 18:23:14 -0400
From:	Bill Davidsen <davidsen@....com>
To:	linux-kernel@...r.kernel.org
Cc:	Linux Kernel mailing List <linux-kernel@...r.kernel.org>
Subject:  Re: Scheduling tests on IPC methods, fc6, sd0.48, cfs12

Ingo Molnar wrote:
> * Bill Davidsen <davidsen@....com> wrote:
> 
>> I have posted the results of my initial testing, measuring IPC rates 
>> using various schedulers under no load, limited nice load, and heavy 
>> load at nice 0.
>>
>> http://www.tmr.com/~davidsen/ctxbench_testing.html
> 
> nice! For this to become really representative, though, I'd like to ask 
> for a real workload function to be used after the task gets the 
> lock/message. The reason is that there is an inherent balancing conflict 
> in this area: should the scheduler 'spread' tasks to other CPUs or not? 
> In general, for all workloads that matter, the answer is almost always: 
> 'yes, it should'.
> 
Added to the short to-do list. Note that this was originally simply a 
check to see which IPC method works best (or at all) on a given OS. It 
has been useful for some other things, and an option to add a workload 
function will be forthcoming.

> But in your ctxbench results the work a task performs after doing IPC is 
> not reflected (the benchmark immediately goes on to the next IPC, 
> thereby penalizing scheduling strategies that move tasks to other 
> CPUs), so the benefit of a scheduler properly spreading out tasks is 
> not measured fairly. A real-life IPC workload is rarely just about 
> passing messages around (a single task could do that by itself) - some 
> real workload function is used. You can see this effect yourself: do a 
> "taskset -p 01 $$" before running ctxbench and you'll see the numbers 
> improve significantly on all of the schedulers.
> 
> As a solution I'd suggest adding a workload function with a 100 or 200 
> usec (or larger) cost (as a fixed-length loop or something like that) 
> so that the 'spreading' effect/benefit gets measured fairly too.
> 
Can do.

-- 
Bill Davidsen <davidsen@....com>
   "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot

