Message-ID: <20090907110146.GB6393@nowhere>
Date:	Mon, 7 Sep 2009 13:01:49 +0200
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Nikos Chantziaras <realnc@...or.de>
Cc:	linux-kernel@...r.kernel.org, Jens Axboe <jens.axboe@...cle.com>,
	Ingo Molnar <mingo@...e.hu>, Con Kolivas <kernel@...ivas.org>
Subject: Re: BFS vs. mainline scheduler benchmarks and measurements

On Mon, Sep 07, 2009 at 06:38:36AM +0300, Nikos Chantziaras wrote:
> Unfortunately, I can't come up with any way to somehow benchmark all of  
> this.  There's no benchmark for "fluidity" and "responsiveness". Running 
> the Doom 3 benchmark, or any other benchmark, doesn't say anything about 
> responsiveness, it only measures how many frames were calculated in a 
> specific period of time.  How "stable" (with no stalls) those frames were 
> making it to the screen is not measurable.



That actually sounds benchmarkable. This is about latency.
For example, you could run high-load tasks in the
background and then launch a task that wakes up at medium-to-long
intervals to do a small amount of work. You could then measure the
time between its wakeup and when it actually gets to run.
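As an aside, a crude userspace-only approximation is possible without any tracing. A minimal sketch in Python (my own illustration, not anything from the kernel tree): the periodic task times its own sleeps and records how much longer than the requested period each one took. This folds timer resolution into the number, so it's rougher than the tracepoint approach described next:

```python
import time
import statistics

# Measure "oversleep": how much later than requested each periodic
# wakeup happens. Run your background load first, then this task.
PERIOD = 0.005  # seconds; arbitrary choice for this sketch
oversleeps = []
for _ in range(50):
    t0 = time.monotonic()
    time.sleep(PERIOD)
    # elapsed time minus requested period = wakeup delay (plus timer slop)
    oversleeps.append(time.monotonic() - t0 - PERIOD)

print("max oversleep: %.6fs" % max(oversleeps))
print("avg oversleep: %.6fs" % statistics.mean(oversleeps))
```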

We have some events tracing infrastructure in the kernel that can
snapshot the wake up and sched switch events.

Having CONFIG_EVENT_TRACING=y should be sufficient for that.

You just need to mount a debugfs point, say in /debug.
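For completeness, that mount step might look like this (the /debug path is just the example used here; any mountpoint works):

```shell
# mount debugfs at /debug (/sys/kernel/debug is the conventional
# location on newer systems, but the path is arbitrary)
mkdir -p /debug
mount -t debugfs nodev /debug
```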

Then you can activate these sched events by doing:

echo 0 > /debug/tracing/tracing_on
echo 1 > /debug/tracing/events/sched/sched_switch/enable
echo 1 > /debug/tracing/events/sched/sched_wakeup/enable

#Launch your tasks

echo 1 > /debug/tracing/tracing_on

#Wait for some time

echo 0 > /debug/tracing/tracing_on

You will then need to parse the result in /debug/tracing/trace
to get the delays between the wakeup events and the switch-in events
for the task that periodically wakes up, and then produce some
statistics such as the average or the maximum latency.
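A sketch of that parsing step in Python (assuming the usual one-line-per-event text format of /debug/tracing/trace; the regexes and the helper name are my own illustration, and the exact field layout may differ between kernel versions):

```python
import re

# Assumed trace line shapes, e.g.:
#   <idle>-0  [000]  100.000100: sched_wakeup: comm=lat pid=42 prio=120 ...
#   <idle>-0  [000]  100.000350: sched_switch: ... ==> next_comm=lat next_pid=42 ...
WAKEUP = re.compile(r'\[(\d+)\]\s+([\d.]+): sched_wakeup: .*?comm=(\S+) pid=(\d+)')
SWITCH = re.compile(r'\[(\d+)\]\s+([\d.]+): sched_switch: .*?next_pid=(\d+)')

def wakeup_latencies(lines, pid):
    """Return the wakeup -> switch-in delays (in seconds) for one pid."""
    pending = None
    lats = []
    for line in lines:
        m = WAKEUP.search(line)
        if m and int(m.group(4)) == pid:
            pending = float(m.group(2))  # timestamp of the wakeup
            continue
        m = SWITCH.search(line)
        if m and int(m.group(3)) == pid and pending is not None:
            lats.append(float(m.group(2)) - pending)
            pending = None
    return lats
```

Feeding it `open('/debug/tracing/trace')` and the pid of the periodic task would then give a list you can take max/average over.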

That's a bit of a rough approach to measuring such latencies, but it
should work.


> If BFS would imply small drops in pure performance counted in  
> instructions per seconds, that would be a totally acceptable regression  
> for desktop/multimedia/gaming PCs.  Not for server machines, of course.  
> However, on my machine, BFS is faster in classic workloads.  When I run 
> "make -j2" with BFS and the standard scheduler, BFS always finishes a bit 
> faster.  Not by much, but still.  One thing I'm noticing here is that BFS 
> produces 100% CPU load on each core with "make -j2" while the normal 
> scheduler stays at about 90-95% with -j2 or higher in at least one of the 
> cores.  There seems to be under-utilization of CPU time.



That could also be benchmarked using the above sched events,
by looking at the average time each CPU spends running the idle task.
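A rough sketch of that idle-time accounting, under the same assumed trace format as above (taking pid 0 / swapper as the idle task; the helper name is hypothetical):

```python
import re

# Assumed sched_switch line shape, e.g.:
#   foo-1  [000]  100.000000: sched_switch: prev_comm=foo prev_pid=1 ... ==> next_comm=swapper next_pid=0 ...
SWITCH = re.compile(r'\[(\d+)\]\s+([\d.]+): sched_switch: '
                    r'.*?prev_pid=(\d+).*?next_pid=(\d+)')

def idle_time_per_cpu(lines):
    """Sum, per CPU, the time spent switched to the idle task (pid 0)."""
    idle_since = {}  # cpu -> timestamp of last switch to idle
    idle = {}        # cpu -> accumulated idle seconds
    for line in lines:
        m = SWITCH.search(line)
        if not m:
            continue
        cpu, ts = int(m.group(1)), float(m.group(2))
        prev_pid, next_pid = int(m.group(3)), int(m.group(4))
        if next_pid == 0:
            idle_since[cpu] = ts              # entering idle
        elif prev_pid == 0 and cpu in idle_since:
            idle[cpu] = idle.get(cpu, 0.0) + ts - idle_since.pop(cpu)
    return idle
```

Comparing the totals for the same workload under both schedulers would quantify the under-utilization you observed.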

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
