Date:	Mon, 27 Oct 2008 19:33:12 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Alan Cox <alan@...rguk.ukuu.org.uk>
Cc:	Jiri Kosina <jkosina@...e.cz>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Mike Galbraith <efault@....de>,
	David Miller <davem@...emloft.net>, rjw@...k.pl,
	s0mbre@...rvice.net.ru, linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org
Subject: Re: [tbench regression fixes]: digging out smelly deadmen.


* Alan Cox <alan@...rguk.ukuu.org.uk> wrote:

> > To get the best possible dbench numbers in CPU-bound dbench runs, 
> > you have to throw away the scheduler completely and do this 
> > instead:
> > 
> >  - first execute all requests of client 1
> >  - then execute all requests of client 2
> >  ....
> >  - execute all requests of client N
> 
> Rubbish. [...]

I actually implemented that about a decade ago: I tracked down what 
makes dbench tick and implemented the kernel heuristics for it to 
make dbench scale linearly with the number of clients - just to be 
shot down by Linus over my utterly rubbish approach ;-)
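
for concreteness, the quoted recipe amounts to something like the toy 
user-space sketch below (the client/request structures and 
handle_request() are made up for illustration - this is not that old 
patch, just the ordering it implemented):

/* toy illustration only - not kernel code, not the old patch */
#include <stdio.h>

#define NR_CLIENTS  4
#define NR_REQUESTS 3

struct request { int id; };

struct client {
	struct request reqs[NR_REQUESTS];
	int next;			/* next unhandled request */
};

/* stand-in for doing the actual work of one dbench request */
static void handle_request(int client, struct request *req)
{
	printf("client %d: request %d\n", client, req->id);
}

int main(void)
{
	struct client clients[NR_CLIENTS] = { 0 };
	int i, j;

	for (i = 0; i < NR_CLIENTS; i++)
		for (j = 0; j < NR_REQUESTS; j++)
			clients[i].reqs[j].id = j;

	/*
	 * "throw away the scheduler": drain client 0 completely,
	 * then client 1, then client 2, ...
	 */
	for (i = 0; i < NR_CLIENTS; i++)
		while (clients[i].next < NR_REQUESTS)
			handle_request(i, &clients[i].reqs[clients[i].next++]);

	return 0;
}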

> [...] If you do that you'll not get enough I/O in parallel to 
> schedule the disk well (not that most of our I/O schedulers are 
> doing the job well, and the vm writeback threads then mess it up and 
> the lack of Arjan's ioprio fixes then totally screws you) </rant>

The best dbench results come from systems that have enough RAM to 
cache the full working set, and a filesystem intelligent enough not 
to insert bogus IO serialization cycles (ext3 is not such a filesystem).

The moment there's real IO it becomes harder to analyze, but the same 
basic behavior remains: the more unfair the IO scheduler, the "better" 
the dbench results we get.
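
for contrast, the fair ordering as the same kind of toy sketch - one 
request per client per round, roughly what a fair scheduler 
approximates (again made-up structures, nothing lifted from the real 
IO schedulers):

/* toy illustration only - not kernel code */
#include <stdio.h>

#define NR_CLIENTS  4
#define NR_REQUESTS 3

struct request { int id; };

struct client {
	struct request reqs[NR_REQUESTS];
	int next;			/* next unhandled request */
};

static void handle_request(int client, struct request *req)
{
	printf("client %d: request %d\n", client, req->id);
}

int main(void)
{
	struct client clients[NR_CLIENTS] = { 0 };
	int i, j, left = NR_CLIENTS * NR_REQUESTS;

	for (i = 0; i < NR_CLIENTS; i++)
		for (j = 0; j < NR_REQUESTS; j++)
			clients[i].reqs[j].id = j;

	/*
	 * fair ordering: one request per client per round.  every
	 * client makes steady progress, but the long per-client
	 * batches that inflate the headline dbench number are gone.
	 */
	while (left)
		for (i = 0; i < NR_CLIENTS; i++)
			if (clients[i].next < NR_REQUESTS) {
				handle_request(i, &clients[i].reqs[clients[i].next++]);
				left--;
			}

	return 0;
}

both sketches do exactly the same total work; only the order changes - 
and that order is all a throughput-only number like the dbench result 
reacts to.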

	Ingo