Message-ID:  <49DA7829.5090709@tmr.com>
Date:	Mon, 06 Apr 2009 17:46:17 -0400
From:	Bill Davidsen <davidsen@....com>
To:	linux-kernel@...r.kernel.org
Cc:	Jens Axboe <jens.axboe@...cle.com>, Nick Piggin <npiggin@...e.de>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Lennart Sorensen <lsorense@...lub.uwaterloo.ca>,
	Andrew Morton <akpm@...ux-foundation.org>, tytso@....edu,
	drees76@...il.com, jesper@...gh.cc,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject:  Re: Linux 2.6.29

Ingo Molnar wrote:

> Ergo, i think pluggable designs for something as critical and as 
> central as IO scheduling has its clear downsides as it created two 
> mediocre schedulers:
> 
>  - CFQ with all the modern features but performance problems on 
>    certain workloads
> 
>  - Anticipatory with legacy features only but works (much!) better 
>    on some workloads.
> 
> ... instead of giving us just a single well-working CFQ scheduler.
> 
> This, IMHO, in its current form, seems to trump the upsides of IO 
> schedulers.
> 
> So i do think that late during development (i.e. now), _years_ down 
> the line, we should make it gradually harder for people to use AS.
> 
I rarely disagree with you, and even more rarely feel like arguing a point in
public, but you are basing your whole opinion on the premise that it is possible
to have one I/O scheduler which handles all cases. That seems obviously wrong:
some kinds of activity can be addressed with tuning or adaptation, but in other
cases you need a whole different approach, and you need to be able to lock that
approach in even when some metric says something else would be "better", where
"better" is as seen by the developer rather than the user.


> What do you think?
> 
I think that by trying to create "one size fits all" you will hit a significant
number of cases where it really doesn't fit well, and you will need so many
tuning features, both automatic and manual, that you wind up with code which is
big, inefficient, confusing to tune, hard to maintain, and not really optimal
for any one thing.

What we have now is easy to test, and the behavior differs enough in most cases
that you can tell which scheduler is best, or at least that a change didn't
help. I have watched enough long threads and chats about tuning the VM
(dirty_*, swappiness, etc.) to know that in most cases the answer is a faster
disk or more memory, not tuning to be "less unsatisfactory." Several distinct
I/O schedulers are a good thing; one complex, bland one would not be.
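
For what it's worth, that testing really is just flipping the scheduler per
device through sysfs and rerunning the workload. A rough sketch in Python (my
own illustration, not from this thread; the device name "sda" is an
assumption):

    from pathlib import Path

    def available_schedulers(dev="sda"):
        # The sysfs file lists the compiled-in schedulers, with the active
        # one in brackets, e.g. "noop anticipatory deadline [cfq]".
        text = Path(f"/sys/block/{dev}/queue/scheduler").read_text()
        return [s.strip("[]") for s in text.split()]

    def set_scheduler(dev, name):
        # Needs root; the kernel rejects names not offered in the list above.
        Path(f"/sys/block/{dev}/queue/scheduler").write_text(name)

    if __name__ == "__main__":
        print(available_schedulers("sda"))
        # set_scheduler("sda", "anticipatory")  # then rerun the workload

Run the same workload under each scheduler and compare; that kind of easy A/B
comparison is exactly what goes away if only one scheduler survives.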

-- 
Bill Davidsen <davidsen@....com>
   "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot

