Date:	Mon, 24 Jul 2006 18:33:26 +0200
From:	Arjan van de Ven <arjan@...radead.org>
To:	Al Boldi <a1426z@...ab.com>
Cc:	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: CFQ will be the new default IO scheduler - why?


> > Should there be a default scheduler per filesystem?  Some filesystems
> > may perform better or worse with one scheduler than another.
> 
> It's currently per-device, and should probably be extended to per-mount.

Hi,

per mount is going to be "not funny". I assume the situation you are
aiming for is "3 partitions on a disk, each wants its own elevator".
The way the kernel currently works is that the I/O requests a
filesystem issues are first flattened into I/O for the entire device
(i.e. the partition remapping is done) and only THEN does the I/O
scheduler get involved, to schedule the I/O on a per-disk basis.
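
(For illustration, the remap step is roughly the following. This is a
simplified sketch in the spirit of the kernel's blk_partition_remap();
it is not the actual kernel code, and the struct and field names below
are made-up stand-ins:)

    /* Flatten a partition-relative request onto the whole disk.
     * All names here are simplified stand-ins, not real kernel types. */
    struct sketch_bio  { unsigned long sector; };     /* request start sector */
    struct sketch_part { unsigned long start_sect; }; /* partition offset     */

    static void partition_remap(struct sketch_bio *bio,
                                const struct sketch_part *part)
    {
            /* After this step the elevator only ever sees whole-disk
             * sector numbers: one queue per disk, never per partition. */
            bio->sector += part->start_sect;
    }
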
The 2.4 kernel did this the other way around, and it was a really bad
idea: no fairness, less optimal scheduling all around due to less
visibility into what the disk is really doing, and several hardware
properties that affect I/O scheduling, such as TCQ depth, are truly
per disk, not per partition, and so on.

So I don't think per mount is really an option right now..
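
(For reference, the existing per-device knob: since the elevator hangs
off the request queue, you select it through sysfs per disk, not per
partition or per mount. A minimal sketch in C, assuming a 2.6 kernel
with sysfs mounted and a disk at /dev/sda:)

    #include <stdio.h>

    int main(void)
    {
            /* Select the CFQ elevator for the whole disk; writing to
             * this file affects every partition on sda at once. */
            FILE *f = fopen("/sys/block/sda/queue/scheduler", "w");
            if (!f) {
                    perror("fopen");
                    return 1;
            }
            fputs("cfq\n", f);
            fclose(f);
            return 0;
    }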

Greetings,
   Arjan van de Ven

--
if you want to mail me at work, send mail to arjan (at) linux.intel.com

