Message-ID: <44C5A529.9060306@linux.intel.com>
Date:	Tue, 25 Jul 2006 06:59:21 +0200
From:	Arjan van de Ven <arjan@...ux.intel.com>
To:	Al Boldi <a1426z@...ab.com>
CC:	Arjan van de Ven <arjan@...radead.org>,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: CFQ will be the new default IO scheduler - why?

Al Boldi wrote:
> Arjan van de Ven wrote:
>>>> Should there be a default scheduler per filesystem?  As some
>>>> filesystems may perform better/worse with one over another?
>>> It's currently perDevice, and should probably be extended to perMount.
>> Hi,
> 
> Hi!
> 
>> per mount is going to be "not funny". I assume the situation you are
>> aiming for is "3 partitions on a disk, each wanting its own elevator".
>> The way the kernel currently works is that the IO requests a filesystem
>> issues are first flattened into IO for the entire device (e.g. the
>> partition mapping is applied), and only THEN does the IO scheduler get
>> involved, scheduling the IO on a per-disk basis.
> 
> IC.  That probably explains why concurrent IO processes have such a hard time
> getting through to the disk.  They probably just hang in the flattening phase,
> waiting for something to take care of their requests.
> 
Flattening is just an addition in the CPU; that's so cheap it shouldn't be
visible anywhere performance-wise.
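
A minimal userspace sketch of the "flattening" described above, using made-up
struct and function names rather than the real block-layer code (in the 2.6
kernels the equivalent addition is applied to the bio during the partition
remap, before the request ever reaches the elevator):

/*
 * Sketch only: hypothetical types and names, not the actual kernel code.
 * The point is that the remap is a single addition per request.
 */
#include <stdio.h>

typedef unsigned long long sector_t;

struct partition {
	sector_t start_sect;	/* first sector of the partition on the disk */
	sector_t nr_sects;	/* size of the partition in sectors */
};

struct request {
	sector_t sector;	/* partition-relative before the remap */
	unsigned int nr_sectors;
};

/* Remap a partition-relative request to a whole-device request. */
static void partition_remap(struct request *rq, const struct partition *p)
{
	rq->sector += p->start_sect;	/* the "addition in the CPU" */
}

int main(void)
{
	struct partition p2 = { .start_sect = 20971520, .nr_sects = 41943040 };
	struct request rq = { .sector = 4096, .nr_sectors = 8 };

	partition_remap(&rq, &p2);
	/* Only now would the per-device elevator (CFQ, AS, deadline, noop)
	 * see the request, queued against the whole disk. */
	printf("device-relative sector: %llu\n", rq.sector);
	return 0;
}

The only per-request work here is the one addition of the partition's start
sector, which is why it cannot plausibly be where concurrent IO gets stuck.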
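
And because the elevator is selected per device rather than per mount, it can
be switched at runtime through the device's sysfs queue attribute. A sketch,
assuming a 2.6 kernel with sysfs mounted on /sys and "sda"/"cfq" as example
device and scheduler names:

/*
 * Sketch only: select the elevator for one whole device at runtime by
 * writing to its sysfs "scheduler" attribute.  Needs root; the device
 * name and scheduler below are just examples.
 */
#include <stdio.h>

int main(void)
{
	const char *attr = "/sys/block/sda/queue/scheduler";
	FILE *f = fopen(attr, "w");

	if (!f) {
		perror(attr);
		return 1;
	}
	/* This affects every partition (and mount) on the disk at once. */
	fputs("cfq\n", f);
	fclose(f);
	return 0;
}

The elevator= boot parameter sets the default for all devices at boot; neither
knob gives per-mount control.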
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
