Date:	Mon, 30 Jan 2012 15:09:12 +0800
From:	Shaohua Li <shaohua.li@...el.com>
To:	axboe@...nel.dk
Cc:	linux-kernel@...r.kernel.org, vgoyal@...hat.com,
	david@...morbit.com, jack@...e.cz, zhu.yanhai@...il.com,
	namhyung.kim@....com
Subject: Re: [patch v2 0/8]block: An IOPS based ioscheduler

On Mon, 2012-01-30 at 15:02 +0800, Shaohua Li wrote:
> An IOPS based I/O scheduler
> 
> Flash based storage has some characteristics that differ from rotating disks:
> 1. No I/O seek.
> 2. Read and write I/O costs usually differ significantly.
> 3. The time a request takes depends on the request size.
> 4. High throughput and IOPS, low latency.
> 
> CFQ iosched does well for rotating disks, for example fair dispatching and
> idling for sequential reads. It also has optimizations for flash based storage
> (for item 1 above), but overall it's not designed for flash based storage. It's
> a slice based algorithm. Since the per-request cost of flash based storage is
> very low, and drives with a big queue_depth are quite common now, which makes
> the dispatching cost even lower, CFQ's jiffy based slice accounting doesn't
> work well. CFQ also doesn't consider items 2 & 3 above.
> 
> The FIOPS (Fair IOPS) ioscheduler tries to fill these gaps. It's IOPS based, so
> it only targets drives without I/O seek. It's quite similar to CFQ, but the
> dispatch decision is made according to IOPS instead of time slices.
> 
> To illustrate the design goals, let's compare Noop and CFQ:
> Noop: best throughput; no fairness and high latency for sync workloads.
> CFQ: lower throughput in some cases; fairness and low latency for sync workloads.
> CFQ throughput is sometimes low because it doesn't drive a deep queue depth.
> FIOPS adopts some merits of CFQ, for example fairness and a bias toward sync
> workloads, and it will be faster than CFQ in general.
> 
> Note, if the workload iodepth is low, there is no way to maintain fairness
> without sacrificing performance; CFQ can't either. In such cases, FIOPS chooses
> not to lose performance, because flash based storage is usually very fast and
> expensive, so performance matters more.
> 
> The algorithm is simple. The drive has a service tree, and each task lives in
> the tree. The key into the tree is called vios (virtual I/O). Every request has
> a vios, which is calculated according to its ioprio, request size and so on. A
> task's vios is the sum of the vios of all requests it dispatches. FIOPS always
> selects the task with the minimum vios in the service tree and lets that task
> dispatch a request. The dispatched request's vios is then added to the task's
> vios and the task is repositioned in the service tree.
> 
> Benchmark results:
> The SSD I'm using: max throughput 250MB/s read, 80MB/s write; max IOPS for 4k
> requests 40k/s read, 20k/s write.
> Latency and fairness tests are done on a desktop with one SSD and the kernel
> parameter mem=1G. I'll compare noop, cfq and fiops under such workloads. The
> test script and results are attached.
Attached are the fio scripts I used for the latency and fairness tests, along
with the data.
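
For reference, below is a minimal userspace sketch of the vios bookkeeping
described in the quoted algorithm paragraph above. All names, constants and
scaling factors are illustrative assumptions, not code from the patch; the
patch's service tree (keyed by vios) plays the role of the linear minimum
scan here.

#include <stdio.h>

#define NR_TASKS   3
#define VIOS_READ  1   /* assumed base cost of one 4k read */
#define VIOS_WRITE 2   /* assumed: writes are charged more (item 2 above) */

struct task {
	const char    *name;
	unsigned long  vios;   /* virtual I/O charged so far */
};

/* Cost of one request: base cost scaled by request size (item 3 above). */
static unsigned long request_vios(int is_write, unsigned int kbytes)
{
	unsigned long base = is_write ? VIOS_WRITE : VIOS_READ;

	return base * ((kbytes + 3) / 4);   /* charge per 4k unit */
}

/*
 * FIOPS-style selection: the task with the minimum vios dispatches next.
 * A linear scan stands in for picking the minimum-keyed task in the
 * service tree.
 */
static struct task *select_min_vios(struct task *tasks, int n)
{
	struct task *min = &tasks[0];

	for (int i = 1; i < n; i++)
		if (tasks[i].vios < min->vios)
			min = &tasks[i];
	return min;
}

int main(void)
{
	struct task tasks[NR_TASKS] = {
		{ "reader-4k",  0 },
		{ "writer-4k",  0 },
		{ "reader-64k", 0 },
	};

	for (int i = 0; i < 12; i++) {
		struct task *t = select_min_vios(tasks, NR_TASKS);
		int is_write = (t == &tasks[1]);
		unsigned int kbytes = (t == &tasks[2]) ? 64 : 4;

		/* charge the dispatched request, i.e. "reposition" the task */
		t->vios += request_vios(is_write, kbytes);
		printf("dispatch %2d: %-10s vios=%lu\n", i, t->name, t->vios);
	}
	return 0;
}

Running this, the cheap 4k reader is dispatched more often than the 4k writer
or the 64k reader, since their requests are charged more vios per dispatch;
that is the fairness behaviour described above.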

Download attachment "fiops-data.tgz" of type "application/x-compressed-tar" (14188 bytes)
