Message-Id: <1218701835-17327-4-git-send-email-aaronc@gelato.unsw.edu.au>
Date: Thu, 14 Aug 2008 18:17:15 +1000
From: Aaron Carroll <aaronc@...ato.unsw.edu.au>
To: jens.axboe@...cle.com
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH 3/3] block: update documentation for deadline fifo_batch tunable

Update the description of fifo_batch to match the current implementation,
and include a description of how to tune it.

Signed-off-by: Aaron Carroll <aaronc@...ato.unsw.edu.au>
---
Documentation/block/deadline-iosched.txt | 14 ++++++++++----
1 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/Documentation/block/deadline-iosched.txt b/Documentation/block/deadline-iosched.txt
index c23cab1..7257676 100644
--- a/Documentation/block/deadline-iosched.txt
+++ b/Documentation/block/deadline-iosched.txt
@@ -30,12 +30,18 @@ write_expire (in ms)
Similar to read_expire mentioned above, but for writes.
-fifo_batch
+fifo_batch (number of requests)
----------
-When a read request expires its deadline, we must move some requests from
-the sorted io scheduler list to the block device dispatch queue. fifo_batch
-controls how many requests we move.
+Requests are grouped into ``batches'' of a particular data direction (read or
+write) which are serviced in increasing sector order. To limit extra seeking,
+deadline expiries are only checked between batches. fifo_batch controls the
+maximum number of requests per batch.
+
+This parameter tunes the balance between per-request latency and aggregate
+throughput. When low latency is the primary concern, smaller is better (where
+a value of 1 yields first-come first-served behaviour). Increasing fifo_batch
+generally improves throughput, at the cost of latency variation.
writes_starved (number of dispatches)
--
1.5.4.5
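
For completeness, fifo_batch can be inspected and adjusted at runtime through
sysfs when the deadline scheduler is active on a device. The sketch below is a
minimal C illustration, assuming a hypothetical device named "sda" and root
privileges; the value written (1, to favour latency over throughput) is purely
an example, not a recommendation.

    /*
     * Minimal sketch: read and adjust the deadline scheduler's fifo_batch
     * tunable via sysfs.  Assumes the deadline scheduler is selected for
     * the (hypothetical) device "sda"; adjust the path for other devices.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define FIFO_BATCH_PATH "/sys/block/sda/queue/iosched/fifo_batch"

    int main(void)
    {
            FILE *f;
            int val;

            /* Read the current maximum batch size. */
            f = fopen(FIFO_BATCH_PATH, "r");
            if (!f) {
                    perror("fopen");
                    return EXIT_FAILURE;
            }
            if (fscanf(f, "%d", &val) != 1) {
                    fclose(f);
                    fprintf(stderr, "unexpected sysfs contents\n");
                    return EXIT_FAILURE;
            }
            fclose(f);
            printf("current fifo_batch: %d\n", val);

            /*
             * Favour per-request latency: a batch size of 1 approximates
             * first-come first-served behaviour (illustrative value only).
             */
            f = fopen(FIFO_BATCH_PATH, "w");
            if (!f) {
                    perror("fopen (writing usually requires root)");
                    return EXIT_FAILURE;
            }
            fprintf(f, "%d\n", 1);
            fclose(f);

            return EXIT_SUCCESS;
    }

The same adjustment is commonly made from a shell by writing the desired value
to /sys/block/<dev>/queue/iosched/fifo_batch; the C form above is shown only to
keep the example self-contained.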