Message-Id: <1243236668-3398-22-git-send-email-jens.axboe@oracle.com>
Date: Mon, 25 May 2009 09:31:04 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Cc: chris.mason@...cle.com, david@...morbit.com, hch@...radead.org,
akpm@...ux-foundation.org, jack@...e.cz,
yanmin_zhang@...ux.intel.com, Jens Axboe <jens.axboe@...cle.com>
Subject: [PATCH 11/13] block: disallow merging of read-ahead bits into normal request
For SSD-type devices, request latency is very low. So for those types of
devices, we may not want to merge the read part of a request into the
read-ahead request that it also generates.
Add code to mpage.c to properly propagate read vs reada information to
the block layer and let the elevator core check and prevent such merges.
Signed-off-by: Jens Axboe <jens.axboe@...cle.com>
---
block/elevator.c | 7 +++++++
fs/mpage.c | 30 ++++++++++++++++++++++++------
2 files changed, 31 insertions(+), 6 deletions(-)
diff --git a/block/elevator.c b/block/elevator.c
index 6261b24..17cfaa2 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -67,6 +67,13 @@ static int elv_iosched_allow_merge(struct request *rq, struct bio *bio)
{
struct request_queue *q = rq->q;
+ /*
+ * Disallow merge of a read-ahead bio into a normal request for SSD
+ */
+ if (blk_queue_nonrot(q) &&
+ bio_rw_ahead(bio) && !(rq->cmd_flags & REQ_FAILFAST_DEV))
+ return 0;
+
if (q->elv_ops.elevator_allow_merge_fn)
return elv_call_allow_merge_fn(q, rq, bio);
diff --git a/fs/mpage.c b/fs/mpage.c
index 680ba60..d02cf51 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -180,11 +180,18 @@ do_mpage_readpage(struct bio *bio, struct page *page, unsigned nr_pages,
unsigned page_block;
unsigned first_hole = blocks_per_page;
struct block_device *bdev = NULL;
- int length;
+ int length, rw;
int fully_mapped = 1;
unsigned nblocks;
unsigned relative_block;
+ /*
+ * If there's some read-ahead in this range, be sure to tell
+ * the block layer about it. We start off as a READ, then switch
+ * to READA if we spot the read-ahead marker on the page.
+ */
+ rw = READ;
+
if (page_has_buffers(page))
goto confused;
@@ -289,7 +296,7 @@ do_mpage_readpage(struct bio *bio, struct page *page, unsigned nr_pages,
* This page will go to BIO. Do we need to send this BIO off first?
*/
if (bio && (*last_block_in_bio != blocks[0] - 1))
- bio = mpage_bio_submit(READ, bio);
+ bio = mpage_bio_submit(rw, bio);
alloc_new:
if (bio == NULL) {
@@ -301,8 +308,19 @@ alloc_new:
}
length = first_hole << blkbits;
- if (bio_add_page(bio, page, length, 0) < length) {
- bio = mpage_bio_submit(READ, bio);
+
+ /*
+ * If this is an SSD, don't merge the read-ahead part of the IO
+ * with the actual request. We want the interesting part to complete
+ * as quickly as possible.
+ */
+ if (blk_queue_nonrot(bdev_get_queue(bdev)) &&
+ bio->bi_size && PageReadahead(page)) {
+ bio = mpage_bio_submit(rw, bio);
+ rw = READA;
+ goto alloc_new;
+ } else if (bio_add_page(bio, page, length, 0) < length) {
+ bio = mpage_bio_submit(rw, bio);
goto alloc_new;
}
@@ -310,7 +328,7 @@ alloc_new:
nblocks = map_bh->b_size >> blkbits;
if ((buffer_boundary(map_bh) && relative_block == nblocks) ||
(first_hole != blocks_per_page))
- bio = mpage_bio_submit(READ, bio);
+ bio = mpage_bio_submit(rw, bio);
else
*last_block_in_bio = blocks[blocks_per_page - 1];
out:
@@ -318,7 +336,7 @@ out:
confused:
if (bio)
- bio = mpage_bio_submit(READ, bio);
+ bio = mpage_bio_submit(rw, bio);
if (!PageUptodate(page))
block_read_full_page(page, get_block);
else
--
1.6.3.rc0.1.gf800