Message-Id: <20220523165812.360622432@linuxfoundation.org>
Date: Mon, 23 May 2022 19:05:27 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Oleksandr Natalenko <oleksandr@...alenko.name>,
Ming Lei <ming.lei@...hat.com>, Jens Axboe <axboe@...nel.dk>,
Gwendal Grignou <gwendal@...omium.org>
Subject: [PATCH 5.4 60/68] block: return ELEVATOR_DISCARD_MERGE if possible

From: Ming Lei <ming.lei@...hat.com>

commit 866663b7b52d2da267b28e12eed89ee781b8fed1 upstream.

When merging a bio into a request, if both are discard I/O and the queue
supports multi-range discard, we need to return ELEVATOR_DISCARD_MERGE,
because neither the block core nor the related drivers (nvme, virtio-blk)
handle mixed discard merging (traditional I/O merging combined with
discard merging) well.

Fix the issue by returning ELEVATOR_DISCARD_MERGE in this situation,
so both blk-mq and drivers just need to handle multi-range discard.
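
For readers who want to poke at the decision outside a kernel tree, here is
a minimal, self-contained userspace sketch of the check this patch
centralizes. The mock_* types, the field layout and main() are illustrative
stand-ins rather than the kernel's structures or API; only the condition
mirrors the blk_discard_mergable() helper added below.

/* Illustrative userspace model of the merge decision; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

enum elv_merge {
        ELEVATOR_NO_MERGE,
        ELEVATOR_FRONT_MERGE,
        ELEVATOR_BACK_MERGE,
        ELEVATOR_DISCARD_MERGE,
};

enum req_op { REQ_OP_READ, REQ_OP_WRITE, REQ_OP_DISCARD };

/* Stand-ins for the struct request / struct request_queue fields used here. */
struct mock_queue   { unsigned int max_discard_segments; };
struct mock_request { enum req_op op; struct mock_queue *q; };

/* Same condition as blk_discard_mergable(): a discard request on a queue
 * that accepts more than one discard segment is merged as a discard range,
 * without requiring the ranges to be contiguous. */
static bool discard_mergable(const struct mock_request *req)
{
        return req->op == REQ_OP_DISCARD &&
               req->q->max_discard_segments > 1;
}

/* What the elevator hooks in this patch do: report ELEVATOR_DISCARD_MERGE
 * first, and only fall back to the positional merge type otherwise. */
static enum elv_merge classify_merge(const struct mock_request *req,
                                     enum elv_merge positional)
{
        if (discard_mergable(req))
                return ELEVATOR_DISCARD_MERGE;
        return positional;
}

int main(void)
{
        struct mock_queue q = { .max_discard_segments = 256 };
        struct mock_request rq = { .op = REQ_OP_DISCARD, .q = &q };

        /* Prints 3, i.e. ELEVATOR_DISCARD_MERGE in this mock enum. */
        printf("merge type: %d\n", classify_merge(&rq, ELEVATOR_FRONT_MERGE));
        return 0;
}

Compiled with any C compiler (e.g. cc demo.c), the sketch reports a discard
merge whenever the queue advertises multi-range discard, which is exactly
the early return the three elevators gain in the hunks below.
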
Reported-by: Oleksandr Natalenko <oleksandr@...alenko.name>
Signed-off-by: Ming Lei <ming.lei@...hat.com>
Tested-by: Oleksandr Natalenko <oleksandr@...alenko.name>
Fixes: 2705dfb20947 ("block: fix discard request merge")
Link: https://lore.kernel.org/r/20210729034226.1591070-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@...nel.dk>
Signed-off-by: Gwendal Grignou <gwendal@...omium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 block/bfq-iosched.c     |    3 +++
 block/blk-merge.c       |   15 ---------------
 block/elevator.c        |    3 +++
 block/mq-deadline.c     |    2 ++
 include/linux/blkdev.h  |   16 ++++++++++++++++
 5 files changed, 24 insertions(+), 15 deletions(-)

--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -2251,6 +2251,9 @@ static int bfq_request_merge(struct requ
 	__rq = bfq_find_rq_fmerge(bfqd, bio, q);
 	if (__rq && elv_bio_merge_ok(__rq, bio)) {
 		*req = __rq;
+
+		if (blk_discard_mergable(__rq))
+			return ELEVATOR_DISCARD_MERGE;
 		return ELEVATOR_FRONT_MERGE;
 	}
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -721,21 +721,6 @@ static void blk_account_io_merge(struct
 		part_stat_unlock();
 	}
 }
-/*
- * Two cases of handling DISCARD merge:
- * If max_discard_segments > 1, the driver takes every bio
- * as a range and send them to controller together. The ranges
- * needn't to be contiguous.
- * Otherwise, the bios/requests will be handled as same as
- * others which should be contiguous.
- */
-static inline bool blk_discard_mergable(struct request *req)
-{
-	if (req_op(req) == REQ_OP_DISCARD &&
-	    queue_max_discard_segments(req->q) > 1)
-		return true;
-	return false;
-}
 static enum elv_merge blk_try_req_merge(struct request *req,
 					struct request *next)
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -337,6 +337,9 @@ enum elv_merge elv_merge(struct request_
 	__rq = elv_rqhash_find(q, bio->bi_iter.bi_sector);
 	if (__rq && elv_bio_merge_ok(__rq, bio)) {
 		*req = __rq;
+
+		if (blk_discard_mergable(__rq))
+			return ELEVATOR_DISCARD_MERGE;
 		return ELEVATOR_BACK_MERGE;
 	}
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -452,6 +452,8 @@ static int dd_request_merge(struct reque
 		if (elv_bio_merge_ok(__rq, bio)) {
 			*rq = __rq;
+			if (blk_discard_mergable(__rq))
+				return ELEVATOR_DISCARD_MERGE;
 			return ELEVATOR_FRONT_MERGE;
 		}
 	}
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1409,6 +1409,22 @@ static inline int queue_limit_discard_al
 	return offset << SECTOR_SHIFT;
 }
+/*
+ * Two cases of handling DISCARD merge:
+ * If max_discard_segments > 1, the driver takes every bio
+ * as a range and send them to controller together. The ranges
+ * needn't to be contiguous.
+ * Otherwise, the bios/requests will be handled as same as
+ * others which should be contiguous.
+ */
+static inline bool blk_discard_mergable(struct request *req)
+{
+	if (req_op(req) == REQ_OP_DISCARD &&
+	    queue_max_discard_segments(req->q) > 1)
+		return true;
+	return false;
+}
+
 static inline int bdev_discard_alignment(struct block_device *bdev)
 {
 	struct request_queue *q = bdev_get_queue(bdev);
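
A side note on the comment that moves into blkdev.h above: the two cases it
describes can be illustrated with another small, self-contained userspace
sketch. The function, field names and sector numbers below are assumptions
made up for illustration, simplified from the behaviour the comment
describes; they are not taken from nvme, virtio-blk or the block core.

/* Simplified userspace illustration of the "two cases" of DISCARD merge
 * described in the comment above; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct range { unsigned long long start, len; };        /* in sectors */

/* Decide whether one more discard range may join a discard request that
 * already carries nr_segments ranges and currently ends at req_end. */
static bool discard_ranges_can_merge(unsigned int max_discard_segments,
                                     unsigned int nr_segments,
                                     unsigned long long req_end,
                                     const struct range *next)
{
        if (max_discard_segments > 1) {
                /* Case 1: multi-range discard. Every bio becomes one range
                 * sent to the controller together; contiguity is not
                 * required, only the segment limit matters. */
                return nr_segments + 1 <= max_discard_segments;
        }
        /* Case 2: single-range discard. Treated like ordinary I/O, so the
         * new range must start exactly where the request ends. */
        return next->start == req_end;
}

int main(void)
{
        /* A range that is NOT contiguous with a request ending at 2048. */
        struct range next = { .start = 4096, .len = 8 };

        printf("multi-range queue:  %d\n",
               discard_ranges_can_merge(256, 3, 2048, &next));  /* 1 */
        printf("single-range queue: %d\n",
               discard_ranges_can_merge(1, 1, 2048, &next));    /* 0 */
        return 0;
}

Only the first case lets the non-contiguous range join the request, which is
why the elevators must report ELEVATOR_DISCARD_MERGE whenever the queue
supports multi-range discard instead of falling through to a positional
front/back merge.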