Message-ID: <1654770559-101375-2-git-send-email-john.garry@huawei.com>
Date: Thu, 9 Jun 2022 18:29:02 +0800
From: John Garry <john.garry@...wei.com>
To: <axboe@...nel.dk>, <damien.lemoal@...nsource.wdc.com>,
<jejb@...ux.ibm.com>, <martin.petersen@...cle.com>,
<brking@...ibm.com>, <hare@...e.de>, <hch@....de>
CC: <linux-block@...r.kernel.org>, <linux-ide@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <linux-scsi@...r.kernel.org>,
<chenxiang66@...ilicon.com>, John Garry <john.garry@...wei.com>
Subject: [PATCH RFC v2 01/18] blk-mq: Add a flag for reserved requests
Add a flag for reserved requests so that drivers can tell that a request
is reserved and apply any special handling.

The 'reserved' argument to the blk_mq_ops.timeout callback can then be
replaced by checking this flag.
Signed-off-by: John Garry <john.garry@...wei.com>
---
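Not for merging, just to illustrate the intended use: a minimal sketch of a
driver timeout handler that keys off the new flag instead of the 'reserved'
bool argument. my_drv_timeout() and the reset-timer policy are made up for
the example and are not part of this patch:

	#include <linux/blk-mq.h>

	/* Hypothetical driver timeout handler; 'reserved' is no longer needed. */
	static enum blk_eh_timer_return my_drv_timeout(struct request *rq, bool reserved)
	{
		/*
		 * RQF_RESV is set at allocation time for BLK_MQ_REQ_RESERVED
		 * requests, so the driver can query the request itself rather
		 * than rely on the callback argument.
		 */
		if (blk_mq_is_reserved_rq(rq))
			return BLK_EH_RESET_TIMER;	/* e.g. give reserved commands more time */

		return BLK_EH_DONE;
	}

Reserved requests themselves are still allocated with BLK_MQ_REQ_RESERVED;
this patch just records that in rq->rq_flags as RQF_RESV.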
block/blk-mq.c | 6 ++++++
include/linux/blk-mq.h | 6 ++++++
2 files changed, 12 insertions(+)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index e9bf950983c7..23f2eafb09ca 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -474,6 +474,9 @@ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
 	if (!(data->rq_flags & RQF_ELV))
 		blk_mq_tag_busy(data->hctx);
 
+	if (data->flags & BLK_MQ_REQ_RESERVED)
+		data->rq_flags |= RQF_RESV;
+
 	/*
 	 * Try batched alloc if we want more than 1 tag.
 	 */
@@ -586,6 +589,9 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
 	else
 		data.rq_flags |= RQF_ELV;
 
+	if (flags & BLK_MQ_REQ_RESERVED)
+		data.rq_flags |= RQF_RESV;
+
 	ret = -EWOULDBLOCK;
 	tag = blk_mq_get_tag(&data);
 	if (tag == BLK_MQ_NO_TAG)
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index e2d9daf7e8dd..6d81fe10e850 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -57,6 +57,7 @@ typedef __u32 __bitwise req_flags_t;
 #define RQF_TIMED_OUT		((__force req_flags_t)(1 << 21))
 /* queue has elevator attached */
 #define RQF_ELV			((__force req_flags_t)(1 << 22))
+#define RQF_RESV		((__force req_flags_t)(1 << 23))
 
 /* flags that prevent us from merging requests: */
 #define RQF_NOMERGE_FLAGS \
@@ -823,6 +824,11 @@ static inline bool blk_mq_need_time_stamp(struct request *rq)
 	return (rq->rq_flags & (RQF_IO_STAT | RQF_STATS | RQF_ELV));
 }
 
+static inline bool blk_mq_is_reserved_rq(struct request *rq)
+{
+	return rq->rq_flags & RQF_RESV;
+}
+
 /*
  * Batched completions only work when there is no I/O error and no special
  * ->end_io handler.
--
2.26.2