Message-Id: <20210111130051.376062186@linuxfoundation.org>
Date: Mon, 11 Jan 2021 14:01:23 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Alan Stern <stern@...land.harvard.edu>,
Stanley Chu <stanley.chu@...iatek.com>,
Ming Lei <ming.lei@...hat.com>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Can Guo <cang@...eaurora.org>, Christoph Hellwig <hch@....de>,
Hannes Reinecke <hare@...e.de>, Jens Axboe <axboe@...nel.dk>,
Bart Van Assche <bvanassche@....org>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.10 059/145] scsi: block: Introduce BLK_MQ_REQ_PM

From: Bart Van Assche <bvanassche@....org>

[ Upstream commit 0854bcdcdec26aecdc92c303816f349ee1fba2bc ]

Introduce the BLK_MQ_REQ_PM flag. This flag makes the request allocation
functions set RQF_PM. This is the first step towards removing
BLK_MQ_REQ_PREEMPT.
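
For context, a usage sketch (illustration only, not part of this patch):
a driver that must send a command while its device is suspending or
resuming would pass BLK_MQ_REQ_PM when allocating the request, so the
request carries RQF_PM. The helper name and the REQ_OP_DRV_OUT opcode
below are placeholders:

#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include <linux/err.h>

/* Hypothetical helper: issue a power-management command on queue q. */
static int example_send_pm_cmd(struct request_queue *q)
{
	struct request *rq;

	/*
	 * BLK_MQ_REQ_PM makes blk_mq_rq_ctx_init() set RQF_PM on the
	 * request, so blk_queue_enter() admits it even when the queue
	 * is in PM-only mode.
	 */
	rq = blk_get_request(q, REQ_OP_DRV_OUT, BLK_MQ_REQ_PM);
	if (IS_ERR(rq))
		return PTR_ERR(rq);

	/* ... fill in the driver-specific payload and execute rq ... */

	blk_put_request(rq);
	return 0;
}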
Link: https://lore.kernel.org/r/20201209052951.16136-3-bvanassche@acm.org
Cc: Alan Stern <stern@...land.harvard.edu>
Cc: Stanley Chu <stanley.chu@...iatek.com>
Cc: Ming Lei <ming.lei@...hat.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@...el.com>
Cc: Can Guo <cang@...eaurora.org>
Reviewed-by: Christoph Hellwig <hch@....de>
Reviewed-by: Hannes Reinecke <hare@...e.de>
Reviewed-by: Jens Axboe <axboe@...nel.dk>
Reviewed-by: Can Guo <cang@...eaurora.org>
Signed-off-by: Bart Van Assche <bvanassche@....org>
Signed-off-by: Martin K. Petersen <martin.petersen@...cle.com>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
 block/blk-core.c       | 7 ++++---
 block/blk-mq.c         | 2 ++
 include/linux/blk-mq.h | 2 ++
 3 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 2db8bda43b6e6..10696f9fb6ac6 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -424,11 +424,11 @@ EXPORT_SYMBOL(blk_cleanup_queue);
 /**
  * blk_queue_enter() - try to increase q->q_usage_counter
  * @q: request queue pointer
- * @flags: BLK_MQ_REQ_NOWAIT and/or BLK_MQ_REQ_PREEMPT
+ * @flags: BLK_MQ_REQ_NOWAIT, BLK_MQ_REQ_PM and/or BLK_MQ_REQ_PREEMPT
  */
 int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
 {
-	const bool pm = flags & BLK_MQ_REQ_PREEMPT;
+	const bool pm = flags & (BLK_MQ_REQ_PM | BLK_MQ_REQ_PREEMPT);
 
 	while (true) {
 		bool success = false;
@@ -630,7 +630,8 @@ struct request *blk_get_request(struct request_queue *q, unsigned int op,
 	struct request *req;
 
 	WARN_ON_ONCE(op & REQ_NOWAIT);
-	WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PREEMPT));
+	WARN_ON_ONCE(flags & ~(BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_PM |
+			       BLK_MQ_REQ_PREEMPT));
 
 	req = blk_mq_alloc_request(q, op, flags);
 	if (!IS_ERR(req) && q->mq_ops->initialize_rq_fn)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 55bcee5dc0320..0072ffa50b46e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -292,6 +292,8 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	rq->mq_hctx = data->hctx;
 	rq->rq_flags = 0;
 	rq->cmd_flags = data->cmd_flags;
+	if (data->flags & BLK_MQ_REQ_PM)
+		rq->rq_flags |= RQF_PM;
 	if (data->flags & BLK_MQ_REQ_PREEMPT)
 		rq->rq_flags |= RQF_PREEMPT;
 	if (blk_queue_io_stat(data->q))
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 794b2a33a2c36..c9ecfd8b03381 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -446,6 +446,8 @@ enum {
 	BLK_MQ_REQ_NOWAIT	= (__force blk_mq_req_flags_t)(1 << 0),
 	/* allocate from reserved pool */
 	BLK_MQ_REQ_RESERVED	= (__force blk_mq_req_flags_t)(1 << 1),
+	/* set RQF_PM */
+	BLK_MQ_REQ_PM		= (__force blk_mq_req_flags_t)(1 << 2),
 	/* set RQF_PREEMPT */
 	BLK_MQ_REQ_PREEMPT	= (__force blk_mq_req_flags_t)(1 << 3),
 };
--
2.27.0