Message-ID: <cebfe83b-138a-4bca-c37a-bcb5b25f580d@intel.com>
Date: Fri, 18 Nov 2022 13:34:25 +0200
From: Adrian Hunter <adrian.hunter@...el.com>
To: Christian Löhle <CLoehle@...erstone.com>,
"axboe@...nel.dk" <axboe@...nel.dk>,
"ulf.hansson@...aro.org" <ulf.hansson@...aro.org>,
"linux-mmc@...r.kernel.org" <linux-mmc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>
Cc: Avri Altman <Avri.Altman@....com>,
"vincent.whitchurch@...s.com" <vincent.whitchurch@...s.com>
Subject: Re: [PATCH 3/3] mmc: block: Requeue on block size restrictions

On 26/10/22 10:30, Christian Löhle wrote:
> The block layer does not conform to all our sector count restrictions, so
> requeue in case we had to modify the number of blocks sent instead of
> going through the normal completion.
>
> Note that the normal completion used before does not lead to a bug;
> this change is just the nicer thing to do.

Can you elaborate on why it is "nicer"?

> An example of such a restriction is max_blk_count = 1 and 512 blksz,
> but the block layer continues to use requests of size PAGE_SIZE.
>
> Signed-off-by: Christian Loehle <cloehle@...erstone.com>
> ---
> drivers/mmc/core/block.c | 12 +++++++++---
> 1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> index 54cd009aee50..c434d3964880 100644
> --- a/drivers/mmc/core/block.c
> +++ b/drivers/mmc/core/block.c
> @@ -1519,8 +1519,10 @@ static void mmc_blk_cqe_req_done(struct mmc_request *mrq)
> /*
> * Block layer timeouts race with completions which means the normal
> * completion path cannot be used during recovery.
> + * Also do not use it if we had to modify the block count to satisfy
> + * host controller needs.
> */
> - if (mq->in_recovery)
> + if (mq->in_recovery || mrq->data->blocks != blk_rq_sectors(req))
> mmc_blk_cqe_complete_rq(mq, req);
> else if (likely(!blk_should_fake_timeout(req->q)))
> blk_mq_complete_request(req);
> @@ -2051,8 +2053,10 @@ static void mmc_blk_hsq_req_done(struct mmc_request *mrq)
> /*
> * Block layer timeouts race with completions which means the normal
> * completion path cannot be used during recovery.
> + * Also do not use it if we had to modify the block count to satisfy
> + * host controller needs.
> */
> - if (mq->in_recovery)
> + if (mq->in_recovery || mrq->data->blocks != blk_rq_sectors(req))
> mmc_blk_cqe_complete_rq(mq, req);
> else if (likely(!blk_should_fake_timeout(req->q)))
> blk_mq_complete_request(req);
> @@ -2115,8 +2119,10 @@ static void mmc_blk_mq_post_req(struct mmc_queue *mq, struct request *req,
> /*
> * Block layer timeouts race with completions which means the normal
> * completion path cannot be used during recovery.
> + * Also do not use it if we had to modify the block count to satisfy
> + * host controller needs.
> */
> - if (mq->in_recovery) {
> + if (mq->in_recovery || mrq->data->blocks != blk_rq_sectors(req)) {
> mmc_blk_mq_complete_rq(mq, req);
> } else if (likely(!blk_should_fake_timeout(req->q))) {
> if (can_sleep)
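
For anyone reading along, the mismatch the new condition keys off comes
from the clamping done when the request is prepared. Below is a minimal
sketch of that logic, paraphrased from mmc_blk_data_prep(); the helper
name is just for illustration and not a function in the tree:

/*
 * Simplified sketch (not verbatim kernel code): how the block count
 * handed to the host can end up smaller than blk_rq_sectors(req).
 */
static void sketch_clamp_blocks(struct mmc_blk_request *brq,
                                struct mmc_card *card,
                                struct request *req)
{
        brq->data.blksz = 512;
        brq->data.blocks = blk_rq_sectors(req);

        /*
         * The block layer does not honour every sector-count
         * restriction, so clamp to what the host can take in a single
         * transfer.  With max_blk_count == 1 a PAGE_SIZE request is cut
         * down to one 512-byte block, which is what makes data->blocks
         * differ from blk_rq_sectors(req) in the completion paths above.
         */
        if (brq->data.blocks > card->host->max_blk_count)
                brq->data.blocks = card->host->max_blk_count;
}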