Message-ID: <CAPDyKFqAdXcVO9=26pTbQyzYprax9-_i0T1XMeXTWAYOMAaovw@mail.gmail.com>
Date: Mon, 19 Apr 2021 14:39:08 +0200
From: Ulf Hansson <ulf.hansson@...aro.org>
To: Avri Altman <avri.altman@....com>
Cc: linux-mmc <linux-mmc@...r.kernel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Brendan Peter <bpeter@...x.com>
Subject: Re: [PATCH v2 1/2] mmc: block: Issue flush only if allowed
On Sun, 18 Apr 2021 at 08:00, Avri Altman <avri.altman@....com> wrote:
>
> The cache may be flushed to the nonvolatile storage by writing to the
> FLUSH_CACHE byte (EXT_CSD byte [32]). When in command queueing mode,
> the cache may be flushed by issuing a CMDQ_TASK_DEV_MGMT (CMD48) with
> a FLUSH_CACHE op-code. Either way, verify that the cache function is
> turned ON before doing so.
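
For context, the non-CQE path described above amounts to roughly the
following. This is only a sketch modelled on mmc_flush_cache() in
mmc_ops.c (MMC_CACHE_FLUSH_TIMEOUT_MS is the local timeout define
there); the function name is made up for illustration:

/*
 * Sketch: write 1 to the FLUSH_CACHE byte (EXT_CSD byte [32]) via
 * CMD6 (MMC_SWITCH), but only when the cache exists and has been
 * turned ON through CACHE_CTRL.
 */
static int emmc_flush_cache_sketch(struct mmc_card *card)
{
        /* Cache present and enabled (CACHE_CTRL bit 0)? */
        if (!(card->ext_csd.cache_size > 0 &&
              (card->ext_csd.cache_ctrl & 1)))
                return 0; /* nothing to flush; not an error */

        return mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
                          EXT_CSD_FLUSH_CACHE, 1,
                          MMC_CACHE_FLUSH_TIMEOUT_MS);
}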
Avri, thanks for your patch. Overall this looks good to me.
However, things are becoming more and more messy in these layers of
the mmc core. In particular, I would like us to take advantage of the
bus_ops callbacks when eMMC and/or SD specific features need
different implementations.
I have posted a patch [1] that moves the eMMC cache flushing into a
bus_ops callback. Would you mind rebasing this series on top of that?
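
To illustrate the direction, a minimal sketch of what I have in mind.
The ->flush_cache name follows the patch in [1]; the other details
here are assumptions, not the final code:

/* drivers/mmc/core/core.h: each bus type gets its own flush hook. */
struct mmc_bus_ops {
        /* ... existing callbacks (remove, detect, suspend, ...) ... */
        int (*flush_cache)(struct mmc_host *host);
};

/* eMMC-specific implementation, registered in mmc.c's bus_ops table. */
static int _mmc_flush_cache(struct mmc_host *host)
{
        struct mmc_card *card = host->card;

        /* Flush only when a cache is present and turned ON. */
        if (card->ext_csd.cache_size > 0 && (card->ext_csd.cache_ctrl & 1))
                return mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
                                  EXT_CSD_FLUSH_CACHE, 1,
                                  MMC_CACHE_FLUSH_TIMEOUT_MS);

        return 0;
}

The core would then simply call host->bus_ops->flush_cache() when it
is set, and eMMC-specific checks would no longer leak into the block
layer.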
Kind regards
Uffe
[1]
https://patchwork.kernel.org/project/linux-mmc/patch/20210419122943.68234-1-ulf.hansson@linaro.org/
>
> Fixes: 1e8e55b67030 ("mmc: block: Add CQE support")
>
> Reported-by: Brendan Peter <bpeter@...x.com>
> Tested-by: Brendan Peter <bpeter@...x.com>
> Signed-off-by: Avri Altman <avri.altman@....com>
> ---
>  drivers/mmc/core/block.c   | 7 +++++++
>  drivers/mmc/core/mmc_ops.c | 4 +---
>  drivers/mmc/core/mmc_ops.h | 5 +++++
>  3 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> index fe5892d30778..6800feb70c92 100644
> --- a/drivers/mmc/core/block.c
> +++ b/drivers/mmc/core/block.c
> @@ -1476,6 +1476,11 @@ static int mmc_blk_cqe_issue_flush(struct mmc_queue *mq, struct request *req)
>          struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req);
>          struct mmc_request *mrq = mmc_blk_cqe_prep_dcmd(mqrq, req);
>
> +        if (mmc_card_mmc(mq->card) && !mmc_flush_allowed(mq->card)) {
> +                blk_mq_end_request(req, BLK_STS_OK);
> +                return -EPERM;
> +        }
> +
>          mrq->cmd->opcode = MMC_SWITCH;
>          mrq->cmd->arg = (MMC_SWITCH_MODE_WRITE_BYTE << 24) |
>                          (EXT_CSD_FLUSH_CACHE << 16) |
> @@ -2226,6 +2231,8 @@ enum mmc_issued mmc_blk_mq_issue_rq(struct mmc_queue *mq, struct request *req)
>                  switch (req_op(req)) {
>                  case REQ_OP_FLUSH:
>                          ret = mmc_blk_cqe_issue_flush(mq, req);
> +                        if (ret == -EPERM)
> +                                return MMC_REQ_FINISHED;
>                          break;
>                  case REQ_OP_READ:
>                  case REQ_OP_WRITE:
> diff --git a/drivers/mmc/core/mmc_ops.c b/drivers/mmc/core/mmc_ops.c
> index f413474f0f80..9c2a665be034 100644
> --- a/drivers/mmc/core/mmc_ops.c
> +++ b/drivers/mmc/core/mmc_ops.c
> @@ -967,9 +967,7 @@ int mmc_flush_cache(struct mmc_card *card)
>  {
>          int err = 0;
>
> -        if (mmc_card_mmc(card) &&
> -            (card->ext_csd.cache_size > 0) &&
> -            (card->ext_csd.cache_ctrl & 1)) {
> +        if (mmc_card_mmc(card) && mmc_flush_allowed(card)) {
>                  err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
>                                   EXT_CSD_FLUSH_CACHE, 1,
>                                   MMC_CACHE_FLUSH_TIMEOUT_MS);
> diff --git a/drivers/mmc/core/mmc_ops.h b/drivers/mmc/core/mmc_ops.h
> index 632009260e51..bf2b315addd7 100644
> --- a/drivers/mmc/core/mmc_ops.h
> +++ b/drivers/mmc/core/mmc_ops.h
> @@ -19,6 +19,11 @@ enum mmc_busy_cmd {
>  struct mmc_host;
>  struct mmc_card;
>
> +static inline bool mmc_flush_allowed(struct mmc_card *card)
> +{
> +        return card->ext_csd.cache_size > 0 && (card->ext_csd.cache_ctrl & 1);
> +}
> +
>  int mmc_select_card(struct mmc_card *card);
>  int mmc_deselect_cards(struct mmc_host *host);
>  int mmc_set_dsr(struct mmc_host *host);
> --
> 2.25.1
>