Message-Id: <20220312044315.7994-1-michael@allwinnertech.com>
Date: Sat, 12 Mar 2022 12:43:13 +0800
From: Michael Wu <michael@...winnertech.com>
To: ulf.hansson@...aro.org (maintainer:MULTIMEDIA CARD (MMC), SECURE
DIGITAL (SD) AND...,commit_signer:11/9=100%,authored:4/9=44%
,added_lines:26/61=43%,removed_lines:25/35=71%),
adrian.hunter@...el.com (commit_signer:3/9=33%,authored:4/9=44%
,added_lines:26/61=43%,removed_lines:25/35=71%),
avri.altman@....com (commit_signer:2/9=22%,authored:4/9=44%
,authored:2/9=22%,added_lines:26/61=43%,added_lines:16/61=26%
,removed_lines:25/35=71%),
beanhuo@...ron.com (commit_signer:1/9=11%,authored:4/9=44%
,authored:1/9=11%,added_lines:26/61=43%,removed_lines:25/35=71%),
porzio@...il.com (commit_signer:1/9=11%,authored:4/9=44%
,authored:1/9=11%,added_lines:26/61=43%,added_lines:4/61=7%
,removed_lines:25/35=71%,removed_lines:3/35=9%),
michael@...winnertech.com (authored:1/9=11%,added_lines:26/61=43%
,added_lines:14/61=23%,removed_lines:25/35=71%,removed_lines:6/35=17%)
Cc: Michael Wu <michael@...winnertech.com>,
Ulf Hansson <ulf.hansson@...aro.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Avri Altman <avri.altman@....com>,
Luca Porzio <porzio@...il.com>,
lixiang <lixiang@...winnertech.com>,
Bean Huo <beanhuo@...ron.com>,
linux-mmc@...r.kernel.org (open list:MULTIMEDIA CARD (MMC), SECURE
DIGITAL (SD) AND...), linux-kernel@...r.kernel.org (open list)
Subject: [PATCH] mmc: block: enable cache-flushing when mmc cache is on

The mmc core enables the eMMC cache by default, but it only enables
cache-flushing when the host supports CMD23 and the eMMC supports
reliable write.

For hosts which do not support CMD23, or eMMCs which do not support
reliable write, the cache cannot be flushed by the `sync` command, so
cached data may be lost.

This patch enables cache-flushing whenever the cache is enabled,
regardless of whether the host supports CMD23 or the eMMC supports
reliable write.
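
As an aside for readers, below is a minimal user-space sketch of the policy
change described above. The `struct caps` fields and the `old_policy`/
`new_policy` helpers are hypothetical illustrations, not kernel code; they
only mirror the conditions visible in the diff further down.

	/*
	 * Standalone sketch (not kernel code) of how the patch separates the
	 * FUA decision from the cache-flush decision.  All names here are
	 * hypothetical and exist only to illustrate the condition change.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	struct caps {
		bool is_mmc;    /* mmc_card_mmc(card)                    */
		bool cmd23;     /* md->flags & MMC_BLK_CMD23             */
		bool rel_wr;    /* eMMC reliable write supported         */
		bool cache_on;  /* mmc_cache_enabled(card->host)         */
	};

	/* Old behaviour: flush and FUA are tied behind one condition. */
	static void old_policy(const struct caps *c, bool *wc, bool *fua)
	{
		*wc = *fua = c->is_mmc && c->cmd23 && c->rel_wr;
	}

	/*
	 * New behaviour: FUA still needs CMD23 plus reliable write, but
	 * cache flushing only needs the device cache to be enabled.
	 */
	static void new_policy(const struct caps *c, bool *wc, bool *fua)
	{
		*fua = c->is_mmc && c->cmd23 && c->rel_wr;
		*wc  = c->is_mmc && c->cache_on;
	}

	int main(void)
	{
		/* Host without CMD23, eMMC cache enabled: the case fixed here. */
		struct caps c = { .is_mmc = true, .cmd23 = false,
				  .rel_wr = false, .cache_on = true };
		bool wc, fua;

		old_policy(&c, &wc, &fua);
		printf("old: flush=%d fua=%d\n", wc, fua); /* old: flush=0 fua=0 */

		new_policy(&c, &wc, &fua);
		printf("new: flush=%d fua=%d\n", wc, fua); /* new: flush=1 fua=0 */
		return 0;
	}

For such a host, the old policy never advertises a volatile write cache to
the block layer (so `sync` never issues a flush), while the new policy
advertises flush support without FUA.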
Signed-off-by: Michael Wu <michael@...winnertech.com>
---
drivers/mmc/core/block.c | 20 ++++++++++++++------
1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 689eb9afeeed..1e508c079c1e 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2279,6 +2279,8 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 	struct mmc_blk_data *md;
 	int devidx, ret;
 	char cap_str[10];
+	bool enable_cache = false;
+	bool enable_fua = false;
 
 	devidx = ida_simple_get(&mmc_blk_ida, 0, max_devices, GFP_KERNEL);
 	if (devidx < 0) {
@@ -2375,12 +2377,18 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 		md->flags |= MMC_BLK_CMD23;
 	}
 
-	if (mmc_card_mmc(card) &&
-	    md->flags & MMC_BLK_CMD23 &&
-	    ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
-	     card->ext_csd.rel_sectors)) {
-		md->flags |= MMC_BLK_REL_WR;
-		blk_queue_write_cache(md->queue.queue, true, true);
+	if (mmc_card_mmc(card)) {
+		if (md->flags & MMC_BLK_CMD23 &&
+		    ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
+		     card->ext_csd.rel_sectors)) {
+			md->flags |= MMC_BLK_REL_WR;
+			enable_fua = true;
+		}
+
+		if (mmc_cache_enabled(card->host))
+			enable_cache = true;
+
+		blk_queue_write_cache(md->queue.queue, enable_cache, enable_fua);
 	}
 
 	string_get_size((u64)size, 512, STRING_UNITS_2,
--
2.29.0