Message-Id: <20250902080117.3658372-1-wangzhaolong@huaweicloud.com>
Date: Tue,  2 Sep 2025 16:01:17 +0800
From: Wang Zhaolong <wangzhaolong@...weicloud.com>
To: miquel.raynal@...tlin.com,
	richard@....at,
	vigneshr@...com
Cc: linux-mtd@...ts.infradead.org,
	linux-kernel@...r.kernel.org,
	chengzhihao1@...wei.com,
	yi.zhang@...wei.com,
	yangerkun@...wei.com
Subject: [PATCH] mtd: core: only increment ecc_stats.badblocks on confirmed good->bad transition

Repeatedly marking the same eraseblock bad inflates
mtd->ecc_stats.badblocks because mtd_block_markbad() unconditionally
increments the counter on success, while some implementations (e.g.
NAND) return 0 both when the block was already bad and when it has just
been marked[1].
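
For context, the raw NAND path short-circuits when the block is already
bad, so the caller cannot tell the two cases apart from the return
value. A simplified sketch approximating drivers/mtd/nand/raw/nand_base.c
(illustrative only, not the exact in-tree code):

	static int nand_block_markbad(struct mtd_info *mtd, loff_t ofs)
	{
		int ret;

		ret = nand_block_isbad(mtd, ofs);
		if (ret) {
			/* Already bad: report success without re-marking. */
			if (ret > 0)
				return 0;
			return ret;
		}

		/* Actually mark the block and update the BBT. */
		return nand_block_markbad_lowlevel(mtd_to_nand(mtd), ofs);
	}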

Fix this by probing the bad-block state before and after calling
->_block_markbad() (when ->_block_isbad is available) and only
incrementing the counter on a confirmed good->bad transition. If
->_block_isbad is not implemented, be conservative and do not increment.
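
The intended effect, as a hypothetical caller-side sketch (not part of
this patch; "mtd" and "ofs" are placeholders):

	/*
	 * Marking the same eraseblock bad twice must bump
	 * ecc_stats.badblocks at most once.
	 */
	u32 before = mtd->ecc_stats.badblocks;

	mtd_block_markbad(mtd, ofs);	/* good -> bad: counter goes up by 1 */
	mtd_block_markbad(mtd, ofs);	/* already bad: counter is unchanged */

	WARN_ON(mtd->ecc_stats.badblocks > before + 1);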

Keep the logic centralized in mtdcore; the markbad path is not a hot
path, so the extra I/O is acceptable.

Link: https://lore.kernel.org/all/ef573188-9815-4a6b-bad1-3d8ff7c9b16f@huaweicloud.com/ [1]
Signed-off-by: Wang Zhaolong <wangzhaolong@...weicloud.com>
---
 drivers/mtd/mtdcore.c | 28 ++++++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
index 5ba9a741f5ac..a6d38da651e9 100644
--- a/drivers/mtd/mtdcore.c
+++ b/drivers/mtd/mtdcore.c
@@ -2338,10 +2338,12 @@ EXPORT_SYMBOL_GPL(mtd_block_isbad);
 
 int mtd_block_markbad(struct mtd_info *mtd, loff_t ofs)
 {
 	struct mtd_info *master = mtd_get_master(mtd);
 	int ret;
+	loff_t moffs;
+	int oldbad = -1;
 
 	if (!master->_block_markbad)
 		return -EOPNOTSUPP;
 	if (ofs < 0 || ofs >= mtd->size)
 		return -EINVAL;
@@ -2349,17 +2351,35 @@ int mtd_block_markbad(struct mtd_info *mtd, loff_t ofs)
 		return -EROFS;
 
 	if (mtd->flags & MTD_SLC_ON_MLC_EMULATION)
 		ofs = (loff_t)mtd_div_by_eb(ofs, mtd) * master->erasesize;
 
-	ret = master->_block_markbad(master, mtd_get_master_ofs(mtd, ofs));
+	moffs = mtd_get_master_ofs(mtd, ofs);
+
+	/* Pre-check: remember current state if available. */
+	if (master->_block_isbad)
+		oldbad = master->_block_isbad(master, moffs);
+
+	ret = master->_block_markbad(master, moffs);
 	if (ret)
 		return ret;
 
-	while (mtd->parent) {
-		mtd->ecc_stats.badblocks++;
-		mtd = mtd->parent;
+	/*
+	 * Post-check and bump stats only on a confirmed good->bad transition.
+	 * If _block_isbad is not implemented, be conservative and do not bump.
+	 */
+	if (master->_block_isbad) {
+		/* If it was already bad, nothing to do. */
+		if (oldbad > 0)
+			return 0;
+
+		if (master->_block_isbad(master, moffs) > 0) {
+			while (mtd->parent) {
+				mtd->ecc_stats.badblocks++;
+				mtd = mtd->parent;
+			}
+		}
 	}
 
 	return 0;
 }
 EXPORT_SYMBOL_GPL(mtd_block_markbad);
-- 
2.39.2

