Message-Id: <20251205-winbond-v6-18-rc1-cont-read-v1-1-01bc48631c73@bootlin.com>
Date: Fri, 05 Dec 2025 20:38:52 +0100
From: Miquel Raynal <miquel.raynal@...tlin.com>
To: Mark Brown <broonie@...nel.org>, Richard Weinberger <richard@....at>,
Vignesh Raghavendra <vigneshr@...com>, Michael Walle <mwalle@...nel.org>
Cc: Tudor Ambarus <tudor.ambarus@...aro.org>,
Pratyush Yadav <pratyush@...nel.org>,
Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
Steam Lin <STLin2@...bond.com>, Santhosh Kumar K <s-k6@...com>,
linux-spi@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mtd@...ts.infradead.org, Miquel Raynal <miquel.raynal@...tlin.com>
Subject: [PATCH RFC 1/8] mtd: spinand: Drop a too strong limitation

Since continuous reads may sometimes not be able to cross an erase
block boundary, it has been decided not to attempt longer reads: if the
user request is bigger, it is split across erase blocks.

As these requests will be handled correctly anyway, there is no reason
to filter out cases where we would cross a target or a die boundary, so
drop this limitation. It had a side effect: any request to read more
than the content of a single erase block would simply not benefit from
the continuous read feature.

Signed-off-by: Miquel Raynal <miquel.raynal@...tlin.com>
---
drivers/mtd/nand/spi/core.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)
diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
index 49ee03a7252b..f19150740979 100644
--- a/drivers/mtd/nand/spi/core.c
+++ b/drivers/mtd/nand/spi/core.c
@@ -788,6 +788,12 @@ static int spinand_mtd_continuous_page_read(struct mtd_info *mtd, loff_t from,
 	 * Each data read must be a multiple of 4-bytes and full pages should be read;
 	 * otherwise, the data output might get out of sequence from one read command
 	 * to another.
+	 *
+	 * Continuous reads never cross LUN boundaries. Some devices don't
+	 * support crossing planes boundaries. Some devices don't even support
+	 * crossing blocks boundaries. The common case being to read through UBI,
+	 * we will very rarely read two consequent blocks or more, so let's only enable
+	 * continuous reads when reading within the same erase block.
 	 */
 	nanddev_io_for_each_block(nand, NAND_PAGE_READ, from, ops, &iter) {
 		ret = spinand_select_target(spinand, iter.req.pos.target);
@@ -870,19 +876,6 @@ static bool spinand_use_cont_read(struct mtd_info *mtd, loff_t from,
 	nanddev_offs_to_pos(nand, from, &start_pos);
 	nanddev_offs_to_pos(nand, from + ops->len - 1, &end_pos);
 
-	/*
-	 * Continuous reads never cross LUN boundaries. Some devices don't
-	 * support crossing planes boundaries. Some devices don't even support
-	 * crossing blocks boundaries. The common case being to read through UBI,
-	 * we will very rarely read two consequent blocks or more, so it is safer
-	 * and easier (can be improved) to only enable continuous reads when
-	 * reading within the same erase block.
-	 */
-	if (start_pos.target != end_pos.target ||
-	    start_pos.plane != end_pos.plane ||
-	    start_pos.eraseblock != end_pos.eraseblock)
-		return false;
-
 	return start_pos.page < end_pos.page;
 }
 
--
2.51.1