Message-ID: <20241119140112.790720-6-alexander.usyskin@intel.com>
Date: Tue, 19 Nov 2024 16:01:07 +0200
From: Alexander Usyskin <alexander.usyskin@...el.com>
To: Miquel Raynal <miquel.raynal@...tlin.com>,
Richard Weinberger <richard@....at>,
Vignesh Raghavendra <vigneshr@...com>,
Lucas De Marchi <lucas.demarchi@...el.com>,
Thomas Hellström <thomas.hellstrom@...ux.intel.com>,
Rodrigo Vivi <rodrigo.vivi@...el.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>,
Simona Vetter <simona@...ll.ch>,
Jani Nikula <jani.nikula@...ux.intel.com>,
Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
Tvrtko Ursulin <tursulin@...ulin.net>
Cc: Oren Weil <oren.jer.weil@...el.com>,
linux-mtd@...ts.infradead.org,
dri-devel@...ts.freedesktop.org,
intel-gfx@...ts.freedesktop.org,
linux-kernel@...r.kernel.org,
Alexander Usyskin <alexander.usyskin@...el.com>
Subject: [PATCH v3 05/10] mtd: intel-dg: align 64bit read and write
The GSC NVM controller hardware errors out on quad (64-bit) accesses that overlap a 1K boundary.
Align 64-bit reads and writes so that readq/writeq never straddle a 1K boundary.
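
The boundary check used in both hunks is (off ^ (off + len_s)) & GENMASK(31, 10):
if any address bit above bit 9 differs between the start and the end of the
remaining span, the span crosses a 1 KiB boundary. Since the offset is already
u32-aligned at that point, consuming one u32 first makes it u64-aligned, and an
aligned readq/writeq can then never straddle a 1 KiB boundary (1024 is a
multiple of 8). Below is a minimal standalone sketch of that check; it is an
illustration only, not driver code, and the helper names are hypothetical:

/* Standalone illustration, not driver code: detect whether a span
 * crosses a 1 KiB boundary the same way the patch does, and show how
 * consuming one u32 first keeps the following u64 accesses inside a
 * single 1 KiB chunk.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CHUNK_MASK 0xFFFFFC00u	/* bits 31..10, equivalent to GENMASK(31, 10) */

static bool crosses_1k(uint32_t start, uint32_t len)
{
	/* Any differing bit above bit 9 means start and end sit in
	 * different 1 KiB chunks. */
	return ((start ^ (start + len)) & CHUNK_MASK) != 0;
}

int main(void)
{
	uint32_t off = 0x3FC;	/* u32-aligned, 4 bytes below the 0x400 boundary */
	uint32_t len = 16;

	printf("span crosses 1K: %d\n", crosses_1k(off, len));	/* 1 */

	if ((off % 8) && crosses_1k(off, len)) {
		off += 4;	/* consume one u32 first, as the patch does */
		len -= 4;
	}
	/* off is now 0x400: u64-aligned, so no 8-byte access crosses 1 KiB */
	printf("first u64 crosses 1K: %d\n", crosses_1k(off, 8));	/* 0 */
	return 0;
}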
Acked-by: Miquel Raynal <miquel.raynal@...tlin.com>
Signed-off-by: Alexander Usyskin <alexander.usyskin@...el.com>
---
drivers/mtd/devices/mtd-intel-dg.c | 35 ++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
diff --git a/drivers/mtd/devices/mtd-intel-dg.c b/drivers/mtd/devices/mtd-intel-dg.c
index 76ef7198fff8..230bf444b7fe 100644
--- a/drivers/mtd/devices/mtd-intel-dg.c
+++ b/drivers/mtd/devices/mtd-intel-dg.c
@@ -238,6 +238,24 @@ static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region,
len_s -= to_shift;
}
+ if (!IS_ALIGNED(to, sizeof(u64)) &&
+ ((to ^ (to + len_s)) & GENMASK(31, 10))) {
+ /*
+ * Workaround reads/writes across 1k-aligned addresses
+ * (start u32 before 1k, end u32 after)
+ * as this fails on hardware.
+ */
+ u32 data;
+
+ memcpy(&data, &buf[0], sizeof(u32));
+ idg_nvm_write32(nvm, to, data);
+ if (idg_nvm_error(nvm))
+ return -EIO;
+ buf += sizeof(u32);
+ to += sizeof(u32);
+ len_s -= sizeof(u32);
+ }
+
len8 = ALIGN_DOWN(len_s, sizeof(u64));
for (i = 0; i < len8; i += sizeof(u64)) {
u64 data;
@@ -295,6 +313,23 @@ static ssize_t idg_read(struct intel_dg_nvm *nvm, u8 region,
from += from_shift;
}
+ if (!IS_ALIGNED(from, sizeof(u64)) &&
+ ((from ^ (from + len_s)) & GENMASK(31, 10))) {
+ /*
+ * Workaround reads/writes across 1k-aligned addresses
+ * (start u32 before 1k, end u32 after)
+ * as this fails on hardware.
+ */
+ u32 data = idg_nvm_read32(nvm, from);
+
+ if (idg_nvm_error(nvm))
+ return -EIO;
+ memcpy(&buf[0], &data, sizeof(data));
+ len_s -= sizeof(u32);
+ buf += sizeof(u32);
+ from += sizeof(u32);
+ }
+
len8 = ALIGN_DOWN(len_s, sizeof(u64));
for (i = 0; i < len8; i += sizeof(u64)) {
u64 data = idg_nvm_read64(nvm, from + i);
--
2.43.0