Message-ID: <20241022104119.3149051-6-alexander.usyskin@intel.com>
Date: Tue, 22 Oct 2024 13:41:14 +0300
From: Alexander Usyskin <alexander.usyskin@...el.com>
To: Miquel Raynal <miquel.raynal@...tlin.com>,
Richard Weinberger <richard@....at>,
Vignesh Raghavendra <vigneshr@...com>,
Lucas De Marchi <lucas.demarchi@...el.com>,
Thomas Hellström <thomas.hellstrom@...ux.intel.com>,
Rodrigo Vivi <rodrigo.vivi@...el.com>,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>,
David Airlie <airlied@...il.com>,
Simona Vetter <simona@...ll.ch>,
Jani Nikula <jani.nikula@...ux.intel.com>,
Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
Tvrtko Ursulin <tursulin@...ulin.net>
Cc: Oren Weil <oren.jer.weil@...el.com>,
linux-mtd@...ts.infradead.org,
dri-devel@...ts.freedesktop.org,
intel-gfx@...ts.freedesktop.org,
linux-kernel@...r.kernel.org,
Alexander Usyskin <alexander.usyskin@...el.com>
Subject: [PATCH 05/10] mtd: intel-dg: align 64bit read and write

The GSC NVM controller HW errors out on a quad (64-bit) access that
overlaps a 1K boundary.  Align the 64-bit reads and writes so that no
readq/writeq crosses a 1K boundary.

Signed-off-by: Alexander Usyskin <alexander.usyskin@...el.com>
---
drivers/mtd/devices/mtd-intel-dg.c | 35 ++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
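
A quick standalone sketch (userspace, not part of the patch) of the
1K-crossing test used in both hunks; GENMASK here is a local stand-in
for the kernel macro, limited to 32-bit values.  The idea:
(addr ^ (addr + len)) has a bit set at or above bit 10 exactly when
addr and addr + len fall in different 1 KiB blocks, which is what the
new checks look for before peeling off a leading u32.

#include <stdint.h>
#include <stdio.h>

/* Local stand-in for the kernel's GENMASK(h, l), 32-bit values only. */
#define GENMASK(h, l)  ((~0u << (l)) & (~0u >> (31 - (h))))

/* Non-zero when addr and addr + len land in different 1 KiB blocks. */
static int spans_1k(uint32_t addr, uint32_t len)
{
	return ((addr ^ (addr + len)) & GENMASK(31, 10)) != 0;
}

int main(void)
{
	/* 8 bytes at 0x3fc reach 0x404: the access crosses the 1K line. */
	printf("0x3fc + 8 -> %d\n", spans_1k(0x3fc, 8));	/* 1 */
	/* 8 bytes at 0x3f0 reach 0x3f8: same 1 KiB block, no crossing. */
	printf("0x3f0 + 8 -> %d\n", spans_1k(0x3f0, 8));	/* 0 */
	return 0;
}

Once that leading u32 is peeled off, 'to'/'from' is 8-byte aligned (the
preceding to_shift/from_shift block already aligned it to u32), and
since 1 KiB is a multiple of 8 no later readq/writeq can straddle a
boundary.
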
diff --git a/drivers/mtd/devices/mtd-intel-dg.c b/drivers/mtd/devices/mtd-intel-dg.c
index 2efcb4d64ebf..4951438e8faf 100644
--- a/drivers/mtd/devices/mtd-intel-dg.c
+++ b/drivers/mtd/devices/mtd-intel-dg.c
@@ -239,6 +239,24 @@ static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region,
 		len_s -= to_shift;
 	}
 
+	if (!IS_ALIGNED(to, sizeof(u64)) &&
+	    ((to ^ (to + len_s)) & GENMASK(31, 10))) {
+		/*
+		 * Work around reads/writes across 1k-aligned addresses
+		 * (start u32 before 1k, end u32 after)
+		 * as this fails on hardware.
+		 */
+		u32 data;
+
+		memcpy(&data, &buf[0], sizeof(u32));
+		idg_nvm_write32(nvm, to, data);
+		if (idg_nvm_error(nvm))
+			return -EIO;
+		buf += sizeof(u32);
+		to += sizeof(u32);
+		len_s -= sizeof(u32);
+	}
+
 	len8 = ALIGN_DOWN(len_s, sizeof(u64));
 	for (i = 0; i < len8; i += sizeof(u64)) {
 		u64 data;
@@ -297,6 +315,23 @@ static ssize_t idg_read(struct intel_dg_nvm *nvm, u8 region,
 		from += from_shift;
 	}
 
+	if (!IS_ALIGNED(from, sizeof(u64)) &&
+	    ((from ^ (from + len_s)) & GENMASK(31, 10))) {
+		/*
+		 * Work around reads/writes across 1k-aligned addresses
+		 * (start u32 before 1k, end u32 after)
+		 * as this fails on hardware.
+		 */
+		u32 data = idg_nvm_read32(nvm, from);
+
+		if (idg_nvm_error(nvm))
+			return -EIO;
+		memcpy(&buf[0], &data, sizeof(data));
+		len_s -= sizeof(u32);
+		buf += sizeof(u32);
+		from += sizeof(u32);
+	}
+
 	len8 = ALIGN_DOWN(len_s, sizeof(u64));
 	for (i = 0; i < len8; i += sizeof(u64)) {
 		u64 data = idg_nvm_read64(nvm, from + i);
--
2.43.0