Date:   Wed, 23 Nov 2022 15:49:53 +0100
From:   Geert Uytterhoeven <geert@...ux-m68k.org>
To:     linux-renesas-soc@...r.kernel.org
Cc:     Vignesh Raghavendra <vigneshr@...com>,
        Sergey Shtylyov <s.shtylyov@....ru>,
        Krzysztof Kozlowski <krzysztof.kozlowski@...aro.org>,
        Wolfram Sang <wsa+renesas@...g-engineering.com>,
        Lad Prabhakar <prabhakar.mahadev-lad.rj@...renesas.com>,
        Miquel Raynal <miquel.raynal@...tlin.com>,
        Richard Weinberger <richard@....at>,
        Mark Brown <broonie@...nel.org>, linux-mtd@...ts.infradead.org,
        linux-spi@...r.kernel.org, linux-kernel@...r.kernel.org,
        Geert Uytterhoeven <geert+renesas@...der.be>
Subject: Re: [PATCH 7/7] memory: renesas-rpc-if: Reinitialize registers during
 system resume

On Mon, Jun 27, 2022 at 5:31 PM Geert Uytterhoeven
<geert+renesas@...der.be> wrote:
> During PSCI system suspend, R-Car Gen3 SoCs may be powered down, and
> thus the RPC-IF register state may be lost.  Consequently, when using
> the RPC-IF after system resume, data corruption may happen.
>
> Fix this by reinitializing the hardware state during system resume.
> As this requires resuming the RPC-IF core device, this can only be done
> when the device is under active control of the HyperBus or SPI child
> driver.
>
> Signed-off-by: Geert Uytterhoeven <geert+renesas@...der.be>

For v2, I dropped this patch from the series.

Apparently this patch is not needed: it has no impact on HyperFLASH
read operations on Salvator-XS with R-Car M3-N ES1.0 or on Ebisu-4D
with R-Car E3 ES1.0.

On Salvator-X with R-Car M3-W ES1.0, there is a different issue causing
random bit flips (which is not solved by the Strobe Timing Adjustment
bit (STRTIM) fix for R-Car M3-W ES1.x in the BSP).

On Salvator-XS with R-Car H3 ES2.0, corruption is seen after s2ram.
TL;DR: while this patch does have an impact on that, RPC operation after
s2ram is still not guaranteed, and the core issue is still not
understood.

---
For testing, I use the following command to read /dev/mtdblock1 (which
contains the BL2 bootloader) and check its checksum 100 times:

    time sha256sum -c <(yes $(cat mtdblock1.sha256sum) | head -100)
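(Presumably mtdblock1.sha256sum holds a single "<hash>  /dev/mtdblock1"
line, e.g. generated once with "sha256sum /dev/mtdblock1 >
mtdblock1.sha256sum", so each of the 100 checks re-reads the whole
device; that is an assumption about the setup, not stated above.  The
same mechanics can be sketched against a scratch file instead of the
real MTD device:)

```shell
# Sketch of the checksum loop against a scratch file instead of
# /dev/mtdblock1 (file name and contents are illustrative only).
tmp=$(mktemp)
printf 'pretend BL2 bootloader image' > "$tmp"

# One-time reference checksum in the "<hash>  <path>" format read by -c:
sha256sum "$tmp" > ref.sha256sum

# Repeat that line 100 times; sha256sum -c then re-reads the file once
# per input line, i.e. 100 reads in total:
yes "$(cat ref.sha256sum)" | head -100 | sha256sum -c - | tail -1

rm -f "$tmp" ref.sha256sum
```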

After boot and s2idle, the success rate is 100%.

1. Without this patch, the failure rate after s2ram is ca. 65%.

When splitting and comparing the data read back, some blocks of 64 KiB
(but not always the same on different runs) have been replaced by bad
data, containing either data from elsewhere, or all ones.  The latter is
probably the same symptom as the former, as the HyperFLASH does contain
blocks with all-ones data.
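(The splitting and comparing can be done e.g. as below; the file names
and the fabricated inputs are hypothetical, this is just one way to
locate the differing 64 KiB blocks, not necessarily the exact procedure
used:)

```shell
# Demo of locating differing 64 KiB blocks between two read-backs.
# good.bin / bad.bin stand in for the real dumps; here we fabricate
# a pair of 128 KiB images that differ in the second block.
dd if=/dev/zero of=good.bin bs=1024 count=128 2>/dev/null
cp good.bin bad.bin
printf '\377' | dd of=bad.bin bs=1 seek=$((64*1024+16)) conv=notrunc 2>/dev/null

# Split both into numbered 64 KiB chunks and report the ones that differ:
split -b 64K -d good.bin good.
split -b 64K -d bad.bin  bad.
for g in good.[0-9][0-9]; do
    cmp -s "$g" "bad.${g#good.}" || echo "block ${g#good.} differs"
done
```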

The data from elsewhere looks like e.g.:

    00046f10  75 75 69 64 5f 64 69 73  75 75 69 64 5f 64 69 73  |uuid_disuuid_dis|
    00046f20  64 72 27 0a 20 20 20 20  64 72 27 0a 20 20 20 20  |dr'.    dr'.    |
    00046f30  6c 69 67 6e 65 64 20 74  6c 69 67 6e 65 64 20 74  |ligned tligned t|
    00046f40  61 64 64 72 32 20 63 6f  61 64 64 72 32 20 63 6f  |addr2 coaddr2 co|
    00046f50  0a 6d 6d 63 20 72 70 6d  0a 6d 6d 63 20 72 70 6d  |.mmc rpm.mmc rpm|
    00046f60  20 6c 61 72 67 65 72 0a  20 6c 61 72 67 65 72 0a  | larger. larger.|
    00046f70  6d 32 00 63 6d 6d 31 00  6d 32 00 63 6d 6d 31 00  |m2.cmm1.m2.cmm1.|
    00046f80  32 5f 64 61 74 61 34 00  32 5f 64 61 74 61 34 00  |2_data4.2_data4.|
    00046f90  31 00 47 50 5f 35 5f 31  31 00 47 50 5f 35 5f 31  |1.GP_5_11.GP_5_1|
    00046fa0  65 78 63 65 65 64 65 64  65 78 63 65 65 64 65 64  |exceededexceeded|
    00046fb0  30 34 2d 72 63 34 2d 30  30 34 2d 72 63 34 2d 30  |04-rc4-004-rc4-0|

which seems to originate from two copies of the first 8 bytes of each of
the following lines, found elsewhere in the HyperFLASH:

    006f1000  75 75 69 64 5f 64 69 73  6b 3d 00 6e 61 6d 65 3d  |uuid_disk=.name=|
    ...
    006f2000  64 72 27 0a 20 20 20 20  20 20 70 61 73 73 69 6e  |dr'.      passin|
    ...
    006f3000  6c 69 67 6e 65 64 20 74  6f 0a 20 20 20 20 20 20  |ligned to.      |
    ...
    006f4000  61 64 64 72 32 20 63 6f  75 6e 74 00 6d 65 6d 6f  |addr2 count.memo|
    ...
    006f5000  0a 6d 6d 63 20 72 70 6d  62 20 6b 65 79 20 3c 61  |.mmc rpmb key <a|
    ...
    006f6000  20 6c 61 72 67 65 72 0a  00 75 6e 7a 69 70 00 75  | larger..unzip.u|
    ...
    006f7000  6d 32 00 63 6d 6d 31 00  63 6d 6d 30 00 63 73 69  |m2.cmm1.cmm0.csi|
    ...
    006f8000  32 5f 64 61 74 61 34 00  73 64 68 69 32 5f 64 61  |2_data4.sdhi2_da|
    ...
    006f9000  31 00 47 50 5f 35 5f 31  32 00 47 50 5f 35 5f 31  |1.GP_5_12.GP_5_1|
    ...
    006fa000  65 78 63 65 65 64 65 64  00 0a 25 73 3b 20 73 74  |exceeded..%s; st|
    ...
    006fb000  30 34 2d 72 63 34 2d 30  30 30 38 32 2d 67 35 34  |04-rc4-00082-g54|

For both hexdumps above, offsets are absolute FLASH offsets, not relative
partition offsets.  Still, there is some similarity (e.g. 0x46f10 vs.
0x6f1000).
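(The apparent mapping between corrupted offsets and their source
offsets, both taken from the hexdumps above, follows a simple pattern;
the sketch below merely prints it out.  This is an observation about
the addresses, not an explanation of the corruption:)

```shell
# For n = 1..11, the bad data at 0x46f00 + n*0x10 mirrors the first
# 8 bytes found at 0x6f0000 + n*0x1000
# (e.g. 0x46f10 <- 0x6f1000, ..., 0x46fb0 <- 0x6fb000).
n=1
while [ "$n" -le 11 ]; do
    printf '0x%05x <- 0x%06x\n' $((0x46f00 + n * 0x10)) $((0x6f0000 + n * 0x1000))
    n=$((n + 1))
done
```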

2. With this patch, the failure rate of 100 reads after s2ram is either
0% or 100%.  The same is true after subsequent s2ram operations.

In contrast to the behavior without this patch, in case of corruption
the data read back is always the same: either the good data is read
back, or the exact same bad data is read back.  In case of bad data,
some blocks of 64 KiB have again been replaced by blocks containing
either data from elsewhere, or all ones.

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@...ux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds
