Date:   Wed, 5 Jul 2023 17:41:01 +0200
From:   Marc Gonzalez <marc.w.gonzalez@...e.fr>
To:     LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org
Cc:     Vladimir Murzin <vladimir.murzin@....com>,
        Will Deacon <will.deacon@....com>,
        Mark Rutland <mark.rutland@....com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>,
        Tomas Mudrunka <tomas.mudrunka@...il.com>,
        H. Peter Anvin <hpa@...or.com>, Arnd Bergmann <arnd@...nel.org>,
        Ard Biesheuvel <ard.biesheuvel@...aro.org>
Subject: RFC: Faster memtest (possibly bypassing data cache)

Hello,

When dealing with a few million devices (x86 and arm64),
it is statistically expected that "a few" of them will have
at least one bad RAM cell. (How many?)

For one particular model, we've determined that ~0.1% have
at least one bad RAM cell (ergo, a few thousand devices).

I've been wondering if someone more experienced knows:
Are these RAM cells bad from the start, or do they become bad
with time? (I assume both failure modes exist.)

Once the first bad cell is detected, is it more likely
to detect other bad cells as time goes by?
In other words, what are the failure modes of ageing RAM?


Closing the HW tangent, focusing on the SW side of things:

Since these bad RAM cells wreak havoc on the device's user,
especially with ASLR (different things crash across reboots),
I've been experimenting with mm/memtest.c as a first line
of defense.
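
For reference, the core of the existing test is basically a pattern
write/readback loop over the free memblock ranges. A simplified
paraphrase (from memory, not the exact mm/memtest.c code):

	static void __init memtest_one_pattern(u64 pattern,
					       phys_addr_t start, phys_addr_t end)
	{
		u64 *p, *s = __va(start), *e = __va(end);

		/* Fill the whole range with the pattern... */
		for (p = s; p < e; p++)
			*p = pattern;

		/* ...then read it back; the real code reserves any
		 * mismatching region via memblock so it is never used. */
		for (p = s; p < e; p++)
			if (*p != pattern)
				pr_err("memtest: bad RAM at %p\n", p);
	}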

However, I have run into a few issues.

Even though early_memtest is called, well... early, memory has
already been mapped as regular *cached* memory.

This means that when we test an area smaller than the L3 cache,
we're not even hitting RAM; we're just testing the cache hierarchy.
I suppose it /might/ make sense to test the cache hierarchy,
as it could(?) have errors as well?
However, I suspect defects in cache are much rarer
(and thus detecting them might not be worth the added run-time).

On x86, I ran a few tests using SIMD non-temporal stores
(to bypass the cache on stores), and got a 30% reduction in run-time.
(Minimal run-time is critical for being able to deploy the code
to millions of devices for the benefit of a few thousand users.)
AFAIK, there are no non-temporal loads for regular write-back
memory, so the normal loads probably thrashed the data cache.
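
The store side of what I tried looks roughly like this (userspace-style
sketch with SSE2 intrinsics; an in-kernel version would additionally
need kernel_fpu_begin()/kernel_fpu_end() around it):

	#include <stddef.h>
	#include <stdint.h>
	#include <emmintrin.h>	/* SSE2: _mm_stream_si128, _mm_sfence */

	/* Fill [buf, buf + bytes) with 'pattern' using non-temporal
	 * stores, so the writes bypass the cache hierarchy.
	 * buf must be 16-byte aligned, bytes a multiple of 16. */
	static void fill_nt(void *buf, size_t bytes, uint64_t pattern)
	{
		__m128i v = _mm_set1_epi64x(pattern);
		char *p = buf, *end = p + bytes;

		for (; p < end; p += 16)
			_mm_stream_si128((__m128i *)p, v);

		_mm_sfence();	/* make the NT stores globally visible */
	}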

I was hoping to be able to test a different implementation:

When we enter early_memtest(), we remap [start, end]
as UC (or maybe WC?) so as to entirely bypass the cache.
We read/write using the largest size available for stores/loads,
e.g. entire cache lines on recent x86 HW.
Then when we leave, we remap as was done originally.
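
Something along these lines; a very rough sketch, assuming
set_memory_uc()/set_memory_wb() (or some equivalent) are usable this
early, which is precisely the part I'm unsure about. test_one_range()
is a hypothetical stand-in for the existing pattern write/readback:

	static void __init memtest_uncached(phys_addr_t start, phys_addr_t end)
	{
		unsigned long vaddr = (unsigned long)__va(start);
		int numpages = (end - start) >> PAGE_SHIFT;

		set_memory_uc(vaddr, numpages);	/* bypass the cache entirely */
		test_one_range(start, end);	/* hypothetical: existing pattern loop */
		set_memory_wb(vaddr, numpages);	/* restore the original (cached) mapping */
	}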

Is that possible?

Hopefully, the other cores are not started at this point?
(Otherwise this whole charade would be pointless.)

To summarize: is it possible to tweak memtest so that it runs
faster, while making sure it actually exercises RAM (and not
just the caches) in all cases?

Regards,

Marc Gonzalez
