Message-ID: <Z4r99SQYV9v5vBuR@kbusch-mbp>
Date: Fri, 17 Jan 2025 18:03:49 -0700
From: Keith Busch <kbusch@...nel.org>
To: Stefan <linux-kernel@...g.de>
Cc: Christoph Hellwig <hch@....de>,
Thorsten Leemhuis <regressions@...mhuis.info>,
bugzilla-daemon@...nel.org, Bruno Gravato <bgravato@...il.com>,
Adrian Huang <ahuang12@...ovo.com>,
Linux kernel regressions list <regressions@...ts.linux.dev>,
linux-nvme@...ts.infradead.org, Jens Axboe <axboe@...com>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [Bug 219609] File corruptions on SSD in 1st M.2 socket of AsRock
X600M-STX + Ryzen 8700G
On Fri, Jan 17, 2025 at 10:31:55PM +0100, Stefan wrote:
> As already mentioned, my SSD has no DRAM and uses HMB (Host memory
> buffer).
An HMB and a volatile write cache are not necessarily intertwined. A
device can have both. Generally speaking, you'd expect the HMB to hold
SSD metadata, not user data, whereas a VWC usually holds just user data.
The spec also requires that the device maintain data integrity even
through an unexpected, sudden loss of access to the HMB, but there is no
such requirement for a VWC.
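You can see the two capabilities are reported independently in Identify
Controller: HMPRE/HMMIN advertise HMB support and the VWC byte
advertises the volatile write cache. Rough, untested sketch using the
admin passthrough ioctl (assumes /dev/nvme0 and root; nvme-cli's id-ctrl
prints the same fields):

/* Untested sketch: issue Identify Controller and print the HMB and
 * volatile write cache fields to show they're advertised independently.
 * Assumes /dev/nvme0 exists and you have CAP_SYS_ADMIN.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
	uint8_t id[4096];
	struct nvme_admin_cmd cmd;
	uint32_t hmpre;
	int fd = open("/dev/nvme0", O_RDONLY);

	if (fd < 0) {
		perror("/dev/nvme0");
		return 1;
	}

	memset(&cmd, 0, sizeof(cmd));
	cmd.opcode = 0x06;	/* Identify */
	cmd.cdw10 = 1;		/* CNS 01h: Identify Controller */
	cmd.addr = (uintptr_t)id;
	cmd.data_len = sizeof(id);

	if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
		perror("identify");
		return 1;
	}

	/* HMPRE, bytes 275:272: preferred HMB size in 4KiB units */
	hmpre = id[272] | ((uint32_t)id[273] << 8) |
		((uint32_t)id[274] << 16) | ((uint32_t)id[275] << 24);
	printf("HMPRE: %u (4KiB units)\n", hmpre);
	/* VWC, byte 525, bit 0: a volatile write cache is present */
	printf("VWC present: %u\n", id[525] & 1);

	close(fd);
	return 0;
}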
> (It has a non-volatile SLC cache.) Disabling the volatile write cache
> has no significant effect on read/write performance of large files,
Devices are free to implement whatever hierarchy of non-volatile caches
they want without advertising it to the host, but if those caches are
being called "volatile", then I think something has been misinterpreted.
> because the HMB size is only 40 MB. But things like file deletions may
> be slower.
>
> AFAIS the corruptions occur with both kinds of SSDs, the ones that
> have their own DRAM and the ones that use HMB.
Yeah, that was the point of the experiment. If corruption still happens
with the VWC off, that helps rule out host buffer size/alignment (which
is where this bz started) as a triggering condition. Disabling the VWC
is not a "fix"; it's just a debug data point. If the corruption goes
away with it off, though, then we can't really conclude anything for
this issue.
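
For reference, the data point I'm talking about is just Set Features
with the Volatile Write Cache feature (FID 06h) and WCE cleared, which
is roughly what nvme-cli's set-feature command does. Untested sketch,
assuming /dev/nvme0 and root:

/* Untested sketch: disable the volatile write cache via Set Features,
 * FID 06h (Volatile Write Cache), WCE = 0.  This is a debug data point,
 * not a fix, and is only valid on a controller that reports a VWC.
 * Assumes /dev/nvme0 and CAP_SYS_ADMIN.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
	struct nvme_admin_cmd cmd;
	int fd = open("/dev/nvme0", O_RDWR);

	if (fd < 0) {
		perror("/dev/nvme0");
		return 1;
	}

	memset(&cmd, 0, sizeof(cmd));
	cmd.opcode = 0x09;	/* Set Features */
	cmd.cdw10 = 0x06;	/* FID 06h: Volatile Write Cache */
	cmd.cdw11 = 0;		/* WCE = 0: disable the write cache */

	if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
		perror("set-features");
		return 1;
	}
	printf("volatile write cache disabled\n");

	close(fd);
	return 0;
}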