Message-ID: <20210105182728.GG6908@magnolia>
Date:   Tue, 5 Jan 2021 10:27:28 -0800
From:   "Darrick J. Wong" <darrick.wong@...cle.com>
To:     Russell King - ARM Linux admin <linux@...linux.org.uk>
Cc:     linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
        "Theodore Ts'o" <tytso@....edu>,
        Andreas Dilger <adilger.kernel@...ger.ca>,
        linux-ext4@...r.kernel.org, Will Deacon <will@...nel.org>
Subject: Re: Aarch64 EXT4FS inode checksum failures - seems to be weak memory
 ordering issues

On Tue, Jan 05, 2021 at 03:47:26PM +0000, Russell King - ARM Linux admin wrote:
> Hi,
> 
> This is an update on where I am with this long standing issue at the
> current time.
> 
> Since 5.4, I have been struggling with several of my ARM64 systems,
> of different SoC vendors and with differing filesystem media, which
> have sporadically reported inode checksum failures on their root
> filesystems.  The time taken to report this has been anything between
> a few hours and three months of uptime, making the problem
> unrealistic to bisect.

Aha, I was wondering what happened to this bug report. :)

> The issue was first seen on my SolidRun Clearfog CX LX2160A based
> system, but was also subsequently noticed on my Armada 8040 based
> systems running kernels 5.4 and later.  Kernel 5.2 has proven stable,
> with 566 days of uptime and no issues.
> 
> It has taken a long time to get debugging in place to see what is
> going on - this is currently detailed on the front page of
> www.armlinux.org.uk, which has become a blog of this problem, since
> almost no one has taken any interest in it.
> 
> However, over the last couple of days, a way to reproduce it has been
> found, at least for the LX2160A based system.  Power down, leave the
> machine powered off for some time. Power up, log in and run:
> 
> while :; do sleep 5; find /var /usr /bin /sbin -type f -print0 | \
> 	xargs -0 md5sum >/dev/null; done

Does that fill up the page cache enough to push memory reclaim?
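
(If it turns out to be inode/dentry reclaim rather than raw page cache
pressure that matters, something like the sketch below - which just
pokes the standard /proc/sys/vm/drop_caches knob as root - might make
the window easier to hit than waiting for natural reclaim.  Untested
and purely illustrative.)

#include <stdio.h>
#include <stdlib.h>

/*
 * Rough sketch: ask the VM to drop reclaimable slab objects (dentries
 * and inodes) so that later accesses have to rebuild incore inodes
 * from disk.  Writing "3" instead of "2" also drops the page cache.
 * Needs root.
 */
int main(void)
{
	FILE *f = fopen("/proc/sys/vm/drop_caches", "w");

	if (!f) {
		perror("/proc/sys/vm/drop_caches");
		return EXIT_FAILURE;
	}
	fputs("2\n", f);
	if (fclose(f) != 0) {
		perror("drop_caches");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}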

> Within a few minutes this seems to spit out an inode checksum failure
> if the problem exists.  However, testing for the problem _not_
> existing is quite difficult - just because it doesn't appear in the
> first few minutes does not mean it has been solved - see above, where
> it can take three months.
> 
> However, evidence is currently pointing towards commit 22ec71615d82
> ("arm64: io: Relax implicit barriers in default I/O accessors") having
> revealed this problem. Will is very certain that this change is
> correct, and we feel that it may have exposed some other issue in the
> Aarch64 code.
> 
> Further attempts seem to suggest that the problem is specifically the
> barrier in __iormb().  Leaving __iowmb() untouched, and changing the
> barrier in __iormb() from dma_rmb() to rmb(), _appears_ to result in
> the problem disappearing.  "Appears" is stressed because further
> testing is needed - and it will probably take many months before we
> know for certain.
> 
> However, this suggests that there is a memory ordering bug with
> aarch64 somewhere.  Will can follow up with his own thoughts in reply
> to this email.
> 
> We don't know if it is:
> - the kernel.
> - the Cortex A72.
> - the Cache coherent interconnect.
> 
> I don't think it's the CCI, as I believe the Armada 8040 uses
> Marvell's own IP for that, based around Aurora 2 (the functional spec
> doesn't make it clear).  Remember, I'm seeing this problem on both
> the Armada 8040 and the LX2160A.  We aren't aware of any errata for
> the A72 in this area.  So, we're down to something in the kernel.
> 
> It is possible that it could be compiler related, but I don't see
> that; if the "dmb oshld" were strong enough, then the subsequent
> reads to checksum the inode data, after that data has been DMA'd into
> memory, should be reading the correct values from memory already -
> but they aren't.  And if changing "dmb oshld" to "dsb ld" means that
> the code then reads the right values, that to me points fairly
> definitively to a hardware problem.
> 
> Now, ext4fs is pretty good at checksumming the metadata in the
> filesystem - each inode is individually checksummed with CRC32C, and
> the checksum is stored as two 16-bit halves in the inode.
> Directories are also checksummed.  ext4fs validates the inode
> checksum on every ext4_iget() call.  Do other filesystems do similar?

XFS and ext4 both validate the ondisk csum when constructing their
incore inodes, and set them when flushing the incore inode back to disk.
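
(For anyone following along, the rough shape of that checksum - CRC32C
over the raw inode with the stored checksum bytes treated as zero, the
result kept as two 16-bit halves - looks something like the userspace
sketch below.  The seed value and the field offsets here are made-up
stand-ins for illustration, not the exact ext4_inode_csum()
conventions.)

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78). */
static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
{
	crc = ~crc;
	while (len--) {
		crc ^= *buf++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
	}
	return ~crc;
}

int main(void)
{
	uint8_t inode[256];		/* stand-in for a raw on-disk inode */
	uint32_t seed = 0xdeadbeef;	/* stand-in for the per-inode seed */
	uint32_t csum;

	memset(inode, 0xa5, sizeof(inode));

	/*
	 * The stored checksum bytes are treated as zero while summing.
	 * The offsets below are illustrative, not the real ext4 layout.
	 */
	memset(&inode[124], 0, 2);	/* "csum_lo" */
	memset(&inode[130], 0, 2);	/* "csum_hi" */

	csum = crc32c(seed, inode, sizeof(inode));

	printf("csum_lo = 0x%04x, csum_hi = 0x%04x\n",
	       csum & 0xffff, (csum >> 16) & 0xffff);
	return 0;
}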

I vaguely wonder if there's something else going on here, like ... a
background memory reclaim thread running on one cpu writes out an inode
core with new checksum (because reading the file bumped the atime), and
then another cpu comes along and has to reconstitute the (just
reclaimed) incore inode, but for whatever reason doesn't get the version
that the other cpu just wrote?

That's like 130% speculative though, and note that I have no idea what
the "outer shareable" domain[1] is.

[1] https://developer.arm.com/docs/ddi0597/h/base-instructions-alphabetic-order/dmb-data-memory-barrier
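
(For my own notes: if I'm reading the arm64 barrier definitions right,
the two read-barrier flavours being compared in __iormb() boil down to
the instructions below.  A stand-alone sketch that only builds on
arm64; dma_rmb() is the DMB form restricted to the outer shareable
domain, rmb() the heavier DSB form.)

/* Illustrative only - not kernel code. */
static inline void barrier_dmb_oshld(void)
{
	/* what dma_rmb() expands to on arm64 */
	asm volatile("dmb oshld" ::: "memory");
}

static inline void barrier_dsb_ld(void)
{
	/* what rmb() expands to on arm64 */
	asm volatile("dsb ld" ::: "memory");
}

int main(void)
{
	barrier_dmb_oshld();
	barrier_dsb_ld();
	return 0;
}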

> Anyway, here is the patch I'm currently running, which _seems_ so far
> to be the minimal fix for my problems.  Will thinks that this is
> hiding the real problem by adding barriers, but I don't see there's
> much choice but to apply it - I don't see what other debugging could
> be done without the use of expensive hardware simulation, or detailed
> hardware-level tracing of the kind that a silicon vendor or ARM Ltd
> would have.

(FWIW I haven't seen checksum errors on xfs or ext4 on arm64, though
most of my testing is relegated to beating on a raspberry pi very
slowly...)

> I'm at the end of what I can do with this; I'm going to keep this patch
> in my kernel, since it fixes it for me.

Well if you've managed to hit this on multiple different machines after
a long soak time, I wonder how many other people will trip over this too.
It wouldn't be the first time a fs stunts performance to avoid
corruption. ;)

> Will would like a reliable reproducer - yes, that would be ideal, but
> I'm afraid that's a mammoth task in itself.  It's taken a year to
> find this method of reproducing it.
> 
> There's also the matter that, in one case I've seen, the ext4
> checksum has been wrong.  The subsequent hexdump has been correct,
> and the post-hexdump checksum recalculation has remained incorrect -
> and the same value as the first incorrect checksum.  However, the
> inode with the _same_ checksum has subsequently been validated
> correctly by the kernel and by e2fsck.  I cannot explain this.

Strange.  You're just using the same ext4_inode_csum() that everything
else uses, right?

--D

> 
> diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
> index ff50dd731852..be63c47aecc4 100644
> --- a/arch/arm64/include/asm/io.h
> +++ b/arch/arm64/include/asm/io.h
> @@ -95,7 +95,7 @@ static inline u64 __raw_readq(const volatile void __iomem *addr)
>  ({									\
>  	unsigned long tmp;						\
>  									\
> -	dma_rmb();								\
> +	rmb();								\
>  									\
>  	/*								\
>  	 * Create a dummy control dependency from the IO read to any	\
> 
> -- 
> RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
> FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!
