Message-ID: <aCUWWBmhAOFHDszj@dread.disaster.area>
Date: Thu, 15 May 2025 08:16:56 +1000
From: Dave Chinner <david@...morbit.com>
To: Christoph Hellwig <hch@...radead.org>
Cc: cen zhang <zzzccc427@...il.com>, cem@...nel.org,
linux-xfs@...r.kernel.org, linux-kernel@...r.kernel.org,
baijiaju1990@...il.com, zhenghaoran154@...il.com
Subject: Re: [BUG] Five data races in XFS Filesystem, one
potentially harmful
On Wed, May 14, 2025 at 06:17:15AM -0700, Christoph Hellwig wrote:
> > 1. Race in `xfs_bmapi_reserve_delalloc()` and `xfs_vn_getattr()`
> > ----------------------------------------------------------------
> >
> > A data race on `ip->i_delayed_blks`.
>
> This is indeed a case for data_race as getattr is just reporting without any
> locks. Can you send a patch?
No, please don't play data_race() whack-a-mole with
xfs_vn_getattr().
Please introduce infrastructure that allows us to mark entire
functions with something like __attribute__(data_race) so the
sanitiser infrastructure knows that all accesses within that
function are known to be potentially racy and should not be warned
about.
We can then mark every ->getattr method in every filesystem the same
way in a single patch, and knock out that entire class of false
positive in one hit. That's a much more efficient way of dealing
with this problem than one false positive at a time.
> > 2. Race on `xfs_trans_ail_update_bulk` in `xfs_inode_item_format`
> > -----------------------------------------------------------------
> >
> > We observed unsynchronized access to `lip->li_lsn`, which may exhibit
> > store/load tearing. However, we did not observe any symptoms
> > indicating harmful behavior.
>
> I think we'll need READ_ONCE/WRITE_ONCE here to be safe on 64-bit
> systems to avoid tearing/reordering. But that still won't help
> with 32-bit systems.
We had problems with LSN tearing on 32 bit systems 20-odd years ago
at SGI on MIPS Irix systems. This is why
xfs_trans_ail_copy_lsn() exists - LSNs are only ever updated under
the AIL lock, so any read that might result in a critical tear (e.g.
flush lsn tracking in inodes and dquots) was done under the AIL
lock on 32 bit systems.
> Other lsn fields use an atomic64_t, which
> is pretty heavy-handed.
They use atomic64_t because they aren't protected by a specific lock
anymore. This was not done for torn read/write avoidance, but for
scalability optimisation. There is no reason for lip->li_lsn to be
an atomic, as all updates are done under the same serialising lock
(the ail->ail_lock).
As for reordering, nothing that is reading the lip->li_lsn should be
doing so in a place where compiler reordering should make any
difference. It's only updated in two places (set on AIL insert,
cleared on AIL remove) and neither of these two things will race
with readers using the lsn for fsync/formatting/verifier purposes.
I think that even the old use of xfs_trans_ail_copy_lsn() is likely no
longer necessary because flushing of dquots/inodes and reading the
LSN are now fully gated on the objects being locked and unpinned. The
LSN updates occur whilst the object is pinned and pinning can
only occur whilst the object is locked. Hence we -cannot- be doing
simultaneous lip->li_lsn updates and reading lip->li_lsn for
formatting purposes....
We extensively copy LSNs into the V5 on disk format with "racy"
reads and these on disk LSNs are critical to correct recovery
processing. If torn lip->li_lsn reads are actually happening then we
should be seeing this in random whacky recovery failures on
platforms where this happens. The V5 format has been around for well
over a decade now, so we should have seen some evidence of this if
torn LSN reads were actually a real world problem.
> > Function: xfs_alloc_longest_free_extent+0x164/0x580
>
> > Function: xfs_alloc_update_counters+0x238/0x720 fs/xfs/libxfs/xfs_alloc.c:908
>
> Both of these should be called with b_sema held.
Definitely not. Yes, xfs_alloc_update_counters() must be called with
the AGF locked, but that's because it's -modifying the AGF-. The
update of the perag piggybacks on this so we don't lose writes. i.e.
we get write vs write serialisation here, we are explicitly not
trying to provide write vs read serialisation.
That's because the AG selection algorithm in
xfs_bmap_btalloc_select_lengths() is an optimistic, unlocked
algorithm. It always has been. It uses the in-memory
pag variables first to select an AG, and we don't care if we race
with an ongoing allocation, because if we select that AG we will
recheck the selection (i.e. the free space info in the pag) once
we've locked the AGF in xfs_alloc_fix_freelist().
IOWs, this is simply another unlocked check, lock, check again
pattern. It's a lot further apart than your typical single logic
statement like:

	if (foo) {
		lock(foo_lock);
		if (foo) {
			/* do something */
		}
		unlock(foo_lock);
	}
But it is exactly the same logic pattern where no binding decision
is made until all the correct locks are held.
As I've already said - the patterns are endemic in XFS. They may not
be as obvious as the common if - lock - if structure above, but
that's just the simplest form of this common lock avoidance pattern.
IOWs, these data races need a lot more careful analysis and
knowledge of what problem the unlocked reads are solving to
determine what the correct fix might be.
To me, having to add READ_ONCE() or data_race() - and maybe comments
- to hundreds of variable accesses across the code base adds noise
without really adding anything of value. This isn't finding new
bugs - it's largely crying wolf about structures and algorithms we
intentionally designed to work this way long ago...
> Does your tool
> treat a semaphore with an initial count of 1 as a lock? That is still
> a pattern in Linux as mutexes don't allow non-owner unlocks.
If it did, then the bp->b_addr init race wouldn't have been flagged
as an issue.
-Dave.
--
Dave Chinner
david@...morbit.com