Message-ID: <CANubcdULTQo5jF7hGSWFqXw6v5DhEg=316iFNipMbsyz64aneg@mail.gmail.com>
Date: Sat, 22 Nov 2025 14:42:43 +0800
From: Stephen Zhang <starzhangzsd@...il.com>
To: Ming Lei <ming.lei@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
nvdimm@...ts.linux.dev, virtualization@...ts.linux.dev,
linux-nvme@...ts.infradead.org, gfs2@...ts.linux.dev, ntfs3@...ts.linux.dev,
linux-xfs@...r.kernel.org, zhangshida@...inos.cn
Subject: Re: Fix potential data loss and corruption due to Incorrect BIO Chain Handling
Ming Lei <ming.lei@...hat.com> wrote on Sat, Nov 22, 2025 at 11:35:
>
> On Fri, Nov 21, 2025 at 04:17:39PM +0800, zhangshida wrote:
> > From: Shida Zhang <zhangshida@...inos.cn>
> >
> > Hello everyone,
> >
> > We have recently encountered a severe data loss issue on kernel version 4.19,
> > and we suspect the same underlying problem may exist in the latest kernel versions.
> >
> > Environment:
> > * **Architecture:** arm64
> > * **Page Size:** 64KB
> > * **Filesystem:** XFS with a 4KB block size
> >
> > Scenario:
> > The issue occurs while running a MySQL instance where one thread appends data
> > to a log file, and a separate thread concurrently reads that file to perform
> > CRC checks on its contents.
> >
> > Problem Description:
> > Occasionally, the reading thread detects data corruption. Specifically, it finds
> > that stale data has been exposed in the middle of the file.
> >
> > We have captured four instances of this corruption in our production environment.
> > In each case, we observed a distinct pattern:
> > The corruption starts at an offset that aligns with the beginning of an XFS extent.
> > The corruption ends at an offset that is aligned to the system's `PAGE_SIZE` (64KB in our case).
> >
> > Corruption Instances:
> > 1. **Start:** `0x73be000`,   **End:** `0x73c0000`   (Length: 8KB)
> > 2. **Start:** `0x10791a000`, **End:** `0x107920000` (Length: 24KB)
> > 3. **Start:** `0x14535a000`, **End:** `0x145b70000` (Length: 8280KB)
> > 4. **Start:** `0x370d000`,   **End:** `0x3710000`   (Length: 12KB)
> >
> > After analysis, we believe the root cause lies in the handling of chained bios, specifically
> > in out-of-order I/O completion.
> >
> > Consider a bio chain where `__bi_remaining` is decremented as each bio in the chain completes.
> > For example, if a chain consists of three bios (bio1 -> bio2 -> bio3) with
> > `__bi_remaining` counts:
> > 1->2->2
>
> Right.
>
> > If the bios complete in reverse order, there will be a problem.
> > If bio3 completes first, the counts become:
> > 1->2->1
>
> Yes.
>
> > Then bio2 completes:
> > 1->1->0
>
> No, it is supposed to be 1->1->1.
>
> When bio 1 completes, it will become 0->0->0
>
> bio3's `__bi_remaining` won't drop to zero until bio2's reaches
> zero, and bio2 won't be done until bio1 is ended.
>
> Please look at bio_endio():
>
> void bio_endio(struct bio *bio)
> {
> again:
>         if (!bio_remaining_done(bio))
>                 return;
>         ...
>         if (bio->bi_end_io == bio_chain_endio) {
>                 bio = __bio_chain_endio(bio);
>                 goto again;
>         }
>         ...
> }
>
Exactly, bio_endio() handles the process perfectly, but it seems to forget
to check whether the very first bio's `__bi_remaining` has dropped to zero
before proceeding to the next bio:
-----
static struct bio *__bio_chain_endio(struct bio *bio)
{
        struct bio *parent = bio->bi_private;

        if (bio->bi_status && !parent->bi_status)
                parent->bi_status = bio->bi_status;
        bio_put(bio);
        return parent;
}

static void bio_chain_endio(struct bio *bio)
{
        bio_endio(__bio_chain_endio(bio));
}
-----
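
To make the counting above concrete, here is a small user-space sketch that
replays a bio1 -> bio2 -> bio3 chain completing in reverse order. It is not
kernel code: `struct fake_bio`, `remaining` and `endio()` are simplified
stand-ins for `struct bio`, `__bi_remaining` and `bio_endio()`/
`bio_chain_endio()`, and the real bio_remaining_done() short-circuits
un-chained bios via the BIO_CHAIN flag rather than decrementing them; the
numbers here just mirror the 1->2->2 example used in this thread:
-----
/*
 * Minimal user-space model of the chained-bio completion accounting
 * discussed above.  Not kernel code: the fields and helpers are
 * simplified stand-ins for __bi_remaining, bi_private and bio_endio().
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct fake_bio {
        int remaining;           /* stands in for __bi_remaining */
        struct fake_bio *parent; /* stands in for bi_private set by bio_chain() */
};

/* stands in for bio_remaining_done(): only the last drop completes the bio */
static bool remaining_done(struct fake_bio *b)
{
        return --b->remaining == 0;
}

/* stands in for the bio_endio() loop: walk to the parent only at zero */
static void endio(struct fake_bio *b)
{
again:
        if (!remaining_done(b))
                return;
        if (b->parent) {
                b = b->parent;
                goto again;
        }
}

static void dump(const char *when, struct fake_bio *b1, struct fake_bio *b2,
                 struct fake_bio *b3)
{
        printf("%-18s %d->%d->%d\n", when,
               b1->remaining, b2->remaining, b3->remaining);
}

int main(void)
{
        /* bio1 -> bio2 -> bio3: each chaining step bumps the parent's count */
        struct fake_bio bio3 = { .remaining = 2, .parent = NULL  };
        struct fake_bio bio2 = { .remaining = 2, .parent = &bio3 };
        struct fake_bio bio1 = { .remaining = 1, .parent = &bio2 };

        dump("initial:", &bio1, &bio2, &bio3);          /* 1->2->2 */
        endio(&bio3);
        dump("after bio3 ends:", &bio1, &bio2, &bio3);  /* 1->2->1 */
        endio(&bio2);
        dump("after bio2 ends:", &bio1, &bio2, &bio3);  /* 1->1->1 */
        endio(&bio1);
        dump("after bio1 ends:", &bio1, &bio2, &bio3);  /* 0->0->0 */
        return 0;
}
-----
Running it prints 1->2->2, 1->2->1, 1->1->1 and finally 0->0->0: no parent
in the chain reaches zero before its child has fully ended, which is the
ordering Ming describes above.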
Thanks,
Shida
>
> Thanks,
> Ming
>