Message-ID: <q4jjkhgpahmrr3z7d5qn7qhml3kqtj3roybuykkhfefxlezdbf@y4lbf6ut4siw>
Date: Tue, 24 Sep 2024 07:30:44 -0400
From: Kent Overstreet <kent.overstreet@...ux.dev>
To: David Wang <00107082@....com>
Cc: linux-bcachefs@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [BUG?] bcachefs performance: read is way too slow when a file has no overwrite.
On Tue, Sep 24, 2024 at 07:08:07PM GMT, David Wang wrote:
> Hi,
>
> At 2024-09-07 18:34:37, "David Wang" <00107082@....com> wrote:
> >At 2024-09-07 01:38:11, "Kent Overstreet" <kent.overstreet@...ux.dev> wrote:
> >>That's because checksums are at extent granularity, not block: if you're
> >>doing O_DIRECT reads that are smaller than the writes the data was
> >>written with, performance will be bad because we have to read the entire
> >>extent to verify the checksum.
> >
> >Based on the result:
> >1. The row with prepare-write size 4K stands out here.
> >When files were prepared with a 4K write size, the subsequent
> > read performance is worse. (I did double-check the result,
> >but it is possible that I missed some contributing factors.)
> >2. Without O_DIRECT, read performance seems correlated with the difference
> > between read size and prepare-write size, but with O_DIRECT, the correlation is not obvious.
> >
> >And, to mention it again, if I **thoroughly** overwrite the files with an fio write test
> >(using the same size), the read performance afterwards is very good:
> >
>
> An update on the IO patterns (bio start address and size, in sectors; address &= -address)
> observed between bcachefs and the block layer:
>
> 4K-Direct-Read a file created by loop of `write(fd, buf, 1024*4)`:

You're still testing small reads against big extents. Turn off data
checksumming if you want to test that, or wait for block-granular
checksums to land.
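
The read amplification described above follows directly from checksums being
stored per extent: a read smaller than the extent containing it forces the
whole extent to be fetched and checksummed before the small piece can be
returned. A minimal sketch of that arithmetic (the extent and read sizes below
are illustrative assumptions, not measurements from this thread):

```python
def bytes_read_per_request(read_size: int, extent_size: int) -> int:
    """Bytes actually read from disk to serve one read request.

    With extent-granularity checksums, the full extent must be read
    and verified even when the application asked for less.
    """
    return max(read_size, extent_size)

def amplification(read_size: int, extent_size: int) -> float:
    """Ratio of bytes read from disk to bytes the caller requested."""
    return bytes_read_per_request(read_size, extent_size) / read_size

# 4 KiB O_DIRECT reads from files written as 1 MiB extents:
print(amplification(4096, 1 << 20))     # -> 256.0 (read 1 MiB to serve 4 KiB)

# Reads matching the extent size see no amplification:
print(amplification(1 << 20, 1 << 20))  # -> 1.0
```

This also shows why fully rewriting the files with same-sized writes restores
performance: the rewrite leaves extents the same size as the subsequent reads,
so the amplification factor drops to 1.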
I already explained what's going on, so this isn't very helpful.