Message-ID: <20200901053426.GB24560@infradead.org>
Date: Tue, 1 Sep 2020 06:34:26 +0100
From: Christoph Hellwig <hch@...radead.org>
To: Matthew Wilcox <willy@...radead.org>
Cc: Christoph Hellwig <hch@...radead.org>, linux-xfs@...r.kernel.org,
linux-fsdevel@...r.kernel.org,
"Darrick J . Wong" <darrick.wong@...cle.com>,
linux-block@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 04/11] block: Add bio_for_each_thp_segment_all
On Mon, Aug 31, 2020 at 08:48:37PM +0100, Matthew Wilcox wrote:
> static void iomap_read_end_io(struct bio *bio)
> {
> 	int i, error = blk_status_to_errno(bio->bi_status);
>
> 	for (i = 0; i < bio->bi_vcnt; i++) {
> 		struct bio_vec *bvec = &bio->bi_io_vec[i];
This should probably use bio_for_each_bvec_all instead of directly
poking into the bio. I'd also be tempted to move the loop body into
a separate helper, but that's just a slight stylistic preference.
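(For reference, bio_for_each_bvec_all is just a thin wrapper around the
same walk -- from include/linux/bio.h, roughly:

#define bio_for_each_bvec_all(bvl, bio, i)		\
	for (i = 0, bvl = bio_first_bvec_all(bio);	\
	     i < (bio)->bi_vcnt; i++, bvl++)

so it shouldn't change the generated code, it just keeps the bi_io_vec
poking in one place.)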
> 		size_t offset = bvec->bv_offset;
> 		size_t length = bvec->bv_len;
> 		struct page *page = bvec->bv_page;
>
> 		while (length > 0) {
> 			size_t count = thp_size(page) - offset;
>
> 			if (count > length)
> 				count = length;
> 			iomap_read_page_end_io(page, offset, count, error);
> 			page += (offset + count) / PAGE_SIZE;
Shouldn't the PAGE_SIZE here be thp_size?
> Maybe I'm missing something important here, but it's significantly
> simpler code -- iomap_read_end_io() goes down from 816 bytes to 560 bytes
> (256 bytes less!). iomap_read_page_end_io is inlined into it both before
> and after.
Yes, that's exactly why I think avoiding bio_for_each_segment_all is
a good idea in general.
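(That's also where most of the extra text comes from:
bio_for_each_segment_all has to materialize one single-page segment at
a time, roughly

#define bio_for_each_segment_all(bvl, bio, iter) \
	for (bvl = bvec_init_iter_all(&iter); bio_next_segment((bio), &iter); )

and bio_next_segment() does its work through bvec_advance(), which is
exactly the code you quote below.)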
> There is some weirdness going on with regards to bv_offset that I don't
> quite understand. In the original bvec_advance:
>
> 	bv->bv_page = bvec->bv_page + (bvec->bv_offset >> PAGE_SHIFT);
> 	bv->bv_offset = bvec->bv_offset & ~PAGE_MASK;
>
> which I cargo-culted into bvec_thp_advance as:
>
> 	bv->bv_page = thp_head(bvec->bv_page +
> 			(bvec->bv_offset >> PAGE_SHIFT));
> 	page_size = thp_size(bv->bv_page);
> 	bv->bv_offset = bvec->bv_offset -
> 			(bv->bv_page - bvec->bv_page) * PAGE_SIZE;
>
> Is it possible to have a bvec with an offset that is larger than the
> size of bv_page? That doesn't seem like a useful thing to do, but
> if that needs to be supported, then the code up top doesn't do that.
> We maybe gain a little bit by counting length down to 0 instead of
> counting it up to bv_len. I dunno; reading the code over now, it
> doesn't seem like that much of a difference.
Drivers can absolutely see a bv_offset that is larger than the page
size, due to bio splitting.  However, the submitting file system should
never see one unless it creates one itself, which would be stupid.
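To illustrate the driver side (made-up helper, just for demonstration):
the multi-page iterator adds bi_bvec_done to the stored offset, so after
a split the per-segment bv_offset can exceed PAGE_SIZE:

static void dump_remaining_bvecs(struct bio *bio)
{
	struct bio_vec bv;
	struct bvec_iter iter;

	/* bv.bv_offset includes bi_bvec_done and may be > PAGE_SIZE */
	bio_for_each_bvec(bv, bio, iter)
		pr_info("bv_offset=%u bv_len=%u\n", bv.bv_offset, bv.bv_len);
}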
And yes, eventually bv_page and bv_offset should be replaced with a

	phys_addr_t	bv_phys;

and life would become simpler in many places (and the bvec would
shrink for most common setups as well).
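Rough sketch of that idea (nothing like this exists today, and the
helper names are made up):

struct bio_vec {
	phys_addr_t	bv_phys;
	unsigned int	bv_len;
};

/* recover the old page/offset pair where callers still want it */
static inline struct page *bvec_page(const struct bio_vec *bv)
{
	return pfn_to_page(bv->bv_phys >> PAGE_SHIFT);
}

static inline unsigned int bvec_offset(const struct bio_vec *bv)
{
	return bv->bv_phys & ~PAGE_MASK;
}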
For now I'd end up with something like:
static void iomap_read_end_bvec(struct page *page, size_t offset,
		size_t length, int error)
{
	while (length > 0) {
		size_t page_size = thp_size(page);
		size_t count = min(page_size - offset, length);

		iomap_read_page_end_io(page, offset, count, error);
		page += (offset + count) / page_size;
		length -= count;
		offset = 0;
	}
}

static void iomap_read_end_io(struct bio *bio)
{
	int i, error = blk_status_to_errno(bio->bi_status);
	struct bio_vec *bvec;

	bio_for_each_bvec_all(bvec, bio, i)
		iomap_read_end_bvec(bvec->bv_page, bvec->bv_offset,
				bvec->bv_len, error);
	bio_put(bio);
}
and maybe even merge iomap_read_page_end_io into iomap_read_end_bvec.