Message-ID: <ZLjEmULp8gQ4TkGf@infradead.org>
Date: Wed, 19 Jul 2023 22:22:33 -0700
From: Christoph Hellwig <hch@...radead.org>
To: Matthew Wilcox <willy@...radead.org>
Cc: Peng Zhang <zhangpeng362@...wei.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, sidhartha.kumar@...cle.com,
akpm@...ux-foundation.org, wangkefeng.wang@...wei.com,
sunnanyong@...wei.com, Kent Overstreet <kent.overstreet@...il.com>
Subject: Re: [PATCH 2/6] mm/page_io: use a folio in sio_read_complete()
On Mon, Jul 17, 2023 at 02:40:24PM +0100, Matthew Wilcox wrote:
> > for (p = 0; p < sio->pages; p++) {
> > - struct page *page = sio->bvec[p].bv_page;
> > + struct folio *folio = page_folio(sio->bvec[p].bv_page);
> >
> > - SetPageUptodate(page);
> > - unlock_page(page);
> > + folio_mark_uptodate(folio);
> > + folio_unlock(folio);
> > }
>
> I'm kind of shocked this works today. Usually bvecs coalesce adjacent
> pages into a single entry, so you need to use a real iterator like
> bio_for_each_folio_all() to extract individual pages from a bvec.
> Maybe the sio bvec is constructed inefficiently.
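[For illustration only, not part of the patch: a minimal sketch of the
bio-based pattern being referred to above.  bio_add_page() can merge
physically contiguous pages into a single bvec entry, so a bio
completion handler walks folios with bio_for_each_folio_all() rather
than indexing the bvec array directly; example_end_io() is a made-up
name.]

/* Hedged sketch: per-folio completion for a bio whose bvec entries
 * may each cover more than one page. */
static void example_end_io(struct bio *bio)
{
	struct folio_iter fi;

	bio_for_each_folio_all(fi, bio) {
		folio_mark_uptodate(fi.folio);
		folio_unlock(fi.folio);
	}
	bio_put(bio);
}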
sio_read_complete() is a kiocb.ki_complete handler.  There is no
coalescing going on for ITER_BVEC iov_iters, which share nothing
but the underlying data structure with the block I/O path.
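
[For context, a minimal sketch of the submission side that makes the
direct sio->bvec[p].bv_page indexing safe, loosely following
mm/page_io.c; example_swap_read_submit() is a made-up name and the
exact fields used are illustrative assumptions, not verbatim kernel
code.]

/* Each page gets its own bvec slot; the array is handed to an
 * ITER_BVEC iov_iter, so nothing ever merges adjacent pages the way
 * bio_add_page() can on the block I/O side. */
static void example_swap_read_submit(struct swap_iocb *sio,
				     struct address_space *mapping,
				     struct page *page)
{
	struct iov_iter from;

	bvec_set_page(&sio->bvec[sio->pages], page, page_size(page), 0);
	sio->len += page_size(page);
	sio->pages++;

	iov_iter_bvec(&from, ITER_DEST, sio->bvec, sio->pages, sio->len);
	/* ->swap_rw() completes through sio->iocb.ki_complete, i.e.
	 * sio_read_complete(), which walks sio->bvec[0..pages). */
	mapping->a_ops->swap_rw(&sio->iocb, &from);
}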