Message-ID: <be2a500d-f8f3-f813-cb9e-04ac1726e22d@linux.alibaba.com>
Date: Fri, 18 Mar 2022 13:41:59 +0800
From: JeffleXu <jefflexu@...ux.alibaba.com>
To: dhowells@...hat.com, linux-cachefs@...hat.com, xiang@...nel.org,
chao@...nel.org, linux-erofs@...ts.ozlabs.org,
torvalds@...ux-foundation.org, gregkh@...uxfoundation.org,
willy@...radead.org, linux-fsdevel@...r.kernel.org,
joseph.qi@...ux.alibaba.com, bo.liu@...ux.alibaba.com,
tao.peng@...ux.alibaba.com, gerry@...ux.alibaba.com,
eguan@...ux.alibaba.com, linux-kernel@...r.kernel.org,
luodaowen.backend@...edance.com
Subject: Re: [PATCH v5 21/22] erofs: implement fscache-based data readahead
On 3/17/22 1:22 PM, Gao Xiang wrote:
> On Wed, Mar 16, 2022 at 09:17:22PM +0800, Jeffle Xu wrote:
>> This patch implements fscache-based data readahead. Also registers an
>> individual bdi for each erofs instance to enable readahead.
>>
>> Signed-off-by: Jeffle Xu <jefflexu@...ux.alibaba.com>
>> ---
>> fs/erofs/fscache.c | 153 +++++++++++++++++++++++++++++++++++++++++++++
>> fs/erofs/super.c | 4 ++
>> 2 files changed, 157 insertions(+)
>>
>> diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
>> index 82c52b6e077e..913ca891deb9 100644
>> --- a/fs/erofs/fscache.c
>> +++ b/fs/erofs/fscache.c
>> @@ -10,6 +10,13 @@ struct erofs_fscache_map {
>> u64 m_llen;
>> };
>>
>> +struct erofs_fscahce_ra_ctx {
>
> typo, should be `erofs_fscache_ra_ctx'
Oops. Thanks.
>
>> + struct readahead_control *rac;
>> + struct address_space *mapping;
>> + loff_t start;
>> + size_t len, done;
>> +};
>> +
>> static struct fscache_volume *volume;
>>
>> /*
>> @@ -199,12 +206,158 @@ static int erofs_fscache_readpage(struct file *file, struct page *page)
>> return ret;
>> }
>>
>> +static inline size_t erofs_fscache_calc_len(struct erofs_fscahce_ra_ctx *ractx,
>> + struct erofs_fscache_map *fsmap)
>> +{
>> + /*
>> + * 1) For CHUNK_BASED layout, the output m_la is rounded down to the
>> + * nearest chunk boundary, and the output m_llen actually starts from
>> + * the start of the containing chunk.
>> + * 2) For other cases, the output m_la is equal to o_la.
>> + */
>> + size_t len = fsmap->m_llen - (fsmap->o_la - fsmap->m_la);
>> +
>> + return min_t(size_t, len, ractx->len - ractx->done);
>> +}
>> +
>> +static inline void erofs_fscache_unlock_pages(struct readahead_control *rac,
>> + size_t len)
>
> Can we convert them into folios in advance? It seems much more
> straightforward to convert these...
>
> Otherwise I'd have to convert them later, which seems unnecessary...
OK I will try to use folio API in the next version.
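Roughly something like the following, I think (a sketch only, not tested —
readahead_folio() already drops the page cache reference, so the explicit
put goes away):

```c
static inline void erofs_fscache_unlock_folios(struct readahead_control *rac,
					       size_t len)
{
	while (len) {
		/* readahead_folio() returns the next folio locked and
		 * releases the reference readahead_page() used to hand us */
		struct folio *folio = readahead_folio(rac);

		folio_mark_uptodate(folio);
		folio_unlock(folio);
		len -= folio_size(folio);
	}
}
```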
>
>
>> +{
>> + while (len) {
>> + struct page *page = readahead_page(rac);
>> +
>> + SetPageUptodate(page);
>> + unlock_page(page);
>> + put_page(page);
>> +
>> + len -= PAGE_SIZE;
>> + }
>> +}
>> +
>> +static int erofs_fscache_ra_hole(struct erofs_fscahce_ra_ctx *ractx,
>> + struct erofs_fscache_map *fsmap)
>> +{
>> + struct iov_iter iter;
>> + loff_t start = ractx->start + ractx->done;
>> + size_t length = erofs_fscache_calc_len(ractx, fsmap);
>> +
>> + iov_iter_xarray(&iter, READ, &ractx->mapping->i_pages, start, length);
>> + iov_iter_zero(length, &iter);
>> +
>> + erofs_fscache_unlock_pages(ractx->rac, length);
>> + return length;
>> +}
>> +
>> +static int erofs_fscache_ra_noinline(struct erofs_fscahce_ra_ctx *ractx,
>> + struct erofs_fscache_map *fsmap)
>> +{
>> + struct fscache_cookie *cookie = fsmap->m_ctx->cookie;
>> + loff_t start = ractx->start + ractx->done;
>> + size_t length = erofs_fscache_calc_len(ractx, fsmap);
>> + loff_t pstart = fsmap->m_pa + (fsmap->o_la - fsmap->m_la);
>> + int ret;
>> +
>> + ret = erofs_fscache_read_pages(cookie, ractx->mapping,
>> + start, length, pstart);
>> + if (!ret) {
>> + erofs_fscache_unlock_pages(ractx->rac, length);
>> + ret = length;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +static int erofs_fscache_ra_inline(struct erofs_fscahce_ra_ctx *ractx,
>> + struct erofs_fscache_map *fsmap)
>> +{
>
> We could fold this in, since it has only one user.
OK, and `struct erofs_fscahce_ra_ctx' won't be needed then.
--
Thanks,
Jeffle