Message-ID: <CAJMQK-isVhkhGu3=QAWaDvOd9sXgNep3ZrZ6jjoa2j+h6Uc45w@mail.gmail.com>
Date: Tue, 31 May 2022 17:51:11 +0800
From: Hsin-Yi Wang <hsinyi@...omium.org>
To: Phillip Lougher <phillip@...ashfs.org.uk>,
Matthew Wilcox <willy@...radead.org>,
Xiongwei Song <Xiongwei.Song@...driver.com>
Cc: Zheng Liang <zhengliang6@...wei.com>,
Zhang Yi <yi.zhang@...wei.com>, Hou Tao <houtao1@...wei.com>,
Miao Xie <miaoxie@...wei.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-mm @ kvack . org" <linux-mm@...ck.org>,
"squashfs-devel @ lists . sourceforge . net"
<squashfs-devel@...ts.sourceforge.net>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 3/3] squashfs: implement readahead
On Mon, May 23, 2022 at 3:00 PM Hsin-Yi Wang <hsinyi@...omium.org> wrote:
>
> Implement a readahead callback for squashfs. It reads the datablocks
> that cover the pages in the readahead request. In a few cases it will
> not mark the pages as uptodate:
> - the file end is 0.
> - zero-filled blocks.
> - the current batch of pages isn't all in the same datablock, or doesn't
>   cover a full datablock.
> - decompressor error.
> Otherwise the pages are marked as uptodate. Pages not handled here will
> be filled in by readpage later.
>
> Suggested-by: Matthew Wilcox <willy@...radead.org>
> Signed-off-by: Hsin-Yi Wang <hsinyi@...omium.org>
> Reported-by: Matthew Wilcox <willy@...radead.org>
> Reported-by: Phillip Lougher <phillip@...ashfs.org.uk>
> Reported-by: Xiongwei Song <Xiongwei.Song@...driver.com>
> ---
A kind ping on this thread. The discussion on v2 is here:
https://patchwork.kernel.org/project/linux-mm/patch/20220517082650.2005840-4-hsinyi@chromium.org/#24869037

This version mainly addresses the error handling; a standalone sketch of
the alignment and expected-size arithmetic it checks against follows below.

Thanks
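
For reference only, here is a minimal userspace sketch (not part of the
patch) of the arithmetic squashfs_readahead() uses: rounding the readahead
window out to whole datablocks and computing the byte count expected back
from the decompressor for a given block index. The file size, readahead
position and length are made-up example values, not taken from anywhere
in the patch.

/*
 * Standalone illustration of the alignment and expected-size arithmetic
 * in squashfs_readahead(), using hypothetical example values.
 */
#include <stdio.h>

int main(void)
{
	unsigned int block_log = 17;			/* 128 KiB datablocks */
	unsigned long long block_size = 1ULL << block_log;
	unsigned long long mask = block_size - 1;

	unsigned long long i_size = 300000;		/* hypothetical file size */
	unsigned long long pos = 140000;		/* hypothetical readahead_pos() */
	unsigned long long ra_len = 65536;		/* hypothetical readahead_length() */

	/* Round the readahead window out to whole datablocks. */
	unsigned long long start = pos & ~mask;
	unsigned long long expanded_len = ((ra_len + pos - start) | mask) + 1;

	/* Bytes expected back from the decompressor for this block index. */
	unsigned long long file_end = i_size >> block_log;
	unsigned long long index = pos >> block_log;
	unsigned long long expected = (index == file_end) ?
			(i_size & (block_size - 1)) : block_size;

	printf("start=%llu expanded_len=%llu expected=%llu\n",
	       start, expanded_len, expected);
	return 0;
}
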
> v2->v3:
> - Add a check on the decompressed block size.
> - Zero-fill the trailing bytes when the last page is not a full page.
>
> v2: https://lore.kernel.org/lkml/20220517082650.2005840-4-hsinyi@chromium.org/
> v1: https://lore.kernel.org/lkml/20220516105100.1412740-3-hsinyi@chromium.org/
> ---
> fs/squashfs/file.c | 91 +++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 90 insertions(+), 1 deletion(-)
>
> diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c
> index a8e495d8eb86..c311fc685fe4 100644
> --- a/fs/squashfs/file.c
> +++ b/fs/squashfs/file.c
> @@ -39,6 +39,7 @@
> #include "squashfs_fs_sb.h"
> #include "squashfs_fs_i.h"
> #include "squashfs.h"
> +#include "page_actor.h"
>
> /*
> * Locate cache slot in range [offset, index] for specified inode. If
> @@ -495,7 +496,95 @@ static int squashfs_read_folio(struct file *file, struct folio *folio)
> return 0;
> }
>
> +static void squashfs_readahead(struct readahead_control *ractl)
> +{
> + struct inode *inode = ractl->mapping->host;
> + struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
> + size_t mask = (1UL << msblk->block_log) - 1;
> + size_t shift = msblk->block_log - PAGE_SHIFT;
> + loff_t start = readahead_pos(ractl) &~ mask;
> + size_t len = readahead_length(ractl) + readahead_pos(ractl) - start;
> + struct squashfs_page_actor *actor;
> + unsigned int nr_pages = 0;
> + struct page **pages;
> + u64 block = 0;
> + int bsize, res, i, index, bytes, expected;
> + int file_end = i_size_read(inode) >> msblk->block_log;
> + unsigned int max_pages = 1UL << shift;
> + void *pageaddr;
> +
> + readahead_expand(ractl, start, (len | mask) + 1);
> +
> + if (file_end == 0)
> + return;
> +
> + pages = kmalloc_array(max_pages, sizeof(void *), GFP_KERNEL);
> + if (!pages)
> + return;
> +
> + actor = squashfs_page_actor_init_special(pages, max_pages, 0);
> + if (!actor)
> + goto out;
> +
> + for (;;) {
> + nr_pages = __readahead_batch(ractl, pages, max_pages);
> + if (!nr_pages)
> + break;
> +
> + if (readahead_pos(ractl) >= i_size_read(inode) ||
> + nr_pages < max_pages)
> + goto skip_pages;
> +
> + index = pages[0]->index >> shift;
> + if ((pages[nr_pages - 1]->index >> shift) != index)
> + goto skip_pages;
> +
> + expected = index == file_end ?
> + (i_size_read(inode) & (msblk->block_size - 1)) :
> + msblk->block_size;
> +
> + bsize = read_blocklist(inode, index, &block);
> + if (bsize == 0)
> + goto skip_pages;
> +
> + res = squashfs_read_data(inode->i_sb, block, bsize, NULL,
> + actor);
> +
> + if (res == expected) {
> + /* Last page may have trailing bytes not filled */
> + bytes = res % PAGE_SIZE;
> + if (bytes) {
> + pageaddr = kmap_atomic(pages[nr_pages - 1]);
> + memset(pageaddr + bytes, 0, PAGE_SIZE - bytes);
> + kunmap_atomic(pageaddr);
> + }
> +
> + for (i = 0; i < nr_pages; i++)
> + SetPageUptodate(pages[i]);
> + }
> +
> + for (i = 0; i < nr_pages; i++) {
> + unlock_page(pages[i]);
> + put_page(pages[i]);
> + }
> + }
> +
> + kfree(actor);
> + kfree(pages);
> + return;
> +
> +skip_pages:
> + for (i = 0; i < nr_pages; i++) {
> + unlock_page(pages[i]);
> + put_page(pages[i]);
> + }
> +
> + kfree(actor);
> +out:
> + kfree(pages);
> +}
>
> const struct address_space_operations squashfs_aops = {
> - .read_folio = squashfs_read_folio
> + .read_folio = squashfs_read_folio,
> + .readahead = squashfs_readahead
> };
> --
> 2.36.1.124.g0e6072fb45-goog
>