Message-ID: <c017d992-2746-045b-47c8-c5b9c3025f1a@samsung.com>
Date: Fri, 3 Jun 2022 14:54:21 +0200
From: Marek Szyprowski <m.szyprowski@...sung.com>
To: Hsin-Yi Wang <hsinyi@...omium.org>,
Phillip Lougher <phillip@...ashfs.org.uk>,
Matthew Wilcox <willy@...radead.org>,
Xiongwei Song <Xiongwei.Song@...driver.com>
Cc: Zheng Liang <zhengliang6@...wei.com>,
Zhang Yi <yi.zhang@...wei.com>, Hou Tao <houtao1@...wei.com>,
Miao Xie <miaoxie@...wei.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-mm @ kvack . org" <linux-mm@...ck.org>,
"squashfs-devel @ lists . sourceforge . net"
<squashfs-devel@...ts.sourceforge.net>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 3/3] squashfs: implement readahead
Hi,
On 01.06.2022 12:39, Hsin-Yi Wang wrote:
> Implement the readahead callback for squashfs. It will read the
> datablocks which cover the pages in the readahead request. In a few
> cases it will not mark the pages as uptodate, including:
> - the file end is 0.
> - zero-filled blocks.
> - the current batch of pages isn't in the same datablock, or there
>   aren't enough pages for a datablock.
> - decompressor error.
> Otherwise the pages will be marked as uptodate. The unhandled pages
> will be updated by readpage later.
>
> Suggested-by: Matthew Wilcox <willy@...radead.org>
> Signed-off-by: Hsin-Yi Wang <hsinyi@...omium.org>
> Reported-by: Matthew Wilcox <willy@...radead.org>
> Reported-by: Phillip Lougher <phillip@...ashfs.org.uk>
> Reported-by: Xiongwei Song <Xiongwei.Song@...driver.com>
> ---
This patch landed recently in linux-next as commit 95f7a26191de
("squashfs: implement readahead"). I've noticed that it causes serious
issues on my test systems (various ARM 32-bit and 64-bit boards). The
easiest way to observe it is the udev timeout 'waiting for /dev to be
fully populated' and a prolonged boot time. I'm using squashfs for
deploying kernel modules via initrd. Reverting aeefca9dfae7 &
95f7a26191de on top of next-20220603 fixes the issue.

Let me know how I can help debug this issue. There is no hurry though,
because I will be on holidays next week.
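For what it's worth, a minimal userspace sketch (not from the patch; the
mount path below is just a placeholder) that should exercise the new
readahead path on a file from a squashfs mount, e.g. while tracing or
bisecting:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        /* Placeholder path: any reasonably large file on the squashfs mount. */
        const char *path = "/mnt/squashfs/some-large-file";
        char buf[1 << 16];
        int fd = open(path, O_RDONLY);

        if (fd < 0) {
                perror("open");
                return EXIT_FAILURE;
        }

        /* Hint the kernel to read the whole file ahead of time... */
        posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);

        /* ...and read it sequentially, which also drives ->readahead. */
        while (read(fd, buf, sizeof(buf)) > 0)
                ;

        close(fd);
        return 0;
}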
> v3->v4: Fix a few variable types and their locations.
> v3: https://lore.kernel.org/lkml/20220523065909.883444-4-hsinyi@chromium.org/
> v2: https://lore.kernel.org/lkml/20220517082650.2005840-4-hsinyi@chromium.org/
> v1: https://lore.kernel.org/lkml/20220516105100.1412740-3-hsinyi@chromium.org/
> ---
> fs/squashfs/file.c | 97 +++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 96 insertions(+), 1 deletion(-)
>
> diff --git a/fs/squashfs/file.c b/fs/squashfs/file.c
> index a8e495d8eb86..df7ad4b3e99c 100644
> --- a/fs/squashfs/file.c
> +++ b/fs/squashfs/file.c
> @@ -39,6 +39,7 @@
>  #include "squashfs_fs_sb.h"
>  #include "squashfs_fs_i.h"
>  #include "squashfs.h"
> +#include "page_actor.h"
>
>  /*
>   * Locate cache slot in range [offset, index] for specified inode. If
> @@ -495,7 +496,101 @@ static int squashfs_read_folio(struct file *file, struct folio *folio)
>          return 0;
>  }
>
> +static void squashfs_readahead(struct readahead_control *ractl)
> +{
> +        struct inode *inode = ractl->mapping->host;
> +        struct squashfs_sb_info *msblk = inode->i_sb->s_fs_info;
> +        size_t mask = (1UL << msblk->block_log) - 1;
> +        unsigned short shift = msblk->block_log - PAGE_SHIFT;
> +        loff_t start = readahead_pos(ractl) &~ mask;
> +        size_t len = readahead_length(ractl) + readahead_pos(ractl) - start;
> +        struct squashfs_page_actor *actor;
> +        unsigned int nr_pages = 0;
> +        struct page **pages;
> +        int i, file_end = i_size_read(inode) >> msblk->block_log;
> +        unsigned int max_pages = 1UL << shift;
> +
> +        readahead_expand(ractl, start, (len | mask) + 1);
> +
> +        if (file_end == 0)
> +                return;
> +
> +        pages = kmalloc_array(max_pages, sizeof(void *), GFP_KERNEL);
> +        if (!pages)
> +                return;
> +
> +        actor = squashfs_page_actor_init_special(pages, max_pages, 0);
> +        if (!actor)
> +                goto out;
> +
> +        for (;;) {
> +                pgoff_t index;
> +                int res, bsize;
> +                u64 block = 0;
> +                unsigned int expected;
> +
> +                nr_pages = __readahead_batch(ractl, pages, max_pages);
> +                if (!nr_pages)
> +                        break;
> +
> +                if (readahead_pos(ractl) >= i_size_read(inode) ||
> +                    nr_pages < max_pages)
> +                        goto skip_pages;
> +
> +                index = pages[0]->index >> shift;
> +                if ((pages[nr_pages - 1]->index >> shift) != index)
> +                        goto skip_pages;
> +
> +                expected = index == file_end ?
> +                           (i_size_read(inode) & (msblk->block_size - 1)) :
> +                            msblk->block_size;
> +
> +                bsize = read_blocklist(inode, index, &block);
> +                if (bsize == 0)
> +                        goto skip_pages;
> +
> +                res = squashfs_read_data(inode->i_sb, block, bsize, NULL,
> +                                         actor);
> +
> +                if (res == expected) {
> +                        int bytes;
> +
> +                        /* Last page may have trailing bytes not filled */
> +                        bytes = res % PAGE_SIZE;
> +                        if (bytes) {
> +                                void *pageaddr;
> +
> +                                pageaddr = kmap_atomic(pages[nr_pages - 1]);
> +                                memset(pageaddr + bytes, 0, PAGE_SIZE - bytes);
> +                                kunmap_atomic(pageaddr);
> +                        }
> +
> +                        for (i = 0; i < nr_pages; i++)
> +                                SetPageUptodate(pages[i]);
> +                }
> +
> +                for (i = 0; i < nr_pages; i++) {
> +                        unlock_page(pages[i]);
> +                        put_page(pages[i]);
> +                }
> +        }
> +
> +        kfree(actor);
> +        kfree(pages);
> +        return;
> +
> +skip_pages:
> +        for (i = 0; i < nr_pages; i++) {
> +                unlock_page(pages[i]);
> +                put_page(pages[i]);
> +        }
> +
> +        kfree(actor);
> +out:
> +        kfree(pages);
> +}
>
>  const struct address_space_operations squashfs_aops = {
> -        .read_folio = squashfs_read_folio
> +        .read_folio = squashfs_read_folio,
> +        .readahead = squashfs_readahead
>  };
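For reference, a small userspace sketch of the shift/mask arithmetic the
function above relies on (a block_log of 17, i.e. 128K datablocks, and
4K pages are assumed here purely for illustration); with those values a
batch is only handled in one go when all 32 pages fall into the same
datablock:

#include <stdio.h>

int main(void)
{
        unsigned int block_log = 17;            /* 128K datablocks (assumed) */
        unsigned int page_shift = 12;           /* 4K pages */
        unsigned int shift = block_log - page_shift;
        unsigned long mask = (1UL << block_log) - 1;
        unsigned int max_pages = 1U << shift;

        /* How many page-cache pages one datablock covers. */
        printf("pages per datablock: %u\n", max_pages);             /* 32 */
        /* Which datablock a given page-cache index maps to. */
        printf("datablock of page index 40: %lu\n", 40UL >> shift); /* 1 */
        /* Block-aligned start, as used for readahead_expand(). */
        printf("aligned start of offset 0x25000: 0x%lx\n",
               0x25000UL & ~mask);                                  /* 0x20000 */
        return 0;
}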
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland