Message-ID: <CAHbLzkp9UQ9bb4gMN-BtrSHY3uet+nSxN-wMaObrtp5yhSN5Sw@mail.gmail.com>
Date: Fri, 30 Jul 2021 14:50:58 -0700
From: Yang Shi <shy828301@...il.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Shakeel Butt <shakeelb@...gle.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Michal Hocko <mhocko@...e.com>,
Rik van Riel <riel@...riel.com>,
Christoph Hellwig <hch@...radead.org>,
Matthew Wilcox <willy@...radead.org>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Alexey Gladkov <legion@...nel.org>,
Chris Wilson <chris@...is-wilson.co.uk>,
Matthew Auld <matthew.auld@...el.com>,
Linux FS-devel Mailing List <linux-fsdevel@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-api@...r.kernel.org, Linux MM <linux-mm@...ck.org>
Subject: Re: [PATCH 03/16] huge tmpfs: remove shrinklist addition from shmem_setattr()
On Fri, Jul 30, 2021 at 12:31 AM Hugh Dickins <hughd@...gle.com> wrote:
>
> There's a block of code in shmem_setattr() to add the inode to
> shmem_unused_huge_shrink()'s shrinklist when lowering i_size: it dates
> from before 5.7 changed truncation to do split_huge_page() for itself,
> and should have been removed at that time.
>
> I am over-stating that: split_huge_page() can fail (notably if there's
> an extra reference to the page at that time), so there might be value in
> retrying. But there were already retries as truncation worked through
> the tails, and this addition risks repeating unsuccessful retries
> indefinitely: I'd rather remove it now, and work on reducing the
> chance of split_huge_page() failures separately, if we need to.
Yes, agreed.

Reviewed-by: Yang Shi <shy828301@...il.com>
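
For anyone reading along, the failure mode being described is that
split_huge_page() bails out when the compound page carries a reference
it cannot account for: roughly, the total refcount has to equal the
known mappings plus the page cache pins plus the one base reference
(the real check is can_split_huge_page() in mm/huge_memory.c). A toy
userspace model of that accounting, not kernel source; the struct and
helper names below are made up purely for illustration:

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for a compound page's reference bookkeeping. */
struct toy_page {
	int refcount;	/* total references currently held */
	int mapcount;	/* references from page table mappings */
	int cache_pins;	/* references from page cache slots
			 * (512 for a PMD-sized page with 4K base pages) */
};

/* Split is only allowed when every reference is accounted for. */
static bool toy_can_split(const struct toy_page *p)
{
	return p->refcount == p->mapcount + p->cache_pins + 1;
}

int main(void)
{
	struct toy_page page = { .refcount = 513, .mapcount = 0, .cache_pins = 512 };

	printf("no extra ref:  split %s\n", toy_can_split(&page) ? "succeeds" : "fails");
	page.refcount++;	/* e.g. a transient reference taken by a concurrent lookup */
	printf("one extra ref: split %s\n", toy_can_split(&page) ? "succeeds" : "fails");
	return 0;
}

A single short-lived reference is enough to fail the split, which is
why retrying can help, but also why re-queueing the inode to the
shrinklist on every i_size reduction could keep retrying indefinitely.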
>
> Fixes: 71725ed10c40 ("mm: huge tmpfs: try to split_huge_page() when punching hole")
> Signed-off-by: Hugh Dickins <hughd@...gle.com>
> ---
> mm/shmem.c | 19 -------------------
> 1 file changed, 19 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 24c9da6b41c2..ce3ccaac54d6 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1061,7 +1061,6 @@ static int shmem_setattr(struct user_namespace *mnt_userns,
> {
> struct inode *inode = d_inode(dentry);
> struct shmem_inode_info *info = SHMEM_I(inode);
> - struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
> int error;
>
> error = setattr_prepare(&init_user_ns, dentry, attr);
> @@ -1097,24 +1096,6 @@ static int shmem_setattr(struct user_namespace *mnt_userns,
> if (oldsize > holebegin)
> unmap_mapping_range(inode->i_mapping,
> holebegin, 0, 1);
> -
> - /*
> - * Part of the huge page can be beyond i_size: subject
> - * to shrink under memory pressure.
> - */
> - if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
> - spin_lock(&sbinfo->shrinklist_lock);
> - /*
> - * _careful to defend against unlocked access to
> - * ->shrink_list in shmem_unused_huge_shrink()
> - */
> - if (list_empty_careful(&info->shrinklist)) {
> - list_add_tail(&info->shrinklist,
> - &sbinfo->shrinklist);
> - sbinfo->shrinklist_len++;
> - }
> - spin_unlock(&sbinfo->shrinklist_lock);
> - }
> }
> }
>
> --
> 2.26.2
>
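On the removed hunk itself: the "careful" comment refers to
shmem_unused_huge_shrink() dropping entries from the shrinklist
without holding shrinklist_lock, so the producer side checks both
link pointers before re-adding. A rough userspace sketch of the shape
of list_empty_careful() (simplified, without the kernel's
memory-ordering annotations):

#include <stdbool.h>

struct list_head {
	struct list_head *next, *prev;
};

/*
 * Simplified sketch of the kernel helper: report "empty" (i.e. the
 * entry is not on any list) only when both links point back at the
 * head, so an entry observed mid-update by another CPU is not
 * mistaken for an empty one.
 */
static bool list_empty_careful(const struct list_head *head)
{
	const struct list_head *next = head->next;

	return (next == head) && (next == head->prev);
}

In the removed block that check gated the list_add_tail(), so an
inode already queued on the shrinklist was not added twice.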