Message-ID: <749bcf72-efbd-d6c-db30-e9ff98242390@google.com>
Date: Thu, 5 Aug 2021 22:43:37 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Yang Shi <shy828301@...il.com>
cc: Hugh Dickins <hughd@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Shakeel Butt <shakeelb@...gle.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Michal Hocko <mhocko@...e.com>,
Rik van Riel <riel@...riel.com>,
Christoph Hellwig <hch@...radead.org>,
Matthew Wilcox <willy@...radead.org>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Alexey Gladkov <legion@...nel.org>,
Chris Wilson <chris@...is-wilson.co.uk>,
Matthew Auld <matthew.auld@...el.com>,
Linux FS-devel Mailing List <linux-fsdevel@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-api@...r.kernel.org, Linux MM <linux-mm@...ck.org>
Subject: Re: [PATCH 06/16] huge tmpfs: shmem_is_huge(vma, inode, index)
On Thu, 5 Aug 2021, Yang Shi wrote:
>
> On rereading the code, I think you are correct. Both cases do work
> correctly without leaking. And the !CONFIG_NUMA case may carry the
> huge page indefinitely.
>
> I think that is because khugepaged may collapse memory for another
> NUMA node in the next loop, so it doesn't make much sense to carry
> the huge page over; but it can be an optimization for the
> !CONFIG_NUMA case.
Yes, that is its intention.
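
For anyone following along without the source open, the divergence
looks roughly like the sketch below: a condensed paraphrase of the two
khugepaged_prealloc_page() variants in mm/khugepaged.c, with the helper
names as in the kernel but the bodies much simplified, so do not read
it as the literal code:

    /* Condensed paraphrase of mm/khugepaged.c, not the literal source */
    #ifdef CONFIG_NUMA
    /* NUMA=y: the right node is only known after the next scan, so a
     * page left over from a failed collapse is dropped here, and a
     * fresh one allocated on the chosen node later. */
    static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
    {
            if (*hpage && !IS_ERR(*hpage)) {
                    put_page(*hpage);
                    *hpage = NULL;
            }
            return true;
    }
    #else
    /* NUMA=n: any node will do, so a page left over from a failed
     * collapse is carried to the next attempt instead of freed, and
     * may be held across the sleeps between scans. */
    static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
    {
            if (!*hpage)
                    *hpage = khugepaged_alloc_hugepage(wait);
            return *hpage != NULL;
    }
    #endif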
>
> However, as I mentioned in an earlier email, the new pcp
> implementation can cache THPs now, so we might not need to keep this
> convoluted logic anymore: just free the page if the collapse fails,
> then re-allocate a THP. The carried THP might improve the success
> rate a little, but I doubt it would be noticeable, and it may not be
> worth the extra complexity at all.
It would be great if the new pcp implementation is good enough to
get rid of khugepaged's confusing NUMA=y/NUMA=n differences; and all
the *hpage stuff too, I hope. That would be a welcome cleanup.
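
If it does work out, the two variants above could in principle shrink
to one trivial one; purely hypothetical, not a patch:

    /* Hypothetical simplification, not a patch: with THPs cached on
     * the per-cpu lists, both configs could just drop any leftover
     * page and re-allocate on the next attempt. */
    static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
    {
            if (*hpage) {
                    put_page(*hpage);  /* likely returns to a pcp list */
                    *hpage = NULL;
            }
            return true;
    }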
> > > Collapse failure is not uncommon and leaking huge pages gets noticed.
After writing that, I realized that I'm almost always testing a NUMA=y
kernel (though on non-NUMA machines), and seldom try the NUMA=n build.
So I did just that, and verified that there is indeed no leak; but was
surprised, when comparing vmstats, that the NUMA=n run had done 5 times
as much thp_collapse_alloc as the NUMA=y run. I've merely made a note
to look into that one day: maybe it was just a one-off oddity, or maybe
the incrementing of the stat is wrong down one path or the other.
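
For anyone wanting to repeat the comparison: the counters of interest
are thp_collapse_alloc and thp_collapse_alloc_failed in /proc/vmstat,
read before and after a run by something as trivial as this:

    #include <stdio.h>
    #include <string.h>

    /* Dump the thp_collapse_alloc* counters from /proc/vmstat */
    int main(void)
    {
            char line[128];
            FILE *f = fopen("/proc/vmstat", "r");

            if (!f) {
                    perror("/proc/vmstat");
                    return 1;
            }
            while (fgets(line, sizeof(line), f))
                    if (!strncmp(line, "thp_collapse_alloc", 18))
                            fputs(line, stdout);
            fclose(f);
            return 0;
    }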
Hugh