Message-ID: <CAMZfGtXksc0Ugasqn4czpwHunsGR5nfxVO_iLsrLrnYMsgieYw@mail.gmail.com>
Date: Thu, 14 Jan 2021 21:05:06 +0800
From: Muchun Song <songmuchun@...edance.com>
To: Oscar Salvador <osalvador@...e.de>
Cc: Mike Kravetz <mike.kravetz@...cle.com>,
Jonathan Corbet <corbet@....net>,
Thomas Gleixner <tglx@...utronix.de>, mingo@...hat.com,
bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org,
Peter Zijlstra <peterz@...radead.org>, viro@...iv.linux.org.uk,
Andrew Morton <akpm@...ux-foundation.org>, paulmck@...nel.org,
mchehab+huawei@...nel.org, pawan.kumar.gupta@...ux.intel.com,
Randy Dunlap <rdunlap@...radead.org>, oneukum@...e.com,
anshuman.khandual@....com, jroedel@...e.de,
Mina Almasry <almasrymina@...gle.com>,
David Rientjes <rientjes@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Michal Hocko <mhocko@...e.com>,
"Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>,
David Hildenbrand <david@...hat.com>,
HORIGUCHI NAOYA(堀口 直也)
<naoya.horiguchi@....com>,
Xiongchun duan <duanxiongchun@...edance.com>,
linux-doc@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [External] Re: [PATCH v12 04/13] mm/hugetlb: Free the vmemmap
pages associated with each HugeTLB page
On Thu, Jan 14, 2021 at 7:52 PM Oscar Salvador <osalvador@...e.de> wrote:
>
> On Thu, Jan 14, 2021 at 06:54:30PM +0800, Muchun Song wrote:
> > I think this approach may be suitable only for generic huge pages,
> > so we can implement it for huge pages only.
> >
> > Hi Oscar,
> >
> > What's your opinion about this?
>
> I tried something like:
>
> static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
>                               unsigned long end,
>                               struct vmemmap_remap_walk *walk)
> {
>         pte_t *pte;
>
>         pte = pte_offset_kernel(pmd, addr);
>
>         if (!walk->reuse_page) {
>                 BUG_ON(pte_none(*pte));
>
>                 walk->reuse_page = pte_page(*pte++);
>                 addr = walk->remap_start;
>         }
>
>         for (; addr != end; addr += PAGE_SIZE, pte++) {
>                 BUG_ON(pte_none(*pte));
>
>                 walk->remap_pte(pte, addr, walk);
>         }
> }
>
> void vmemmap_remap_free(unsigned long start, unsigned long end,
>                         unsigned long reuse)
> {
>         LIST_HEAD(vmemmap_pages);
>         struct vmemmap_remap_walk walk = {
>                 .remap_pte = vmemmap_remap_pte,
>                 .reuse_addr = reuse,
>                 .remap_start = start,
>                 .vmemmap_pages = &vmemmap_pages,
>         };
>
>         BUG_ON(start != reuse + PAGE_SIZE);
>
>         vmemmap_remap_range(reuse, end, &walk);
>         free_vmemmap_page_list(&vmemmap_pages);
> }
>
> but it might overcomplicate things and I am not sure it is any better.
> So I am fine with keeping it as is.
> Should another user come in the future, we can always revisit.
> Maybe just add a little comment in vmemmap_pte_range() explaining why we
> do "+= PAGE_SIZE" for the address, and I would like to see a comment in
> vmemmap_remap_free on why the BUG_ON is there and, more importantly, what
> it is checking.
OK, I will add some comments to explain why we do this in
vmemmap_pte_range and vmemmap_remap_free. Thanks.
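
Something like the below is what I have in mind. It is just a rough
sketch on top of your snippet above, so the exact wording (and where the
skip of the reuse address happens in the patch) may end up different:

static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
                              unsigned long end,
                              struct vmemmap_remap_walk *walk)
{
        pte_t *pte;

        pte = pte_offset_kernel(pmd, addr);

        if (!walk->reuse_page) {
                BUG_ON(pte_none(*pte));

                /*
                 * The first pte of the walked range maps the page to be
                 * reused. Record it and step past it, since only the
                 * addresses from the remap start onwards are remapped.
                 */
                walk->reuse_page = pte_page(*pte++);
                addr = walk->remap_start;
        }

        /*
         * We are at the PTE level, so each pte maps exactly one base
         * page of the vmemmap; that is why addr advances by PAGE_SIZE
         * together with the pte.
         */
        for (; addr != end; addr += PAGE_SIZE, pte++) {
                BUG_ON(pte_none(*pte));

                walk->remap_pte(pte, addr, walk);
        }
}

void vmemmap_remap_free(unsigned long start, unsigned long end,
                        unsigned long reuse)
{
        LIST_HEAD(vmemmap_pages);
        struct vmemmap_remap_walk walk = {
                .remap_pte = vmemmap_remap_pte,
                .reuse_addr = reuse,
                .remap_start = start,
                .vmemmap_pages = &vmemmap_pages,
        };

        /*
         * The page at @reuse is kept and everything in [@start, @end)
         * is remapped to it, so the reuse page has to be the page
         * immediately in front of the remapped range. That way the
         * walk starts at @reuse and records that page first.
         */
        BUG_ON(start != reuse + PAGE_SIZE);

        vmemmap_remap_range(reuse, end, &walk);
        free_vmemmap_page_list(&vmemmap_pages);
}

Both comments describe the same invariant: @reuse is exactly one page
below @start, which is what the BUG_ON enforces.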
>
> --
> Oscar Salvador
> SUSE L3