Message-ID: <20201110094830.GA25373@linux>
Date: Tue, 10 Nov 2020 10:48:34 +0100
From: Oscar Salvador <osalvador@...e.de>
To: Muchun Song <songmuchun@...edance.com>
Cc: Jonathan Corbet <corbet@....net>,
Mike Kravetz <mike.kravetz@...cle.com>,
Thomas Gleixner <tglx@...utronix.de>, mingo@...hat.com,
bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org,
Peter Zijlstra <peterz@...radead.org>, viro@...iv.linux.org.uk,
Andrew Morton <akpm@...ux-foundation.org>, paulmck@...nel.org,
mchehab+huawei@...nel.org, pawan.kumar.gupta@...ux.intel.com,
Randy Dunlap <rdunlap@...radead.org>, oneukum@...e.com,
anshuman.khandual@....com, jroedel@...e.de,
Mina Almasry <almasrymina@...gle.com>,
David Rientjes <rientjes@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Michal Hocko <mhocko@...e.com>,
Xiongchun duan <duanxiongchun@...edance.com>,
linux-doc@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [External] Re: [PATCH v3 09/21] mm/hugetlb: Free the vmemmap
pages associated with each hugetlb page
On Tue, Nov 10, 2020 at 02:40:54PM +0800, Muchun Song wrote:
> Only the first HugeTLB page should split the PMD to PTE. The other 63
> HugeTLB pages do not need to split. Here I want to make sure we are
> the first.
I think the terminology is losing me here.
Say you allocate a 2MB HugeTLB page at ffffea0004100000.
The vmemmap range that represents it is ffffea0004000000 - ffffea0004200000.
That is a 2MB chunk PMD-mapped.
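To spell out the arithmetic I am assuming here (x86_64, 4KB base pages,
64-byte struct page; the variable names below are only for illustration):

	/*
	 * Illustration only: a 2MB HugeTLB page has 512 base pages, so it
	 * needs 512 * 64 = 32KB of vmemmap. One PMD-mapped 2MB vmemmap
	 * section therefore covers the memmap of 2MB / 32KB = 64 such
	 * HugeTLB pages.
	 */
	unsigned long vmemmap_addr  = (unsigned long)head;	 /* ffffea0004100000 */
	unsigned long section_start = vmemmap_addr & PMD_MASK;	 /* ffffea0004000000 */
	unsigned long section_end   = section_start + PMD_SIZE; /* ffffea0004200000 */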
So, in order to free some of those vmemmap pages, we need to break down
that area, remapping it to PTE-based.
I know what you mean, but we are not really splitting hugetlb pages, but
the memmap range they are represented with.
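Something like the below is what I picture for that break-down; just a
sketch, the helper name and the details are mine, not taken from your
series:

	/* Sketch: remap one PMD-mapped vmemmap section via a PTE table. */
	static int split_vmemmap_pmd(pmd_t *pmd, unsigned long start)
	{
		struct page *page = pmd_page(*pmd);
		unsigned long addr = start;
		pte_t *pgtable;
		pmd_t __pmd;
		int i;

		pgtable = pte_alloc_one_kernel(&init_mm);
		if (!pgtable)
			return -ENOMEM;

		pmd_populate_kernel(&init_mm, &__pmd, pgtable);

		for (i = 0; i < PMD_SIZE / PAGE_SIZE; i++, addr += PAGE_SIZE) {
			pte_t *pte = pte_offset_kernel(&__pmd, addr);

			/* Each PTE maps one page of the former huge mapping. */
			set_pte_at(&init_mm, addr, pte,
				   mk_pte(page + i, PAGE_KERNEL));
		}

		/* Make the PTEs visible before installing the table. */
		smp_wmb();
		pmd_populate_kernel(&init_mm, pmd, pgtable);
		return 0;
	}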
About:
"Only the first HugeTLB page should split the PMD to PTE. The other 63
HugeTLB pages
do not need to split. Here I want to make sure we are the first."
That only refers to gigantic pages, right?
> > > +static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> > > +{
> > > +	pmd_t *pmd;
> > > +	spinlock_t *ptl;
> > > +	LIST_HEAD(free_pages);
> > > +
> > > +	if (!free_vmemmap_pages_per_hpage(h))
> > > +		return;
> > > +
> > > +	pmd = vmemmap_to_pmd(head);
> > > +	ptl = vmemmap_pmd_lock(pmd);
> > > +	if (vmemmap_pmd_huge(pmd)) {
> > > +		VM_BUG_ON(!pgtable_pages_to_prealloc_per_hpage(h));
> >
> > I think that checking for free_vmemmap_pages_per_hpage is enough.
> > In the end, pgtable_pages_to_prealloc_per_hpage uses free_vmemmap_pages_per_hpage.
>
> The free_vmemmap_pages_per_hpage is not enough. See the comments above.
My comment was about the VM_BUG_ON.
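That is, assuming the two helpers relate roughly like below (the bodies
are my guess, purely to make the point), the early
"if (!free_vmemmap_pages_per_hpage(h)) return;" already guarantees the
VM_BUG_ON() condition can never be true:

	/* Guessed bodies, for illustration only. */
	static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
	{
		return h->nr_free_vmemmap_pages;	/* 0 when the feature is off */
	}

	static inline unsigned int pgtable_pages_to_prealloc_per_hpage(struct hstate *h)
	{
		/* No vmemmap pages to free -> no page tables to preallocate. */
		return free_vmemmap_pages_per_hpage(h) ? 1 : 0;	/* "1" is a placeholder */
	}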
--
Oscar Salvador
SUSE L3