Message-ID: <b7c16e3f-d906-1a11-dbd5-dc4199d70821@oracle.com>
Date: Thu, 12 Nov 2020 16:35:58 -0800
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Muchun Song <songmuchun@...edance.com>
Cc: Jonathan Corbet <corbet@....net>,
Thomas Gleixner <tglx@...utronix.de>, mingo@...hat.com,
bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org,
Peter Zijlstra <peterz@...radead.org>, viro@...iv.linux.org.uk,
Andrew Morton <akpm@...ux-foundation.org>, paulmck@...nel.org,
mchehab+huawei@...nel.org, pawan.kumar.gupta@...ux.intel.com,
Randy Dunlap <rdunlap@...radead.org>, oneukum@...e.com,
anshuman.khandual@....com, jroedel@...e.de,
Mina Almasry <almasrymina@...gle.com>,
David Rientjes <rientjes@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Oscar Salvador <osalvador@...e.de>,
Michal Hocko <mhocko@...e.com>,
Xiongchun duan <duanxiongchun@...edance.com>,
linux-doc@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>
Subject: Re: [External] Re: [PATCH v3 05/21] mm/hugetlb: Introduce pgtable
allocation/freeing helpers
On 11/10/20 7:41 PM, Muchun Song wrote:
> On Wed, Nov 11, 2020 at 8:47 AM Mike Kravetz <mike.kravetz@...cle.com> wrote:
>>
>> On 11/8/20 6:10 AM, Muchun Song wrote:
Unless I am reading the code incorrectly, it does not appear page->lru (of the
huge page) is being used for this purpose. Is that correct?
>>
>> If it is correct, would using page->lru of the huge page make this code
>> simpler? I am just missing the reason why you are using
>> page_huge_pte(page)->lru.
>
> For 1GB HugeTLB pages, we should pre-allocate more than one page
> table. So I use a linked list. The page_huge_pte(page) is the list head.
> Because the page->lru shares storage with page->pmd_huge_pte.
Sorry, but I do not understand the statement "page->lru shares storage with
page->pmd_huge_pte". Are you saying they are both in the head struct page of
the huge page?
Here is what I was suggesting. If we just use page->lru for the list
then vmemmap_pgtable_prealloc() could be coded like the following:
static int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
{
	struct page *pte_page, *t_page;
	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);

	if (!nr)
		return 0;

	/* Store preallocated pages on huge page lru list */
	INIT_LIST_HEAD(&page->lru);

	while (nr--) {
		pte_t *pte_p;

		pte_p = pte_alloc_one_kernel(&init_mm);
		if (!pte_p)
			goto out;
		list_add(&virt_to_page(pte_p)->lru, &page->lru);
	}

	return 0;
out:
	list_for_each_entry_safe(pte_page, t_page, &page->lru, lru)
		pte_free_kernel(&init_mm, page_to_virt(pte_page));
	return -ENOMEM;
}
By doing this we could eliminate the routines:
	vmemmap_pgtable_init()
	vmemmap_pgtable_deposit()
	vmemmap_pgtable_withdraw()
and simply use the list manipulation routines.
To me, that looks simpler than the proposed code in this patch.
--
Mike Kravetz