Date:   Thu, 19 Nov 2020 15:37:25 -0800
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     Muchun Song <songmuchun@...edance.com>, corbet@....net,
        tglx@...utronix.de, mingo@...hat.com, bp@...en8.de, x86@...nel.org,
        hpa@...or.com, dave.hansen@...ux.intel.com, luto@...nel.org,
        peterz@...radead.org, viro@...iv.linux.org.uk,
        akpm@...ux-foundation.org, paulmck@...nel.org,
        mchehab+huawei@...nel.org, pawan.kumar.gupta@...ux.intel.com,
        rdunlap@...radead.org, oneukum@...e.com, anshuman.khandual@....com,
        jroedel@...e.de, almasrymina@...gle.com, rientjes@...gle.com,
        willy@...radead.org, osalvador@...e.de, mhocko@...e.com
Cc:     duanxiongchun@...edance.com, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing
 helpers

On 11/13/20 2:59 AM, Muchun Song wrote:
> On x86_64, vmemmap is always PMD mapped if the machine has hugepages
> support and if we have 2MB contiguos pages and PMD aligned. If we want
                             contiguous              alignment
> to free the unused vmemmap pages, we have to split the huge pmd firstly.
> So we should pre-allocate pgtable to split PMD to PTE.
> 
> Signed-off-by: Muchun Song <songmuchun@...edance.com>
> ---
>  mm/hugetlb_vmemmap.c | 73 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/hugetlb_vmemmap.h | 12 +++++++++
>  2 files changed, 85 insertions(+)
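
To make the arithmetic behind the preallocation concrete: splitting one
PMD-mapped stretch of vmemmap consumes exactly one PTE page, so the
number of pages to preallocate is the number of PMDs covering the huge
page's struct pages.  A minimal sketch of
pgtable_pages_to_prealloc_per_hpage() along those lines (the name is
from the patch; this body is only an assumption based on the commit
message above):

	/*
	 * Assumption based on the commit message, not the patch body:
	 * one PTE page is needed for every PMD-mapped (2MB) stretch of
	 * vmemmap backing the huge page's struct pages.
	 */
	static unsigned int pgtable_pages_to_prealloc_per_hpage(struct hstate *h)
	{
		unsigned long vmemmap_size;

		/* Bytes of struct page backing one huge page. */
		vmemmap_size = pages_per_huge_page(h) * sizeof(struct page);

		/* One PTE page per PMD mapping that range. */
		return DIV_ROUND_UP(vmemmap_size, PMD_SIZE);
	}

With a 64-byte struct page that works out to 1 PTE page for a 2MB huge
page (32KB of vmemmap) and 8 for a 1GB huge page (16MB of vmemmap).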

Thanks for the cleanup.

Oscar made some other comments.  I only have one additional minor comment
below.

With those minor cleanups,
Acked-by: Mike Kravetz <mike.kravetz@...cle.com>

> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
...
> +int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> +{
> +	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> +
> +	/* Store preallocated pages on huge page lru list */

Let's expand the above comment to something like this:

	/*
	 * Use the huge page lru list to temporarily store the preallocated
	 * pages.  The preallocated pages are used and the list is emptied
	 * before the huge page is put into use.  When the huge page is put
	 * into use by prep_new_huge_page() the list will be reinitialized.
	 */

> +	INIT_LIST_HEAD(&page->lru);
> +
> +	while (nr--) {
> +		pte_t *pte_p;
> +
> +		pte_p = pte_alloc_one_kernel(&init_mm);
> +		if (!pte_p)
> +			goto out;
> +		list_add(&virt_to_page(pte_p)->lru, &page->lru);
> +	}
> +
> +	return 0;
> +out:
> +	vmemmap_pgtable_free(page);
> +	return -ENOMEM;
> +}
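
The error path above leans on vmemmap_pgtable_free() draining whatever
was already queued, and the normal path on something emptying the list
before the huge page is put into use.  Minimal sketches of both,
assuming they simply walk the lru list (vmemmap_pgtable_free() is named
in the patch; the withdraw helper and both bodies are illustrative
only):

	/* Error-path counterpart: drain and free the preallocated pages. */
	void vmemmap_pgtable_free(struct page *page)
	{
		struct page *pte_page, *tmp;

		list_for_each_entry_safe(pte_page, tmp, &page->lru, lru) {
			list_del(&pte_page->lru);
			pte_free_kernel(&init_mm, page_to_virt(pte_page));
		}
	}

	/*
	 * Hypothetical consumer, name not from the patch: take one
	 * preallocated PTE page off the list for a PMD split.
	 */
	static pte_t *vmemmap_pgtable_withdraw(struct page *page)
	{
		struct page *pte_page;

		pte_page = list_first_entry_or_null(&page->lru,
						    struct page, lru);
		if (!pte_page)
			return NULL;
		list_del(&pte_page->lru);
		return page_to_virt(pte_page);
	}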

-- 
Mike Kravetz
