Message-ID: <dc77d433-b5f0-0f4a-a4e9-f888b079618a@redhat.com>
Date: Fri, 20 Nov 2020 10:16:43 +0100
From: David Hildenbrand <david@...hat.com>
To: Muchun Song <songmuchun@...edance.com>, corbet@....net,
mike.kravetz@...cle.com, tglx@...utronix.de, mingo@...hat.com,
bp@...en8.de, x86@...nel.org, hpa@...or.com,
dave.hansen@...ux.intel.com, luto@...nel.org, peterz@...radead.org,
viro@...iv.linux.org.uk, akpm@...ux-foundation.org,
paulmck@...nel.org, mchehab+huawei@...nel.org,
pawan.kumar.gupta@...ux.intel.com, rdunlap@...radead.org,
oneukum@...e.com, anshuman.khandual@....com, jroedel@...e.de,
almasrymina@...gle.com, rientjes@...gle.com, willy@...radead.org,
osalvador@...e.de, mhocko@...e.com, song.bao.hua@...ilicon.com
Cc: duanxiongchun@...edance.com, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v5 21/21] mm/hugetlb: Disable freeing vmemmap if struct
page size is not power of two
On 20.11.20 07:43, Muchun Song wrote:
> We can only free the unused vmemmap pages to the buddy system when
> the size of struct page is a power of two.
>
> Signed-off-by: Muchun Song <songmuchun@...edance.com>
> ---
> mm/hugetlb_vmemmap.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index c3b3fc041903..7bb749a3eea2 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -671,7 +671,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
> unsigned int order = huge_page_order(h);
> unsigned int vmemmap_pages;
>
> - if (hugetlb_free_vmemmap_disabled) {
> + if (hugetlb_free_vmemmap_disabled ||
> + !is_power_of_2(sizeof(struct page))) {
> pr_info("disable free vmemmap pages for %s\n", h->name);
> return;
> }
>
This patch should be merged into the original patch that introduced
vmemmap freeing.
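
For reference, a rough userspace sketch of the arithmetic behind the check
(the 64-byte sizeof(struct page), 4 KiB base page and 2 MiB huge page are
illustrative assumptions, and is_power_of_2() is reimplemented here rather
than taken from the kernel):

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	#define PAGE_SIZE        4096UL
	#define PAGES_PER_HPAGE  512UL   /* 2 MiB / 4 KiB */

	static bool is_power_of_2(size_t n)
	{
		return n != 0 && (n & (n - 1)) == 0;
	}

	int main(void)
	{
		size_t page_struct_size = 64; /* assumed sizeof(struct page) */

		if (!is_power_of_2(page_struct_size)) {
			/*
			 * struct pages would straddle vmemmap page
			 * boundaries, so the tail vmemmap pages cannot be
			 * remapped and freed cleanly.
			 */
			puts("vmemmap freeing must stay disabled");
			return 0;
		}

		/* vmemmap pages describing one 2 MiB huge page */
		size_t vmemmap_pages =
			PAGES_PER_HPAGE * page_struct_size / PAGE_SIZE;

		/*
		 * Up to vmemmap_pages - 1 of these can be remapped to a
		 * shared page and returned to the buddy allocator; the
		 * exact number reserved depends on the implementation.
		 */
		printf("%zu vmemmap pages per huge page, up to %zu freeable\n",
		       vmemmap_pages, vmemmap_pages - 1);
		return 0;
	}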
--
Thanks,
David / dhildenb