Message-ID: <60998a62-eebc-7c95-2d48-943fe62d09b9@oracle.com>
Date: Tue, 3 May 2022 14:49:19 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Muchun Song <songmuchun@...edance.com>, corbet@....net,
akpm@...ux-foundation.org, mcgrof@...nel.org,
keescook@...omium.org, yzaikin@...gle.com, osalvador@...e.de,
david@...hat.com, masahiroy@...nel.org
Cc: linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, duanxiongchun@...edance.com, smuchun@...il.com
Subject: Re: [PATCH v9 1/4] mm: hugetlb_vmemmap: disable
hugetlb_optimize_vmemmap when struct page crosses page boundaries
On 4/29/22 05:18, Muchun Song wrote:
> If the size of "struct page" is not a power of two and the feature that
> minimizes the overhead of struct page associated with each HugeTLB page
> is enabled, then the vmemmap pages of HugeTLB will be corrupted after
> remapping (a panic could happen in theory). But this can only occur with
> !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a conventional
> configuration nowadays. So it is not a real-world issue, just the result
> of a code review.
>
> But we cannot prevent anyone from building that combination of config
> options, so hugetlb_optimize_vmemmap should be disabled in this case to
> fix the issue.
>
> Signed-off-by: Muchun Song <songmuchun@...edance.com>
> ---
> mm/hugetlb_vmemmap.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
Thanks,
I think the runtime power_of_2 checks here and in the later patches are
much simpler than trying to do config/build-time checks.
Reviewed-by: Mike Kravetz <mike.kravetz@...cle.com>
--
Mike Kravetz
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 29554c6ef2ae..6254bb2d4ae5 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -28,12 +28,6 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
>
> static int __init hugetlb_vmemmap_early_param(char *buf)
> {
> - /* We cannot optimize if a "struct page" crosses page boundaries. */
> - if (!is_power_of_2(sizeof(struct page))) {
> - pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
> - return 0;
> - }
> -
> if (!buf)
> return -EINVAL;
>
> @@ -119,6 +113,12 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
> if (!hugetlb_optimize_vmemmap_enabled())
> return;
>
> + if (!is_power_of_2(sizeof(struct page))) {
> + pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
> + static_branch_disable(&hugetlb_optimize_vmemmap_key);
> + return;
> + }
> +
> vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
> /*
> * The head page is not to be freed to buddy allocator, the other tail