Message-ID: <ZVyu1gAinLEtg5RR@li-2b55cdcc-350b-11b2-a85c-a78bff51fc11.ibm.com>
Date: Tue, 21 Nov 2023 14:21:26 +0100
From: Sumanth Korikkar <sumanthk@...ux.ibm.com>
To: David Hildenbrand <david@...hat.com>
Cc: Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
linux-mm <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Oscar Salvador <osalvador@...e.de>,
Michal Hocko <mhocko@...e.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Anshuman Khandual <anshuman.khandual@....com>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>,
linux-s390 <linux-s390@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/8] implement "memmap on memory" feature on s390
On Tue, Nov 21, 2023 at 02:13:22PM +0100, Sumanth Korikkar wrote:
> Approach 2:
> ===========
> Shouldn't the kasan zero shadow mapping be set up first, before
> accessing/initializing the memmap via page_init_poison()? If that is
> true, then it is a problem for all architectures and could be
> fixed like:
>
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 7a5fc89a8652..eb3975740537 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1093,6 +1093,7 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
>  	if (ret)
>  		return ret;
>  
> +	page_init_poison(pfn_to_page(pfn), sizeof(struct page) * nr_pages);
>  	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE);
>  
>  	for (i = 0; i < nr_pages; i++)
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 77d91e565045..4ddf53f52075 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -906,8 +906,11 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn,
>  	/*
>  	 * Poison uninitialized struct pages in order to catch invalid flags
>  	 * combinations.
> +	 * For altmap, do this later when onlining the memory, as it might
> +	 * not be accessible at this point.
>  	 */
> -	page_init_poison(memmap, sizeof(struct page) * nr_pages);
> +	if (!altmap)
> +		page_init_poison(memmap, sizeof(struct page) * nr_pages);
>  
>  	ms = __nr_to_section(section_nr);
>  	set_section_nid(section_nr, nid);
>
>
>
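> With this change, the resulting order in mhp_init_memmap_on_memory()
> would roughly be as follows (just a sketch, assuming the surrounding
> code matches current upstream; error paths trimmed). The existing
> "if (ret)" check in the hunk above is the return of
> kasan_add_zero_shadow():
>
> 	ret = kasan_add_zero_shadow(__va(PFN_PHYS(pfn)), PFN_PHYS(nr_pages));
> 	if (ret)
> 		return ret;
>
> 	/* Shadow is populated now, so writing/poisoning the memmap is safe. */
> 	page_init_poison(pfn_to_page(pfn), sizeof(struct page) * nr_pages);
> 	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE);
>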
> Also, if this approach is taken, should page_init_poison() be performed
> with cond_resched() as mentioned in commit d33695b16a9f
> ("mm/memory_hotplug: poison memmap in remove_pfn_range_from_zone()") ?
Sorry, wrong commit id. Should page_init_poison() be performed with
cond_resched(), as mentioned in commit b7e3debdd040 ("mm/memory_hotplug.c:
fix false softlockup during pfn range removal")?
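
If so, a section-wise loop in the spirit of that commit might look like
this (untested sketch; cur_pfn is just a hypothetical local here, and it
assumes nr_pages is a multiple of PAGES_PER_SECTION, which holds for
memory blocks):

	unsigned long cur_pfn;

	for (cur_pfn = pfn; cur_pfn < pfn + nr_pages;
	     cur_pfn += PAGES_PER_SECTION) {
		cond_resched();
		/* Poison one section worth of struct pages at a time. */
		page_init_poison(pfn_to_page(cur_pfn),
				 sizeof(struct page) * PAGES_PER_SECTION);
	}
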
Thanks
>
> Opinions?
>
> Thank you