Message-ID: <e3dd8031-2091-4d65-7c76-0ec7283f92f5@redhat.com>
Date: Mon, 17 Jun 2019 09:26:07 +0200
From: David Hildenbrand <david@...hat.com>
To: Alastair D'Silva <alastair@....ibm.com>, alastair@...ilva.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Oscar Salvador <osalvador@...e.com>,
Michal Hocko <mhocko@...e.com>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Wei Yang <richard.weiyang@...il.com>,
Juergen Gross <jgross@...e.com>, Qian Cai <cai@....pw>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Jiri Kosina <jkosina@...e.cz>,
Peter Zijlstra <peterz@...radead.org>,
Mukesh Ojha <mojha@...eaurora.org>,
Arun KS <arunks@...eaurora.org>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Baoquan He <bhe@...hat.com>,
Logan Gunthorpe <logang@...tatee.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/5] mm: don't hide potentially null memmap pointer in
sparse_remove_one_section
On 17.06.19 06:36, Alastair D'Silva wrote:
> From: Alastair D'Silva <alastair@...ilva.org>
>
> Adding the offset to memmap before passing it in to
> clear_hwpoisoned_pages hides a potentially null memmap from the null
> check inside clear_hwpoisoned_pages.
>
> This patch passes the offset to clear_hwpoisoned_pages instead,
> allowing the null check on memmap to be performed correctly.
>
> Signed-off-by: Alastair D'Silva <alastair@...ilva.org>
> ---
> mm/sparse.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 104a79fedd00..66a99da9b11b 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -746,12 +746,14 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
> kfree(usemap);
> __kfree_section_memmap(memmap, altmap);
> }
> +
> return ret;
> }
>
> #ifdef CONFIG_MEMORY_HOTREMOVE
> #ifdef CONFIG_MEMORY_FAILURE
> -static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
> +static void clear_hwpoisoned_pages(struct page *memmap,
> + unsigned long map_offset, int nr_pages)
> {
> int i;
>
> @@ -767,7 +769,7 @@ static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
> if (atomic_long_read(&num_poisoned_pages) == 0)
> return;
>
> - for (i = 0; i < nr_pages; i++) {
> + for (i = map_offset; i < nr_pages; i++) {
> if (PageHWPoison(&memmap[i])) {
> atomic_long_sub(1, &num_poisoned_pages);
> ClearPageHWPoison(&memmap[i]);
> @@ -775,7 +777,8 @@ static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
> }
> }
> #else
> -static inline void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
> +static inline void clear_hwpoisoned_pages(struct page *memmap,
> + unsigned long map_offset, int nr_pages)

I somewhat dislike that map_offset modifies the meaning of nr_pages
internally (nr_pages becomes an end index rather than the number of
pages to clear). I would prefer decoupling the two and passing the
actual number of pages to clear instead:

	clear_hwpoisoned_pages(memmap, map_offset,
			       PAGES_PER_SECTION - map_offset);
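
Just to illustrate, a minimal sketch of the decoupled variant (untested,
and assuming the rest of your patch stays as-is) could be:

static void clear_hwpoisoned_pages(struct page *memmap,
		unsigned long map_offset, int nr_pages)
{
	int i;

	/* memmap may be NULL, which is exactly what your patch wants to
	 * catch here instead of at the call site */
	if (!memmap)
		return;

	/* nothing to do if no pages are currently marked hwpoisoned */
	if (atomic_long_read(&num_poisoned_pages) == 0)
		return;

	/* clear nr_pages pages, starting at map_offset into the section */
	for (i = map_offset; i < map_offset + nr_pages; i++) {
		if (PageHWPoison(&memmap[i])) {
			atomic_long_sub(1, &num_poisoned_pages);
			ClearPageHWPoison(&memmap[i]);
		}
	}
}

That way nr_pages keeps its obvious meaning both at the call site and
inside the function.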
> {
> }
> #endif
> @@ -822,8 +825,7 @@ void sparse_remove_one_section(struct zone *zone, struct mem_section *ms,
> ms->pageblock_flags = NULL;
> }
>
> - clear_hwpoisoned_pages(memmap + map_offset,
> - PAGES_PER_SECTION - map_offset);
> + clear_hwpoisoned_pages(memmap, map_offset, PAGES_PER_SECTION);
> free_section_usemap(memmap, usemap, altmap);
> }
> #endif /* CONFIG_MEMORY_HOTREMOVE */
>
--
Thanks,
David / dhildenb