Message-ID: <eb5dbddd-afb6-2473-fa76-a4fabf62fb89@redhat.com>
Date: Wed, 30 Oct 2019 14:31:48 +0100
From: David Hildenbrand <david@...hat.com>
To: Vincent Whitchurch <vincent.whitchurch@...s.com>,
akpm@...ux-foundation.org
Cc: osalvador@...e.de, mhocko@...e.com, pasha.tatashin@...cle.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Vincent Whitchurch <rabinv@...s.com>
Subject: Re: [PATCH] mm/sparse: Consistently do not zero memmap
On 30.10.19 14:11, Vincent Whitchurch wrote:
> sparsemem without VMEMMAP has two allocation paths to allocate the
> memory needed for its memmap (done in sparse_mem_map_populate()).
>
> In one allocation path (sparse_buffer_alloc() succeeds), the memory is
> not zeroed (since it was previously allocated with
> memblock_alloc_try_nid_raw()).
>
> In the other allocation path (sparse_buffer_alloc() fails and
> sparse_mem_map_populate() falls back to memblock_alloc_try_nid()), the
> memory is zeroed.
>
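(For context, the relevant flow is roughly the following -- a simplified
sketch based on the patch and its diff context, with section_map_size()
and altmap details trimmed:)

struct page __init *__populate_section_memmap(unsigned long pfn,
		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
{
	unsigned long size = section_map_size();
	struct page *map = sparse_buffer_alloc(size);
	phys_addr_t addr = __pa(MAX_DMA_ADDRESS);

	/* Path 1: sparsemap buffer, pre-allocated via
	 * memblock_alloc_try_nid_raw() -> memory is *not* zeroed */
	if (map)
		return map;

	/* Path 2: fallback allocation -> memory is zeroed */
	map = memblock_alloc_try_nid(size, PAGE_SIZE, addr,
				     MEMBLOCK_ALLOC_ACCESSIBLE, nid);
	...
}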
> AFAICS this difference is not intentional. If the code is
> supposed to work with uninitialized memory (__init_single_page() takes
> care of zeroing the struct pages which are actually used), we should
> consistently not zero the memory, to avoid masking bugs.
I agree.
Acked-by: David Hildenbrand <david@...hat.com>
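FWIW, IIUC the only difference between the two memblock variants is the
final memset() in the non-raw one, roughly (simplified from
mm/memblock.c, from memory):

void * __init memblock_alloc_try_nid(phys_addr_t size, phys_addr_t align,
				     phys_addr_t min_addr,
				     phys_addr_t max_addr, int nid)
{
	void *ptr = memblock_alloc_internal(size, align, min_addr,
					    max_addr, nid);
	if (ptr)
		memset(ptr, 0, size);	/* the _raw variant skips exactly this */

	return ptr;
}

So switching the fallback to the _raw variant really only drops the
redundant zeroing.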
>
> (I noticed this because on my ARM64 platform, with 1 GiB of memory the
> first [and only] section is allocated from the zeroing path while with
> 2 GiB of memory the first 1 GiB section is allocated from the
> non-zeroing path.)
>
> Signed-off-by: Vincent Whitchurch <vincent.whitchurch@...s.com>
> ---
> mm/sparse.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index f6891c1992b1..01e467adc219 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -458,7 +458,7 @@ struct page __init *__populate_section_memmap(unsigned long pfn,
> if (map)
> return map;
>
> - map = memblock_alloc_try_nid(size,
> + map = memblock_alloc_try_nid_raw(size,
> PAGE_SIZE, addr,
> MEMBLOCK_ALLOC_ACCESSIBLE, nid);
> if (!map)
>
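And to spell out why the pre-zeroing is unnecessary: every struct page
that is actually used goes through __init_single_page(), which starts by
zeroing the struct page itself before initializing its fields, roughly:

static void __meminit __init_single_page(struct page *page, unsigned long pfn,
				unsigned long zone, int nid)
{
	mm_zero_struct_page(page);	/* zeroes this struct page */
	set_page_links(page, zone, nid, pfn);
	init_page_count(page);
	page_mapcount_reset(page);
	...
}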
--
Thanks,
David / dhildenb