Message-ID: <20180627233936.GE8970@localhost.localdomain>
Date: Thu, 28 Jun 2018 07:39:36 +0800
From: Baoquan He <bhe@...hat.com>
To: Pavel Tatashin <pasha.tatashin@...cle.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
dave.hansen@...el.com, pagupta@...hat.com,
Linux Memory Management List <linux-mm@...ck.org>,
kirill.shutemov@...ux.intel.com
Subject: Re: [PATCH v5 0/4] mm/sparse: Optimize memmap allocation during
sparse_init()
Hi Pavel,
On 06/27/18 at 01:47pm, Pavel Tatashin wrote:
> This work made me think: why do we even have
> CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER? It really should be the
> default behavior for all systems, yet it is enabled only on x86_64.
> We could clean up the already messy sparse.c if we removed this config
> and enabled its path on all arches. We would not break anything,
> because if we cannot allocate one large mem_map we still fall back to
> allocating a page at a time, the same as what happens when
> CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=n.
Thanks for your idea.

It seems the common arches, such as x86, arm/arm64, powerpc, s390 and
mips, all have ARCH_SPARSEMEM_ENABLE, while the others don't. For them,
removing CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER makes sense.

I will make a clean-up patch to do this, but I can only test it on x86.
If the test robot or anyone else reports an issue with the clean-up
patch, Andrew can pick up only the current 4 patches after updating,
and we can continue discussing the clean-up patch. From the current
code, it should be OK for all arches.
Thanks
Baoquan