Message-Id: <1551011649-30103-1-git-send-email-kernelfans@gmail.com>
Date: Sun, 24 Feb 2019 20:34:03 +0800
From: Pingfan Liu <kernelfans@...il.com>
To: x86@...nel.org, linux-mm@...ck.org
Cc: Pingfan Liu <kernelfans@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Vlastimil Babka <vbabka@...e.cz>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andy Lutomirski <luto@...nel.org>,
Andi Kleen <ak@...ux.intel.com>,
Petr Tesarik <ptesarik@...e.cz>,
Michal Hocko <mhocko@...e.com>,
Stephen Rothwell <sfr@...b.auug.org.au>,
Jonathan Corbet <corbet@....net>,
Nicholas Piggin <npiggin@...il.com>,
Daniel Vacek <neelx@...hat.com>, linux-kernel@...r.kernel.org
Subject: [PATCH 0/6] make memblock allocator utilize the node's fallback info
There are NUMA machines with memory-less nodes. At present, the page allocator builds
the full fallback info via build_zonelists(), but the memblock allocator does not
utilize this info: for a memory-less node, memblock simply falls back to node 0
instead of the nearest node with memory. Unfortunately, the percpu section is
allocated by memblock, and it is accessed frequently after bootup.

This series aims to improve the performance of the percpu section on memory-less
nodes by feeding the node's fallback info to the memblock allocator on x86, as is
already done for the page allocator. Other archs would require independent effort
to set up the node-to-cpumask map early enough.
CC: Thomas Gleixner <tglx@...utronix.de>
CC: Ingo Molnar <mingo@...hat.com>
CC: Borislav Petkov <bp@...en8.de>
CC: "H. Peter Anvin" <hpa@...or.com>
CC: Dave Hansen <dave.hansen@...ux.intel.com>
CC: Vlastimil Babka <vbabka@...e.cz>
CC: Mike Rapoport <rppt@...ux.vnet.ibm.com>
CC: Andrew Morton <akpm@...ux-foundation.org>
CC: Mel Gorman <mgorman@...e.de>
CC: Joonsoo Kim <iamjoonsoo.kim@....com>
CC: Andy Lutomirski <luto@...nel.org>
CC: Andi Kleen <ak@...ux.intel.com>
CC: Petr Tesarik <ptesarik@...e.cz>
CC: Michal Hocko <mhocko@...e.com>
CC: Stephen Rothwell <sfr@...b.auug.org.au>
CC: Jonathan Corbet <corbet@....net>
CC: Nicholas Piggin <npiggin@...il.com>
CC: Daniel Vacek <neelx@...hat.com>
CC: linux-kernel@...r.kernel.org
Pingfan Liu (6):
mm/numa: extract the code of building node fall back list
mm/memblock: make full utilization of numa info
x86/numa: define numa_init_array() conditional on CONFIG_NUMA
x86/numa: concentrate the code of setting cpu to node map
x86/numa: push forward the setup of node to cpumask map
x86/numa: build node fallback info after setting up node to cpumask
map
arch/x86/include/asm/topology.h | 4 ---
arch/x86/kernel/setup.c | 2 ++
arch/x86/kernel/setup_percpu.c | 3 --
arch/x86/mm/numa.c | 40 +++++++++++-------------
include/linux/memblock.h | 3 ++
mm/memblock.c | 68 ++++++++++++++++++++++++++++++++++++++---
mm/page_alloc.c | 48 +++++++++++++++++------------
7 files changed, 114 insertions(+), 54 deletions(-)
--
2.7.4