Message-Id: <20191121151811.49742-1-fly@kernel.page>
Date: Thu, 21 Nov 2019 23:17:52 +0800
From: Pengfei Li <fly@...nel.page>
To: akpm@...ux-foundation.org
Cc: mgorman@...hsingularity.net, mhocko@...nel.org, vbabka@...e.cz,
cl@...ux.com, iamjoonsoo.kim@....com, guro@...com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Pengfei Li <fly@...nel.page>
Subject: [RFC v1 00/19] Modify zonelist to nodelist v1
Motivation
----------
Currently, if we want to iterate over all the nodes, we have to walk
every zone in the zonelist, even when only the node itself is of
interest. To reduce the number of loop iterations needed to traverse
the nodes, this series changes the zonelist into a nodelist.
Two new iteration macros are introduced (a usage sketch follows the
list):
1) for_each_node_nlist
2) for_each_node_nlist_nodemask
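As a rough illustration of how callers change, the sketch below walks
the nodes before and after this series. The argument list of the new
macro is modeled on the existing for_each_zone_zonelist_nodemask()
convention and is only indicative, not necessarily the exact final
signature:

	struct zoneref *z;
	struct zone *zone;
	int nid;

	/*
	 * Before: visiting each node means walking every zoneref in the
	 * zonelist, i.e. up to N * M iterations for N nodes with M zones
	 * each, even though only the node id is used.
	 */
	for_each_zone_zonelist_nodemask(zone, z, zonelist, highest_zoneidx,
					nodemask) {
		nid = zone_to_nid(zone);
		/* ... per-node work keyed by nid ... */
	}

	/* After: the nodelist lets us step through the N nodes directly. */
	for_each_node_nlist_nodemask(nid, nodelist, nodemask) {
		/* ... per-node work keyed by nid ... */
	}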
Benefit
-------
1. For a NUMA system with N nodes, where each node has M zones, the
   number of loop iterations needed to traverse the nodes drops from
   N*M to N (for example, with 4 nodes and 3 zones per node, 12
   iterations become 4).
2. The size of pg_data_t is much reduced (a rough sketch of why
   follows).
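In current mainline, every pg_data_t embeds full zonelists, each of
which holds one zoneref per (node, zone) pair, so the structure grows
with MAX_NUMNODES * MAX_NR_ZONES. A nodelist only needs one entry per
node. The mainline fields below are simplified from
include/linux/mmzone.h; the nodelist layout is just a sketch of the
idea, not the exact structure used in the patches:

	/* Mainline (simplified): zonelists embedded in each pg_data_t. */
	struct zonelist {
		/* one zoneref per (node, zone) pair, plus a terminator */
		struct zoneref _zonerefs[MAX_ZONES_PER_ZONELIST + 1];
	};

	typedef struct pglist_data {
		struct zone node_zones[MAX_NR_ZONES];
		struct zonelist node_zonelists[MAX_ZONELISTS];
		/* ... */
	} pg_data_t;

	/* With this series (sketch only): one entry per node is enough. */
	struct nodelist {
		int nodes[MAX_NUMNODES + 1];	/* hypothetical layout */
	};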
Test Result
-----------
So far I have only run a simple page allocation benchmark on my
laptop; the results show that performance on a single-node system is
almost unaffected.
Others
------
Next I will do more performance testing and add the results to the
Test Result section. Since I do not currently have a multi-node NUMA
system, I would be grateful if anyone would like to test this series
of patches.
I am still not sure this series is heading in the right direction, so
I am sending it as an RFC.
Any comments are highly appreciated.
Pengfei Li (19):
mm, mmzone: modify zonelist to nodelist
mm, hugetlb: use for_each_node in dequeue_huge_page_nodemask()
mm, oom_kill: use for_each_node in constrained_alloc()
mm, slub: use for_each_node in get_any_partial()
mm, slab: use for_each_node in fallback_alloc()
mm, vmscan: use for_each_node in do_try_to_free_pages()
mm, vmscan: use first_node in throttle_direct_reclaim()
mm, vmscan: pass pgdat to wakeup_kswapd()
mm, vmscan: use for_each_node in shrink_zones()
mm, page_alloc: use for_each_node in wake_all_kswapds()
mm, mempolicy: use first_node in mempolicy_slab_node()
mm, mempolicy: use first_node in mpol_misplaced()
mm, page_alloc: use first_node in local_memory_node()
mm, compaction: rename compaction_zonelist_suitable
mm, mm_init: rename mminit_verify_zonelist
mm, page_alloc: cleanup build_zonelists
mm, memory_hotplug: cleanup online_pages()
kernel, sysctl: cleanup numa_zonelist_order
mm, mmzone: cleanup zonelist in comments
arch/hexagon/mm/init.c | 2 +-
arch/ia64/include/asm/topology.h | 2 +-
arch/x86/mm/numa.c | 2 +-
drivers/tty/sysrq.c | 2 +-
include/linux/compaction.h | 2 +-
include/linux/gfp.h | 18 +-
include/linux/mmzone.h | 249 +++++++++++++------------
include/linux/oom.h | 4 +-
include/linux/swap.h | 2 +-
include/trace/events/oom.h | 9 +-
init/main.c | 2 +-
kernel/cgroup/cpuset.c | 4 +-
kernel/sysctl.c | 8 +-
mm/compaction.c | 20 +-
mm/hugetlb.c | 21 +--
mm/internal.h | 13 +-
mm/memcontrol.c | 2 +-
mm/memory_hotplug.c | 24 +--
mm/mempolicy.c | 26 ++-
mm/mm_init.c | 74 +++++---
mm/mmzone.c | 30 ++-
mm/oom_kill.c | 16 +-
mm/page_alloc.c | 301 ++++++++++++++++---------------
mm/slab.c | 13 +-
mm/slub.c | 14 +-
mm/vmscan.c | 149 ++++++++-------
26 files changed, 518 insertions(+), 491 deletions(-)
--
2.23.0