Open Source and information security mailing list archives
Date: Tue, 28 Jul 2020 08:11:52 +0300
From: Mike Rapoport <rppt@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Andy Lutomirski <luto@...nel.org>, Benjamin Herrenschmidt <benh@...nel.crashing.org>, Borislav Petkov <bp@...en8.de>, Catalin Marinas <catalin.marinas@....com>, Christoph Hellwig <hch@....de>, Dave Hansen <dave.hansen@...ux.intel.com>, Ingo Molnar <mingo@...hat.com>, Marek Szyprowski <m.szyprowski@...sung.com>, Max Filippov <jcmvbkbc@...il.com>, Michael Ellerman <mpe@...erman.id.au>, Michal Simek <monstr@...str.eu>, Mike Rapoport <rppt@...ux.ibm.com>, Mike Rapoport <rppt@...nel.org>, Palmer Dabbelt <palmer@...belt.com>, Paul Mackerras <paulus@...ba.org>, Paul Walmsley <paul.walmsley@...ive.com>, Peter Zijlstra <peterz@...radead.org>, Russell King <linux@...linux.org.uk>, Stafford Horne <shorne@...il.com>, Thomas Gleixner <tglx@...utronix.de>, Will Deacon <will@...nel.org>, Yoshinori Sato <ysato@...rs.sourceforge.jp>, clang-built-linux@...glegroups.com, iommu@...ts.linux-foundation.org, linux-arm-kernel@...ts.infradead.org, linux-c6x-dev@...ux-c6x.org, linux-kernel@...r.kernel.org, linux-mips@...r.kernel.org, linux-mm@...ck.org, linux-riscv@...ts.infradead.org, linux-s390@...r.kernel.org, linux-sh@...r.kernel.org, linux-xtensa@...ux-xtensa.org, linuxppc-dev@...ts.ozlabs.org, openrisc@...ts.librecores.org, sparclinux@...r.kernel.org, uclinux-h8-devel@...ts.sourceforge.jp, x86@...nel.org
Subject: [PATCH 14/15] x86/numa: remove redundant iteration over memblock.reserved

From: Mike Rapoport <rppt@...ux.ibm.com>

The numa_clear_kernel_node_hotplug() function first traverses numa_meminfo
regions to set the node ID in memblock.reserved, and then traverses
memblock.reserved to update reserved_nodemask with the node IDs that were
set in the first loop.

Remove the redundant traversal over memblock.reserved and update
reserved_nodemask while iterating over numa_meminfo.
Signed-off-by: Mike Rapoport <rppt@...ux.ibm.com>
---
 arch/x86/mm/numa.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 8ee952038c80..4078abd33938 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -498,31 +498,25 @@ static void __init numa_clear_kernel_node_hotplug(void)
 	 * and use those ranges to set the nid in memblock.reserved.
 	 * This will split up the memblock regions along node
 	 * boundaries and will set the node IDs as well.
+	 *
+	 * The nid will also be set in reserved_nodemask which is later
+	 * used to clear MEMBLOCK_HOTPLUG flag.
+	 *
+	 * [ Note, when booting with mem=nn[kMG] or in a kdump kernel,
+	 *   numa_meminfo might not include all memblock.reserved
+	 *   memory ranges, because quirks such as trim_snb_memory()
+	 *   reserve specific pages for Sandy Bridge graphics.
+	 *   These ranges will remain with nid == MAX_NUMNODES. ]
 	 */
 	for (i = 0; i < numa_meminfo.nr_blks; i++) {
 		struct numa_memblk *mb = numa_meminfo.blk + i;
 		int ret;
 
 		ret = memblock_set_node(mb->start, mb->end - mb->start,
 					&memblock.reserved, mb->nid);
+		node_set(mb->nid, reserved_nodemask);
 		WARN_ON_ONCE(ret);
 	}
 
-	/*
-	 * Now go over all reserved memblock regions, to construct a
-	 * node mask of all kernel reserved memory areas.
-	 *
-	 * [ Note, when booting with mem=nn[kMG] or in a kdump kernel,
-	 *   numa_meminfo might not include all memblock.reserved
-	 *   memory ranges, because quirks such as trim_snb_memory()
-	 *   reserve specific pages for Sandy Bridge graphics. ]
-	 */
-	for_each_memblock(reserved, mb_region) {
-		int nid = memblock_get_region_node(mb_region);
-
-		if (nid != MAX_NUMNODES)
-			node_set(nid, reserved_nodemask);
-	}
-
 	/*
 	 * Finally, clear the MEMBLOCK_HOTPLUG flag for all memory
 	 * belonging to the reserved node mask.
-- 
2.26.2