Message-Id: <20170914223517.8242-7-pasha.tatashin@oracle.com>
Date: Thu, 14 Sep 2017 18:35:12 -0400
From: Pavel Tatashin <pasha.tatashin@...cle.com>
To: linux-kernel@...r.kernel.org, sparclinux@...r.kernel.org,
	linux-mm@...ck.org, linuxppc-dev@...ts.ozlabs.org,
	linux-s390@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
	x86@...nel.org, kasan-dev@...glegroups.com, borntraeger@...ibm.com,
	heiko.carstens@...ibm.com, davem@...emloft.net, willy@...radead.org,
	mhocko@...nel.org, ard.biesheuvel@...aro.org, will.deacon@....com,
	catalin.marinas@....com, sam@...nborg.org, mgorman@...hsingularity.net,
	Steven.Sistare@...cle.com, daniel.m.jordan@...cle.com, bob.picco@...cle.com
Subject: [PATCH v8 06/11] mm: zero struct pages during initialization

Add struct page zeroing as a part of initialization of other fields in
__init_single_page().

The following single-thread performance data was collected on an
Intel(R) Xeon(R) CPU E7-8895 v3 @ 2.60GHz with 1T of memory
(268400646 pages in 8 nodes):

                         BASE            FIX
sparse_init      11.244671836s   0.007199623s
zone_sizes_init   4.879775891s   8.355182299s
                   --------------------------
Total            16.124447727s   8.362381922s

sparse_init is where memory for struct pages is zeroed; this patch moves
the zeroing into __init_single_page(), which is called from
zone_sizes_init().

Signed-off-by: Pavel Tatashin <pasha.tatashin@...cle.com>
Reviewed-by: Steven Sistare <steven.sistare@...cle.com>
Reviewed-by: Daniel Jordan <daniel.m.jordan@...cle.com>
Reviewed-by: Bob Picco <bob.picco@...cle.com>
Acked-by: Michal Hocko <mhocko@...e.com>
---
 include/linux/mm.h | 9 +++++++++
 mm/page_alloc.c    | 1 +
 2 files changed, 10 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f8c10d336e42..50b74d628243 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -94,6 +94,15 @@ extern int mmap_rnd_compat_bits __read_mostly;
 #define mm_forbids_zeropage(X)	(0)
 #endif
 
+/*
+ * On some architectures it is expensive to call memset() for small sizes.
+ * Those architectures should provide their own implementation of "struct page"
+ * zeroing by defining this macro in <asm/pgtable.h>.
+ */
+#ifndef mm_zero_struct_page
+#define mm_zero_struct_page(pp)  ((void)memset((pp), 0, sizeof(struct page)))
+#endif
+
 /*
  * Default maximum number of active map areas, this limits the number of vmas
  * per mm struct. Users can overwrite this number by sysctl but there is a
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a8dbd405ed94..4b630ee91430 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1170,6 +1170,7 @@ static void free_one_page(struct zone *zone,
 static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 				unsigned long zone, int nid)
 {
+	mm_zero_struct_page(page);
 	set_page_links(page, zone, nid, pfn);
 	init_page_count(page);
 	page_mapcount_reset(page);
-- 
2.14.1
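
A hypothetical illustration of the override hook above (not part of this
patch): an architecture on which small memset() calls are expensive could
define mm_zero_struct_page in its <asm/pgtable.h> roughly as sketched
below. The 64-byte struct page size is an assumption that depends on the
kernel configuration, hence the build-time check; the word-sized stores
are just one way an arch might open-code the zeroing.

#define mm_zero_struct_page(pp) do {					\
	unsigned long *_pp = (unsigned long *)(pp);			\
									\
	/* Sketch assumes a config where struct page is 64 bytes. */	\
	BUILD_BUG_ON(sizeof(struct page) != 64);			\
	_pp[0] = 0; _pp[1] = 0; _pp[2] = 0; _pp[3] = 0;			\
	_pp[4] = 0; _pp[5] = 0; _pp[6] = 0; _pp[7] = 0;			\
} while (0)

Because the generic fallback in <linux/mm.h> is guarded by #ifndef, an
arch definition that is visible first takes precedence automatically, and
__init_single_page() picks it up with no further changes.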