Message-ID: <20171003130857.vohli6lnqj4tdmhl@dhcp22.suse.cz>
Date: Tue, 3 Oct 2017 15:08:57 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Pavel Tatashin <pasha.tatashin@...cle.com>
Cc: linux-kernel@...r.kernel.org, sparclinux@...r.kernel.org,
linux-mm@...ck.org, linuxppc-dev@...ts.ozlabs.org,
linux-s390@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
x86@...nel.org, kasan-dev@...glegroups.com, borntraeger@...ibm.com,
heiko.carstens@...ibm.com, davem@...emloft.net,
willy@...radead.org, ard.biesheuvel@...aro.org,
mark.rutland@....com, will.deacon@....com, catalin.marinas@....com,
sam@...nborg.org, mgorman@...hsingularity.net,
steven.sistare@...cle.com, daniel.m.jordan@...cle.com,
bob.picco@...cle.com
Subject: Re: [PATCH v9 06/12] mm: zero struct pages during initialization
On Wed 20-09-17 16:17:08, Pavel Tatashin wrote:
> Add struct page zeroing as a part of initialization of other fields in
> __init_single_page().
>
> Single-thread performance, collected on an Intel(R) Xeon(R) CPU E7-8895
> v3 @ 2.60GHz with 1T of memory (268400646 pages in 8 nodes):
>
>                      BASE            FIX
>   sparse_init      11.244671836s    0.007199623s
>   zone_sizes_init   4.879775891s    8.355182299s
>                    ------------------------------
>   Total            16.124447727s    8.362381922s
Hmm, this is confusing. This assumes that sparse_init doesn't zero pages
anymore, right? So these numbers depend on the last patch in the series?
> sparse_init is where memory for struct pages is zeroed, and the zeroing
> part is moved later in this patch into __init_single_page(), which is
> called from zone_sizes_init().
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@...cle.com>
> Reviewed-by: Steven Sistare <steven.sistare@...cle.com>
> Reviewed-by: Daniel Jordan <daniel.m.jordan@...cle.com>
> Reviewed-by: Bob Picco <bob.picco@...cle.com>
> Acked-by: Michal Hocko <mhocko@...e.com>
> ---
> include/linux/mm.h | 9 +++++++++
> mm/page_alloc.c | 1 +
> 2 files changed, 10 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index f8c10d336e42..50b74d628243 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -94,6 +94,15 @@ extern int mmap_rnd_compat_bits __read_mostly;
> #define mm_forbids_zeropage(X) (0)
> #endif
>
> +/*
> + * On some architectures it is expensive to call memset() for small sizes.
> + * Those architectures should provide their own implementation of "struct page"
> + * zeroing by defining this macro in <asm/pgtable.h>.
> + */
> +#ifndef mm_zero_struct_page
> +#define mm_zero_struct_page(pp) ((void)memset((pp), 0, sizeof(struct page)))
> +#endif
> +
> /*
> * Default maximum number of active map areas, this limits the number of vmas
> * per mm struct. Users can overwrite this number by sysctl but there is a
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a8dbd405ed94..4b630ee91430 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1170,6 +1170,7 @@ static void free_one_page(struct zone *zone,
> static void __meminit __init_single_page(struct page *page, unsigned long pfn,
> unsigned long zone, int nid)
> {
> + mm_zero_struct_page(page);
> set_page_links(page, zone, nid, pfn);
> init_page_count(page);
> page_mapcount_reset(page);
> --
> 2.14.1
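
For illustration, an architecture for which small memset() calls are
expensive could override the hook from its <asm/pgtable.h> along these
lines. This is only a sketch: the helper name __arch_zero_struct_page
and the fixed 64-byte struct page assumption are mine, not from this
series.

	/*
	 * Illustrative override only -- assumes struct page is exactly
	 * 64 bytes (eight 64-bit words) on this configuration.
	 */
	static inline void __arch_zero_struct_page(struct page *page)
	{
		unsigned long *p = (unsigned long *)page;

		BUILD_BUG_ON(sizeof(struct page) != 64);

		/* Unrolled word stores instead of a small memset(). */
		p[0] = 0; p[1] = 0; p[2] = 0; p[3] = 0;
		p[4] = 0; p[5] = 0; p[6] = 0; p[7] = 0;
	}
	#define mm_zero_struct_page(pp)	__arch_zero_struct_page(pp)

Since the generic fallback in include/linux/mm.h is guarded by
#ifndef mm_zero_struct_page, defining the macro in the arch header is
all that is needed to take over the zeroing.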
--
Michal Hocko
SUSE Labs