Date:	Tue, 5 May 2015 15:31:02 +0100
From:	Mel Gorman <mgorman@...e.de>
To:	Waiman Long <waiman.long@...com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Nathan Zimmer <nzimmer@....com>,
	Dave Hansen <dave.hansen@...el.com>,
	Scott Norton <scott.norton@...com>,
	Daniel J Blueman <daniel@...ascale.com>,
	Linux-MM <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/13] Parallel struct page initialisation v4

On Tue, May 05, 2015 at 09:55:52AM -0400, Waiman Long wrote:
> On 05/05/2015 06:45 AM, Mel Gorman wrote:
> >On Mon, May 04, 2015 at 02:30:46PM -0700, Andrew Morton wrote:
> >>>Before the patch, the boot time from elilo prompt to ssh login was 694s.
> >>>After the patch, the boot up time was 346s, a saving of 348s (about 50%).
> >>Having to guesstimate the amount of memory which is needed for a
> >>successful boot will be painful.  Any number we choose will be wrong
> >>99% of the time.
> >>
> >>If the kswapd threads have started, all we need to do is to wait: take
> >>a little nap in the allocator's page==NULL slowpath.
> >>
> >>I'm not seeing any reason why we can't start kswapd much earlier -
> >>right at the start of do_basic_setup()?
> >It doesn't even have to be kswapd, it just should be a thread pinned to
> >a node. The difficulty is that dealing with the system hashes means the
> >initialisation has to happen before vfs_caches_init_early() when there is
> >no scheduler. Those allocations could be delayed further but then there is
> >the possibility that the allocations would not be contiguous and they'd
> >have to rely on CMA to make the attempt. That potentially alters the
> >performance of the large system hashes at run time.
> >
> >We can scale the amount initialised with memory sizes relatively easily.
> >This boots on the same 1TB machine I was testing before but that is
> >hardly a surprise.
> >
> >---8<---
> >mm: meminit: Take into account that large system caches scale linearly with memory
> >
> >Waiman Long reported a 24TB machine triggered an OOM as parallel memory
> >initialisation deferred too much memory for initialisation. The likely
> >consumer of this memory was large system hashes that scale with memory
> >size. This patch initialises at least 2G per node but scales the amount
> >initialised for larger systems.
> >
> >Signed-off-by: Mel Gorman <mgorman@...e.de>
> >---
> >  mm/page_alloc.c | 15 +++++++++++++--
> >  1 file changed, 13 insertions(+), 2 deletions(-)
> >
> >diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >index 598f78d6544c..f7cc6c9fb909 100644
> >--- a/mm/page_alloc.c
> >+++ b/mm/page_alloc.c
> >@@ -266,15 +266,16 @@ static inline bool early_page_nid_uninitialised(unsigned long pfn, int nid)
> >   */
> >  static inline bool update_defer_init(pg_data_t *pgdat,
> >  				unsigned long pfn, unsigned long zone_end,
> >+				unsigned long max_initialise,
> >  				unsigned long *nr_initialised)
> >  {
> >  	/* Always populate low zones for address-constrained allocations */
> >  	if (zone_end < pgdat_end_pfn(pgdat))
> >  		return true;
> >
> >-	/* Initialise at least 2G of the highest zone */
> >+	/* Initialise at least the requested amount in the highest zone */
> >  	(*nr_initialised)++;
> >-	if (*nr_initialised > (2UL << (30 - PAGE_SHIFT)) &&
> >+	if ((*nr_initialised > max_initialise) &&
> >  	    (pfn & (PAGES_PER_SECTION - 1)) == 0) {
> >  		pgdat->first_deferred_pfn = pfn;
> >  		return false;
> >@@ -299,6 +300,7 @@ static inline bool early_page_nid_uninitialised(unsigned long pfn, int nid)
> >
> >  static inline bool update_defer_init(pg_data_t *pgdat,
> >  				unsigned long pfn, unsigned long zone_end,
> >+				unsigned long max_initialise,
> >  				unsigned long *nr_initialised)
> >  {
> >  	return true;
> >@@ -4457,11 +4459,19 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
> >  	unsigned long end_pfn = start_pfn + size;
> >  	unsigned long pfn;
> >  	struct zone *z;
> >+	unsigned long max_initialise;
> >  	unsigned long nr_initialised = 0;
> >
> >  	if (highest_memmap_pfn < end_pfn - 1)
> >  		highest_memmap_pfn = end_pfn - 1;
> >
> >+	/*
> >+	 * Initialise at least 2G of a node but also take into account that
> >+	 * large system hashes can take up to an 8th of memory.
> >+	 */
> >+	max_initialise = max(2UL << (30 - PAGE_SHIFT),
> >+			(pgdat->node_spanned_pages >> 3));
> >+
> 
> I think you may be pre-allocating too much memory here. On the 24-TB
> machine, the size of the dentry and inode hash tables were 16G each.
> So the ratio is about 32G/24T = 0.13%. I think a shift
> factor of (>> 8) which is about 0.39% should be more than enough.

I was taking the most pessimistic value possible to match where those
hashes currently get allocated from so that the locality does not change
after the series is applied. Can you try both (>> 3) and (>> 8) and see
whether both work and, if so, what the timing is?

> For the 24TB machine, that means preallocated memory of 96+4G, which
> should be even more than the 64+4G in the modified kernel that I used.
> At the same time, I think we can also set the minimum to 1G
> or even 0.5G for better performance for systems that have many CPUs,
> but not as much memory per node.
> 

I think the benefit there is going to be marginal except maybe on machines
where remote accesses are extremely costly.

-- 
Mel Gorman
SUSE Labs