Date:	Wed, 12 Mar 2008 11:39:03 +0000
From:	Mel Gorman <mel@....ul.ie>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Yinghai Lu <yhlu.kernel@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Christoph Lameter <clameter@....com>,
	kernel list <linux-kernel@...r.kernel.org>,
	Andy Whitcroft <apw@...dowen.org>
Subject: Re: [PATCH] mm: make mem_map allocation continuous.

On (11/03/08 09:14), Ingo Molnar didst pronounce:
> 
> * Yinghai Lu <yhlu.kernel@...il.com> wrote:
> 
> > [PATCH] mm: make mem_map allocation continuous.
> > 
> > vmemmap allocation currently gets
> >  [ffffe20000000000-ffffe200001fffff] PMD ->ffff810001400000 on node 0
> >  [ffffe20000200000-ffffe200003fffff] PMD ->ffff810001800000 on node 0
> >  [ffffe20000400000-ffffe200005fffff] PMD ->ffff810001c00000 on node 0
> >  [ffffe20000600000-ffffe200007fffff] PMD ->ffff810002000000 on node 0
> >  [ffffe20000800000-ffffe200009fffff] PMD ->ffff810002400000 on node 0
> > ...
> > 
> > there is a 2M hole between them.
> > 
> > the root cause is that a usemap (24 bytes) is allocated after every 2M
> > mem_map, which pushes the next vmemmap (2M) to the next 2M alignment.
> > 
> > solution:
> > try to allocate mem_map continuously.
> > 
> > after the patch, we get
> >  [ffffe20000000000-ffffe200001fffff] PMD ->ffff810001400000 on node 0
> >  [ffffe20000200000-ffffe200003fffff] PMD ->ffff810001600000 on node 0
> >  [ffffe20000400000-ffffe200005fffff] PMD ->ffff810001800000 on node 0
> >  [ffffe20000600000-ffffe200007fffff] PMD ->ffff810001a00000 on node 0
> >  [ffffe20000800000-ffffe200009fffff] PMD ->ffff810001c00000 on node 0
> > ...
> > and usemaps will share a page because they are allocated continuously too.
> > sparse_early_usemap_alloc: usemap = ffff810024e00000 size = 24
> > sparse_early_usemap_alloc: usemap = ffff810024e00080 size = 24
> > sparse_early_usemap_alloc: usemap = ffff810024e00100 size = 24
> > sparse_early_usemap_alloc: usemap = ffff810024e00180 size = 24
> > ...
> > 
> > so we make the bootmem allocation more compact and use less memory for the usemaps.
> > 
> > Signed-off-by: Yinghai Lu <yhlu.kernel@...il.com>
> 
> very nice fix!
> 

Agreed, good work.
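
The shape of the change, as I read it, is roughly the following (an untested
sketch only; I'm approximating the helper names from mm/sparse.c, so treat it
as an illustration rather than the actual patch): allocate all of the large
mem_map blocks for the present sections in one pass, and only then allocate
the tiny usemaps, so the 2MB vmemmap allocations land back to back.

static __initdata struct page *section_map[NR_MEM_SECTIONS];

void __init sparse_init(void)
{
	unsigned long pnum;
	unsigned long *usemap;

	/* First pass: the large 2MB mem_map allocations, back to back */
	for (pnum = 0; pnum < NR_MEM_SECTIONS; pnum++) {
		if (!present_section_nr(pnum))
			continue;
		section_map[pnum] = sparse_early_mem_map_alloc(pnum);
	}

	/* Second pass: the 24-byte usemaps, which now pack into shared pages */
	for (pnum = 0; pnum < NR_MEM_SECTIONS; pnum++) {
		if (!present_section_nr(pnum) || !section_map[pnum])
			continue;

		usemap = sparse_early_usemap_alloc(pnum);
		if (!usemap)
			continue;

		sparse_init_one_section(__nr_to_section(pnum), pnum,
					section_map[pnum], usemap);
	}
}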

> i suspect this patch should go via -mm.
> 
> >  	usemap = alloc_bootmem_node(NODE_DATA(nid), usemap_size());
> > +	printk(KERN_INFO "sparse_early_usemap_alloc: usemap = %p size = %ld\n", usemap, usemap_size());
> 
> this should be in a separate patch.
> 

Should this be KERN_DEBUG instead of KERN_INFO?
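
i.e. something along these lines (the same line as in the patch, only with
the log level changed; illustrative only):

	printk(KERN_DEBUG "sparse_early_usemap_alloc: usemap = %p size = %ld\n",
	       usemap, usemap_size());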

I don't have the original mail because I got unsubscribed from the lists
a few days ago and didn't notice (I have been having mail issues), so
pardon the awkward cut & pastes.

> +/* section_map pointer array is 64k */
> +static __initdata struct page *section_map[NR_MEM_SECTIONS];

The size of this varies depending on the architecture, so the comment may
be misleading.  Maybe a comment like the following would be better?

/*
 * The portions of the mem_map used by SPARSEMEM are allocated in
 * batch and temporarily stored in this array. When sparse_init()
 * completes, the array is discarded.
 */

I can see why you file-scoped it because it's too large for the stack, but
would it be better to allocate it from bootmem instead? Bootmem is
available by the time you need to use the array.
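
Something along these lines is what I have in mind (untested, just to
illustrate; the exact free_bootmem() arguments may need checking):

	struct page **section_map;
	unsigned long size = sizeof(struct page *) * NR_MEM_SECTIONS;

	section_map = alloc_bootmem(size);

	/* ... the two allocation passes, using section_map[] ... */

	free_bootmem(__pa(section_map), size);

That keeps the temporary array out of the kernel image and gives the memory
back once sparse_init() has finished with it.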

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
