Message-ID: <20121022142143.GC14193@konrad-lan.dumpdata.com>
Date: Mon, 22 Oct 2012 10:21:44 -0400
From: Konrad Rzeszutek Wilk <konrad@...nel.org>
To: Yinghai Lu <yinghai@...nel.org>
Cc: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...e.hu>,
"H. Peter Anvin" <hpa@...or.com>, Jacob Shin <jacob.shin@....com>,
Tejun Heo <tj@...nel.org>,
Stefano Stabellini <stefano.stabellini@...citrix.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 02/19] x86, mm: Use big page size for small memory range
On Thu, Oct 18, 2012 at 01:50:12PM -0700, Yinghai Lu wrote:
> We could map small range in the middle of big range at first, so should use
> big page size at first to avoid using small page size to break down page table.
>
> Only can set big page bit when that range has ram area around it.
The code looks good.
I would alter the description to say:

(Describe the problem)
"We are wasting page-table entries because we are not taking advantage
of the fact that adjoining ranges can be of the same type and can be
coalesced together. Instead we end up using the small page size."

(Explain your patch)
"We fix this by iterating over the ranges, detecting whether ranges
that are next to each other are of the same type - and if so, setting
the bigger page size on them."
>
> -v2: fix 32bit boundary checking. We can not count ram above max_low_pfn
> for 32 bit.
>
> Signed-off-by: Yinghai Lu <yinghai@...nel.org>
> ---
> arch/x86/mm/init.c | 37 +++++++++++++++++++++++++++++++++++++
> 1 files changed, 37 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index c12dfd5..09ce38f 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -88,6 +88,40 @@ static int __meminit save_mr(struct map_range *mr, int nr_range,
> return nr_range;
> }
>
> +/*
> + * adjust the page_size_mask for small range to go with
> + * big page size instead small one if nearby are ram too.
> + */
> +static void __init_refok adjust_range_page_size_mask(struct map_range *mr,
> + int nr_range)
> +{
> + int i;
> +
> + for (i = 0; i < nr_range; i++) {
> + if ((page_size_mask & (1<<PG_LEVEL_2M)) &&
> + !(mr[i].page_size_mask & (1<<PG_LEVEL_2M))) {
> + unsigned long start = round_down(mr[i].start, PMD_SIZE);
> + unsigned long end = round_up(mr[i].end, PMD_SIZE);
> +
> +#ifdef CONFIG_X86_32
> + if ((end >> PAGE_SHIFT) > max_low_pfn)
> + continue;
> +#endif
> +
> + if (memblock_is_region_memory(start, end - start))
> + mr[i].page_size_mask |= 1<<PG_LEVEL_2M;
> + }
> + if ((page_size_mask & (1<<PG_LEVEL_1G)) &&
> + !(mr[i].page_size_mask & (1<<PG_LEVEL_1G))) {
> + unsigned long start = round_down(mr[i].start, PUD_SIZE);
> + unsigned long end = round_up(mr[i].end, PUD_SIZE);
> +
> + if (memblock_is_region_memory(start, end - start))
> + mr[i].page_size_mask |= 1<<PG_LEVEL_1G;
> + }
> + }
> +}
> +
> static int __meminit split_mem_range(struct map_range *mr, int nr_range,
> unsigned long start,
> unsigned long end)
> @@ -182,6 +216,9 @@ static int __meminit split_mem_range(struct map_range *mr, int nr_range,
> nr_range--;
> }
>
> + if (!after_bootmem)
> + adjust_range_page_size_mask(mr, nr_range);
> +
> for (i = 0; i < nr_range; i++)
> printk(KERN_DEBUG " [mem %#010lx-%#010lx] page %s\n",
> mr[i].start, mr[i].end - 1,
> --
> 1.7.7
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>