Date:	Mon, 1 Oct 2012 12:00:26 +0100
From:	Stefano Stabellini <stefano.stabellini@...citrix.com>
To:	Yinghai Lu <yinghai@...nel.org>
CC:	Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...e.hu>,
	"H. Peter Anvin" <hpa@...or.com>, Jacob Shin <jacob.shin@....com>,
	Tejun Heo <tj@...nel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Subject: Re: [PATCH 04/13] x86, mm: Revert back good_end setting for 64bit

On Sun, 30 Sep 2012, Yinghai Lu wrote:
> After
> 
> | commit 8548c84da2f47e71bbbe300f55edb768492575f7
> | Author: Takashi Iwai <tiwai@...e.de>
> | Date:   Sun Oct 23 23:19:12 2011 +0200
> |
> |    x86: Fix S4 regression
> |
> |    Commit 4b239f458 ("x86-64, mm: Put early page table high") causes a S4
> |    regression since 2.6.39, namely the machine reboots occasionally at S4
> |    resume.  It doesn't always happen; the overall rate is about 1/20.  But,
> |    like other bugs, once it happens, it keeps happening.
> |
> |    This patch fixes the problem by essentially reverting the memory
> |    assignment in the older way.
> 
> That change keeps some page tables around 512M again, which prevents kdump
> from finding 512M of free memory under 768M.
> 
> We need to revert that revert, so that we can put the page tables high again
> on 64-bit.
> 
> Takashi agreed that the S4 regression could have been caused by something else.
> 
> 	https://lkml.org/lkml/2012/6/15/182
> 
> Signed-off-by: Yinghai Lu <yinghai@...nel.org>
> ---
>  arch/x86/mm/init.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index 9f69180..aadb154 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -76,8 +76,8 @@ static void __init find_early_table_space(struct map_range *mr,
>  #ifdef CONFIG_X86_32
>  	/* for fixmap */
>  	tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE);
> -#endif
>  	good_end = max_pfn_mapped << PAGE_SHIFT;
> +#endif
>  
>  	base = memblock_find_in_range(start, good_end, tables, PAGE_SIZE);
>  	if (!base)

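For reference, a minimal userspace sketch of what the hunk changes (mine, not
part of the patch; the addresses are made-up examples, and it assumes good_end
is initialized to 'end' earlier in find_early_table_space(), as in the v3.6
source): on 64-bit, good_end is no longer capped at
max_pfn_mapped << PAGE_SHIFT, so memblock_find_in_range()'s top-down search
can place the early page tables near the top of the range being mapped.

#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
	/* Made-up example values, for illustration only. */
	unsigned long long end = 0x100000000ULL;         /* mapping up to 4 GiB */
	unsigned long long max_pfn_mapped = 0x20000ULL;  /* 512 MiB already mapped */

	/* Before this patch (after Takashi's fix), both 32-bit and 64-bit
	 * cap the search window at the end of the already-mapped region. */
	unsigned long long good_end_before = max_pfn_mapped << PAGE_SHIFT;

	/* With this patch, only 32-bit applies the cap; 64-bit keeps the
	 * initializer good_end = end, so memblock's top-down search places
	 * the early page tables high, just below 'end'. */
	unsigned long long good_end_after = end;

	printf("search window before: [0, %#llx)\n", good_end_before);
	printf("search window after:  [0, %#llx)\n", good_end_after);
	return 0;
}
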
Isn't this going to cause init_memory_mapping to allocate page-table
pages from memory that is not yet mapped?
Last time I spoke with HPA and Thomas about this, they seemed to agree
that it isn't a very good idea.
Also, it has proven to cause a certain amount of headaches on Xen;
see commit d8aa5ec3382e6a545b8f25178d1e0992d4927f19.
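
To make that concern concrete, a toy model (mine; toy_find_top_down() is an
invented stand-in for memblock_find_in_range()'s top-down behaviour, and the
numbers are made up): with good_end == end, the chosen range can land above
everything mapped so far, so the new page-table pages cannot simply be written
through the existing direct mapping.

#include <stdio.h>

#define PAGE_SHIFT 12

/* Invented stand-in for memblock_find_in_range(): pretend the highest
 * page-aligned block below 'end' is free and pick it top-down. */
static unsigned long long toy_find_top_down(unsigned long long end,
					    unsigned long long size)
{
	return (end - size) & ~0xFFFULL;
}

int main(void)
{
	unsigned long long max_pfn_mapped = 0x20000ULL;  /* 512 MiB mapped so far */
	unsigned long long mapped_end = max_pfn_mapped << PAGE_SHIFT;
	unsigned long long good_end = 0x100000000ULL;    /* == end, per the patch */
	unsigned long long tables = 0x200000ULL;         /* 2 MiB of page tables */

	unsigned long long base = toy_find_top_down(good_end, tables);

	/* Writing the new tables is only straightforward if they fall below
	 * mapped_end; above it they must first be mapped by other means,
	 * which is the step that has caused trouble on Xen. */
	printf("tables at %#llx, mapped up to %#llx: %s\n", base, mapped_end,
	       base + tables <= mapped_end ? "already mapped" : "not yet mapped");
	return 0;
}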