Message-ID: <alpine.DEB.2.02.1401051756090.8667@kaball.uk.xensource.com>
Date:	Sun, 5 Jan 2014 17:56:16 +0000
From:	Stefano Stabellini <stefano.stabellini@...citrix.com>
To:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
CC:	<xen-devel@...ts.xenproject.org>, <linux-kernel@...r.kernel.org>,
	<boris.ostrovsky@...cle.com>, <stefano.stabellini@...citrix.com>,
	<david.vrabel@...rix.com>, <hpa@...or.com>
Subject: Re: [PATCH v13 06/19] xen/mmu: Cleanup xen_pagetable_p2m_copy a
 bit.

On Fri, 3 Jan 2014, Konrad Rzeszutek Wilk wrote:
> Stefano noticed that the code runs only under 64-bit, so
> the comments about 32-bit are pointless.
> 
> We also rework the check for xen_revector_p2m_tree returning
> the same value, either because it could not allocate a swath
> of space to put the new P2M in or because it had already been
> called once. In that case we now return early from the function.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>

Acked-by: Stefano Stabellini <stefano.stabellini@...citrix.com>
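
For reference, the cleanup follows the common guard-clause refactor: invert
the condition, return early on the failure path, and un-nest the main body.
A minimal sketch of the pattern in plain C (do_work_* and can_proceed are
hypothetical stand-ins, not the kernel functions):

    #include <stdbool.h>

    static bool can_proceed(void) { return true; }  /* stand-in predicate */

    /* Before: the whole body sits one level deep under the positive test. */
    static void do_work_nested(void)
    {
            if (can_proceed()) {
                    /* ... the real work, all indented ... */
            } else
                    return;
    }

    /* After: bail out on the negated test; the body loses a level of
     * indentation and the dangling else disappears. */
    static void do_work_early_return(void)
    {
            if (!can_proceed())
                    return;
            /* ... the same work, now at the top level ... */
    }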


>  arch/x86/xen/mmu.c | 40 ++++++++++++++++++++--------------------
>  1 file changed, 20 insertions(+), 20 deletions(-)
> 
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index c140eff..9d74249 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -1209,29 +1209,29 @@ static void __init xen_pagetable_p2m_copy(void)
>  
>  	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
>  
> -	/* On 32-bit, we get zero so this never gets executed. */
>  	new_mfn_list = xen_revector_p2m_tree();
> -	if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
> -		/* using __ka address and sticking INVALID_P2M_ENTRY! */
> -		memset((void *)xen_start_info->mfn_list, 0xff, size);
> -
> -		/* We should be in __ka space. */
> -		BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
> -		addr = xen_start_info->mfn_list;
> -		/* We roundup to the PMD, which means that if anybody at this stage is
> -		 * using the __ka address of xen_start_info or xen_start_info->shared_info
> -		 * they are in going to crash. Fortunatly we have already revectored
> -		 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
> -		size = roundup(size, PMD_SIZE);
> -		xen_cleanhighmap(addr, addr + size);
> -
> -		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> -		memblock_free(__pa(xen_start_info->mfn_list), size);
> -		/* And revector! Bye bye old array */
> -		xen_start_info->mfn_list = new_mfn_list;
> -	} else
> +	/* No memory or already called. */
> +	if (!new_mfn_list || new_mfn_list == xen_start_info->mfn_list)
>  		return;
>  
> +	/* using __ka address and sticking INVALID_P2M_ENTRY! */
> +	memset((void *)xen_start_info->mfn_list, 0xff, size);
> +
> +	/* We should be in __ka space. */
> +	BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
> +	addr = xen_start_info->mfn_list;
> +	/* We round up to the PMD, which means that if anybody at this stage is
> +	 * using the __ka address of xen_start_info or xen_start_info->shared_info
> +	 * they are going to crash. Fortunately we have already revectored
> +	 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
> +	size = roundup(size, PMD_SIZE);
> +	xen_cleanhighmap(addr, addr + size);
> +
> +	size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
> +	memblock_free(__pa(xen_start_info->mfn_list), size);
> +	/* And revector! Bye bye old array */
> +	xen_start_info->mfn_list = new_mfn_list;
> +
>  	/* At this stage, cleanup_highmap has already cleaned __ka space
>  	 * from _brk_limit way up to the max_pfn_mapped (which is the end of
>  	 * the ramdisk). We continue on, erasing PMD entries that point to page
> -- 
> 1.8.3.1
> 
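To make the size arithmetic in the hunk concrete: PAGE_ALIGN yields the actual
allocation size of the MFN list, xen_cleanhighmap needs a PMD-granular span,
and memblock_free must get the real allocation size back, which is why size is
recomputed at the end. A self-contained sketch (PAGE_SIZE, PMD_SIZE and
ALIGN_UP are illustrative stand-ins for the kernel macros, and nr_pages is an
arbitrary example value):

    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096UL
    #define PMD_SIZE  (2UL * 1024 * 1024)           /* 2 MiB on x86-64 */
    #define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

    int main(void)
    {
            unsigned long nr_pages = 100000;        /* hypothetical guest size */
            unsigned long size;
            unsigned long probe[4];

            /* memset with 0xff makes every unsigned long slot ~0UL, which is
             * what "sticking INVALID_P2M_ENTRY" in the comment relies on. */
            memset(probe, 0xff, sizeof(probe));
            printf("0xff-filled entry: %#lx\n", probe[0]);

            /* size = PAGE_ALIGN(nr_pages * sizeof(unsigned long)) */
            size = ALIGN_UP(nr_pages * sizeof(unsigned long), PAGE_SIZE);
            printf("page-aligned size: %lu\n", size);       /* 802816 */

            /* xen_cleanhighmap() works at PMD granularity, so round up. */
            size = ALIGN_UP(size, PMD_SIZE);
            printf("PMD-rounded span:  %lu\n", size);       /* 2097152 */

            /* memblock_free() needs the original allocation size, hence the
             * second PAGE_ALIGN in the patch. */
            size = ALIGN_UP(nr_pages * sizeof(unsigned long), PAGE_SIZE);
            printf("freed size:        %lu\n", size);       /* 802816 */
            return 0;
    }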
