Date:	Fri, 25 Apr 2014 13:01:33 -0400
From:	Mark Salter <msalter@...hat.com>
To:	Marc Zyngier <marc.zyngier@....com>
Cc:	Christoffer Dall <christoffer.dall@...aro.org>,
	kvmarm@...ts.cs.columbia.edu,
	Russell King <linux@....linux.org.uk>,
	linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
	kvm@...r.kernel.org
Subject:	Re: [PATCH] arm: KVM: fix possible misalignment of PGDs and bounce page

Ping. Can we get this into 3.15-rc as a bug fix? Anything I can help
with?

--Mark

On Sat, 2014-03-29 at 14:01 +0000, Marc Zyngier wrote:
> On 2014-03-28 19:37, Christoffer Dall wrote:
> > On Fri, Mar 28, 2014 at 10:25:19AM -0400, Mark Salter wrote:
> >> The kvm/mmu code shared by arm and arm64 uses kmalloc() to allocate
> >> a bounce page (if the hypervisor init code crosses a page boundary) and
> >> the hypervisor PGDs. The problem is that kmalloc() does not guarantee
> >> the proper alignment. In the case of the bounce page, the page-sized
> >> buffer allocated may itself cross a page boundary, negating its purpose
> >> and leading to a hang during KVM initialization. Likewise, the PGDs
> >> allocated may not meet the minimum alignment requirements of the
> >> underlying MMU. This patch uses __get_free_page()/__get_free_pages() to
> >> guarantee the worst-case alignment needs of the bounce page and PGDs
> >> on both arm and arm64.
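
[A minimal sketch of the allocation pattern relied on here, for illustration
only and not part of the patch: the buddy allocator returns naturally aligned
blocks, so a single page from __get_free_page() can never straddle a page
boundary, and a block from __get_free_pages() is aligned to its full 2^order
size, which covers the PGD's alignment requirement. The function name
alignment_sketch below is hypothetical.]

#include <linux/init.h>
#include <linux/mm.h>
#include <asm/pgtable.h>

/* Illustrative only: page-allocator blocks are naturally aligned,
 * unlike kmalloc() buffers. */
static int __init alignment_sketch(void)
{
	int order = get_order(PTRS_PER_PGD * sizeof(pgd_t));
	void *bounce = (void *)__get_free_page(GFP_KERNEL);
	pgd_t *pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);

	/* A single buddy page always starts on a page boundary. */
	WARN_ON(bounce && !IS_ALIGNED((unsigned long)bounce, PAGE_SIZE));

	free_pages((unsigned long)pgd, order);
	free_page((unsigned long)bounce);
	return 0;
}
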
> >>
> >> Signed-off-by: Mark Salter <msalter@...hat.com>
> >> ---
> >>  arch/arm/kvm/mmu.c | 15 +++++++++------
> >>  1 file changed, 9 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> >> index 7789857..575d790 100644
> >> --- a/arch/arm/kvm/mmu.c
> >> +++ b/arch/arm/kvm/mmu.c
> >> @@ -42,6 +42,8 @@ static unsigned long hyp_idmap_start;
> >>  static unsigned long hyp_idmap_end;
> >>  static phys_addr_t hyp_idmap_vector;
> >>
> >> +#define pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
> >> +
> >>  #define kvm_pmd_huge(_x)	(pmd_huge(_x) || pmd_trans_huge(_x))
> >>
> >>  static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
> >> @@ -199,14 +201,14 @@ void free_boot_hyp_pgd(void)
> >>  	if (boot_hyp_pgd) {
> >>  		unmap_range(NULL, boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE);
> >>  		unmap_range(NULL, boot_hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
> >> -		kfree(boot_hyp_pgd);
> >> +		free_pages((unsigned long)boot_hyp_pgd, pgd_order);
> >>  		boot_hyp_pgd = NULL;
> >>  	}
> >>
> >>  	if (hyp_pgd)
> >>  		unmap_range(NULL, hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
> >>
> >> -	kfree(init_bounce_page);
> >> +	free_page((unsigned long)init_bounce_page);
> >>  	init_bounce_page = NULL;
> >>
> >>  	mutex_unlock(&kvm_hyp_pgd_mutex);
> >> @@ -236,7 +238,7 @@ void free_hyp_pgds(void)
> >>  		for (addr = VMALLOC_START; is_vmalloc_addr((void*)addr); addr += PGDIR_SIZE)
> >>  			unmap_range(NULL, hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
> >>
> >> -		kfree(hyp_pgd);
> >> +		free_pages((unsigned long)hyp_pgd, pgd_order);
> >>  		hyp_pgd = NULL;
> >>  	}
> >>
> >> @@ -930,7 +932,7 @@ int kvm_mmu_init(void)
> >>  		size_t len = __hyp_idmap_text_end - __hyp_idmap_text_start;
> >>  		phys_addr_t phys_base;
> >>
> >> -		init_bounce_page = kmalloc(PAGE_SIZE, GFP_KERNEL);
> >> +		init_bounce_page = (void *)__get_free_page(GFP_KERNEL);
> >>  		if (!init_bounce_page) {
> >>  			kvm_err("Couldn't allocate HYP init bounce page\n");
> >>  			err = -ENOMEM;
> >> @@ -956,8 +958,9 @@ int kvm_mmu_init(void)
> >>  			 (unsigned long)phys_base);
> >>  	}
> >>
> >> -	hyp_pgd = kzalloc(PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL);
> >> -	boot_hyp_pgd = kzalloc(PTRS_PER_PGD * sizeof(pgd_t), GFP_KERNEL);
> >> +	hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, pgd_order);
> >> +	boot_hyp_pgd = (pgd_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, pgd_order);
> >> +
> >>  	if (!hyp_pgd || !boot_hyp_pgd) {
> >>  		kvm_err("Hyp mode PGD not allocated\n");
> >>  		err = -ENOMEM;
> >> --
> >> 1.8.5.3
> >>
> > This looks right to me.  Funnily enough I seem to remember a discussion
> > from when we originally merged this code where someone (maybe me) argued
> > that kmalloc() would align to the size of the allocation, but I don't
> > see anything backing this up at this point.
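
[For context, a hedged sketch rather than anything from the thread: what the
slab allocators guaranteed at the time was ARCH_KMALLOC_MINALIGN, not
alignment to the allocation size, so a PAGE_SIZE kmalloc() buffer is free to
start mid-page. The function name kmalloc_alignment_sketch is hypothetical.]

#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/printk.h>

/* Sketch only: kmalloc()'s guarantee is ARCH_KMALLOC_MINALIGN, so a
 * PAGE_SIZE buffer may legitimately straddle a page boundary. */
static void kmalloc_alignment_sketch(void)
{
	void *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);

	if (buf && !IS_ALIGNED((unsigned long)buf, PAGE_SIZE))
		pr_info("PAGE_SIZE kmalloc() buffer starts mid-page\n");
	kfree(buf);
}
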
> >
> > So:
> >
> > Acked-by: Christoffer Dall <christoffer.dall@...aro.org>
> >
> > If Marc agrees I can queue this for -rc1.
> 
> Looks good to me indeed.
> 
> Acked-by: Marc Zyngier <marc.zyngier@....com>
> 
>          M.


