Message-ID: <20171003144845.GD4931@leverpostej>
Date:   Tue, 3 Oct 2017 15:48:46 +0100
From:   Mark Rutland <mark.rutland@....com>
To:     Pavel Tatashin <pasha.tatashin@...cle.com>, will.deacon@....com,
        catalin.marinas@....com
Cc:     linux-kernel@...r.kernel.org, sparclinux@...r.kernel.org,
        linux-mm@...ck.org, linuxppc-dev@...ts.ozlabs.org,
        linux-s390@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
        x86@...nel.org, kasan-dev@...glegroups.com, borntraeger@...ibm.com,
        heiko.carstens@...ibm.com, davem@...emloft.net,
        willy@...radead.org, mhocko@...nel.org, ard.biesheuvel@...aro.org,
        sam@...nborg.org, mgorman@...hsingularity.net,
        steven.sistare@...cle.com, daniel.m.jordan@...cle.com,
        bob.picco@...cle.com
Subject: Re: [PATCH v9 09/12] mm/kasan: kasan specific map populate function

Hi Pavel,

On Wed, Sep 20, 2017 at 04:17:11PM -0400, Pavel Tatashin wrote:
> During early boot, kasan uses vmemmap_populate() to establish its shadow
> memory. But that interface is intended for populating struct pages.
> 
> With this series, vmemmap memory is no longer zeroed during allocation,
> but kasan expects that memory to be zeroed. Add a new
> kasan_map_populate() function to resolve this difference.

Thanks for putting this together.

I've given this a spin on arm64, and can confirm that it works.

Given that this involves redundant walking of page tables, I still think
it'd be preferable to have some common *_populate() helper that took a
gfp argument, but I guess it's not the end of the world.
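
To sketch what I mean (name and signature below are hypothetical, not
an existing kernel interface):

	/*
	 * Hypothetical only: a variant of vmemmap_populate() taking gfp
	 * flags, so callers like kasan could pass __GFP_ZERO and get
	 * zeroed backing pages at allocation time, rather than
	 * re-walking the page tables to zero the memory afterwards.
	 */
	int __meminit vmemmap_populate_gfp(unsigned long start,
					   unsigned long end, int node,
					   gfp_t gfp_mask);

	/* kasan_map_populate() would then reduce to something like: */
	return vmemmap_populate_gfp(start, end, node,
				    GFP_KERNEL | __GFP_ZERO);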

I'll leave it to Will and Catalin to say whether they're happy with the
page table walking and the new p{u,m}d_large() helpers added to arm64.

Thanks,
Mark.

> 
> Signed-off-by: Pavel Tatashin <pasha.tatashin@...cle.com>
> ---
>  arch/arm64/include/asm/pgtable.h |  3 ++
>  include/linux/kasan.h            |  2 ++
>  mm/kasan/kasan_init.c            | 67 ++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 72 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index bc4e92337d16..d89713f04354 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -381,6 +381,9 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
>  				 PUD_TYPE_TABLE)
>  #endif
>  
> +#define pmd_large(pmd)		pmd_sect(pmd)
> +#define pud_large(pud)		pud_sect(pud)
> +
>  static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>  {
>  	*pmdp = pmd;
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index a5c7046f26b4..7e13df1722c2 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -78,6 +78,8 @@ size_t kasan_metadata_size(struct kmem_cache *cache);
>  
>  bool kasan_save_enable_multi_shot(void);
>  void kasan_restore_multi_shot(bool enabled);
> +int __meminit kasan_map_populate(unsigned long start, unsigned long end,
> +				 int node);
>  
>  #else /* CONFIG_KASAN */
>  
> diff --git a/mm/kasan/kasan_init.c b/mm/kasan/kasan_init.c
> index 554e4c0f23a2..57a973f05f63 100644
> --- a/mm/kasan/kasan_init.c
> +++ b/mm/kasan/kasan_init.c
> @@ -197,3 +197,70 @@ void __init kasan_populate_zero_shadow(const void *shadow_start,
>  		zero_p4d_populate(pgd, addr, next);
>  	} while (pgd++, addr = next, addr != end);
>  }
> +
> +/* Creates mappings for kasan during early boot. The mapped memory is zeroed */
> +int __meminit kasan_map_populate(unsigned long start, unsigned long end,
> +				 int node)
> +{
> +	unsigned long addr, pfn, next;
> +	unsigned long long size;
> +	pgd_t *pgd;
> +	p4d_t *p4d;
> +	pud_t *pud;
> +	pmd_t *pmd;
> +	pte_t *pte;
> +	int ret;
> +
> +	ret = vmemmap_populate(start, end, node);
> +	/*
> +	 * We might have partially populated memory, so check for no entries,
> +	 * and zero only those that actually exist.
> +	 */
> +	for (addr = start; addr < end; addr = next) {
> +		pgd = pgd_offset_k(addr);
> +		if (pgd_none(*pgd)) {
> +			next = pgd_addr_end(addr, end);
> +			continue;
> +		}
> +
> +		p4d = p4d_offset(pgd, addr);
> +		if (p4d_none(*p4d)) {
> +			next = p4d_addr_end(addr, end);
> +			continue;
> +		}
> +
> +		pud = pud_offset(p4d, addr);
> +		if (pud_none(*pud)) {
> +			next = pud_addr_end(addr, end);
> +			continue;
> +		}
> +		if (pud_large(*pud)) {
> +			/* This is PUD size page */
> +			next = pud_addr_end(addr, end);
> +			size = PUD_SIZE;
> +			pfn = pud_pfn(*pud);
> +		} else {
> +			pmd = pmd_offset(pud, addr);
> +			if (pmd_none(*pmd)) {
> +				next = pmd_addr_end(addr, end);
> +				continue;
> +			}
> +			if (pmd_large(*pmd)) {
> +				/* This is PMD size page */
> +				next = pmd_addr_end(addr, end);
> +				size = PMD_SIZE;
> +				pfn = pmd_pfn(*pmd);
> +			} else {
> +				pte = pte_offset_kernel(pmd, addr);
> +				next = addr + PAGE_SIZE;
> +				if (pte_none(*pte))
> +					continue;
> +				/* This is base size page */
> +				size = PAGE_SIZE;
> +				pfn = pte_pfn(*pte);
> +			}
> +		}
> +		memset(phys_to_virt(PFN_PHYS(pfn)), 0, size);
> +	}
> +	return ret;
> +}
> -- 
> 2.14.1
> 
