Message-ID: <87pnkiu5ta.fsf@dja-thinkpad.axtens.net>
Date:   Tue, 03 Sep 2019 00:32:49 +1000
From:   Daniel Axtens <dja@...ens.net>
To:     Mark Rutland <mark.rutland@....com>
Cc:     kasan-dev@...glegroups.com, linux-mm@...ck.org, x86@...nel.org,
        aryabinin@...tuozzo.com, glider@...gle.com, luto@...nel.org,
        linux-kernel@...r.kernel.org, dvyukov@...gle.com,
        christophe.leroy@....fr, linuxppc-dev@...ts.ozlabs.org,
        gor@...ux.ibm.com
Subject: Re: [PATCH v6 1/5] kasan: support backing vmalloc space with real shadow memory

Hi Mark,

>> +static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
>> +					void *unused)
>> +{
>> +	unsigned long page;
>> +
>> +	page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT);
>> +
>> +	spin_lock(&init_mm.page_table_lock);
>> +
>> +	if (likely(!pte_none(*ptep))) {
>> +		pte_clear(&init_mm, addr, ptep);
>> +		free_page(page);
>> +	}
>> +	spin_unlock(&init_mm.page_table_lock);
>> +
>> +	return 0;
>> +}
>
> There needs to be TLB maintenance after unmapping the page, but I don't
> see that happening below.
>
> We need that to ensure that errant accesses don't hit the page we're
> freeing and that new mappings at the same VA don't cause a TLB conflict
> or TLB amalgamation issue.

Darn it, I knew there was something I forgot to do! I thought of that
over the weekend, didn't write it down, and then forgot it when I went
to respin the patches. You're totally right.
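
For the respin I'm planning something along these lines in
kasan_release_vmalloc() - only a sketch, not yet tested, and I still
need to convince myself that flush_tlb_kernel_range() is the right
primitive for this on every architecture we care about:

	if (shadow_end > shadow_start) {
		apply_to_page_range(&init_mm, (unsigned long)shadow_start,
				    (unsigned long)(shadow_end - shadow_start),
				    kasan_depopulate_vmalloc_pte, NULL);
		/*
		 * Flush the shadow range we just depopulated so that stale
		 * TLB entries can't hit the freed pages, and so that new
		 * mappings at the same shadow addresses don't conflict.
		 */
		flush_tlb_kernel_range((unsigned long)shadow_start,
				       (unsigned long)shadow_end);
	}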

>
>> +/*
>> + * Release the backing for the vmalloc region [start, end), which
>> + * lies within the free region [free_region_start, free_region_end).
>> + *
>> + * This can be run lazily, long after the region was freed. It runs
>> + * under vmap_area_lock, so it's not safe to interact with the vmalloc/vmap
>> + * infrastructure.
>> + */
>
> IIUC we aim to only free non-shared shadow by aligning the start
> upwards, and aligning the end downwards. I think it would be worth
> mentioning that explicitly in the comment since otherwise it's not
> obvious how we handle races between alloc/free.
>

Oh, I will need to think through that more carefully.

I think vmap_area_lock is what protects us against alloc/free races.
The alignment does work roughly as you've described, and while it is
important for correctness, I'm not sure I'd say it prevents races by
itself. I will double-check my understanding of vmap_area_lock, and I
agree the comment needs to be much clearer.
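
To make sure we're describing the same thing, here's a toy walkthrough
of how I intend the alignment to behave. This is ordinary userspace C,
not kernel code; it assumes PAGE_SIZE = 4096 and
KASAN_SHADOW_SCALE_SIZE = 8 (so one shadow page covers 32K of vmalloc
space), and the addresses are made up:

/*
 * Toy walkthrough of the alignment logic, not kernel code. Assumes one
 * shadow page covers 0x8000 bytes of vmalloc space; addresses are
 * offsets into an imaginary free region, chosen for illustration.
 */
#include <stdio.h>

#define SHADOW_SPAN		0x8000UL /* PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE */
#define ALIGN_UP(x, a)		(((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

int main(void)
{
	/* free region [0x0, 0x20000); region being released [0x5000, 0xd000) */
	unsigned long free_region_start = 0x0, free_region_end = 0x20000;
	unsigned long start = 0x5000, end = 0xd000;

	/* shadow pages entirely covered by [start, end): none here */
	unsigned long region_start = ALIGN_UP(start, SHADOW_SPAN);   /* 0x8000 */
	unsigned long region_end = ALIGN_DOWN(end, SHADOW_SPAN);     /* 0x8000 */

	/* extend downwards: the free region covers all of [0x0, 0x8000) */
	free_region_start = ALIGN_UP(free_region_start, SHADOW_SPAN);
	if (start != region_start && free_region_start < region_start)
		region_start -= SHADOW_SPAN;                          /* 0x0 */

	/* extend upwards: the free region covers all of [0x8000, 0x10000) */
	free_region_end = ALIGN_DOWN(free_region_end, SHADOW_SPAN);
	if (end != region_end && free_region_end > region_end)
		region_end += SHADOW_SPAN;                            /* 0x10000 */

	printf("release shadow for [%#lx, %#lx)\n", region_start, region_end);
	return 0;
}

With those inputs both partial shadow pages lie entirely within the
free region, so we release shadow for [0x0, 0x10000). If the free
region instead ended at, say, 0xe000, the upper shadow page would be
left alone, since something else could still be using the rest of it.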

Once again, thanks for your patience and thoughtful review.

Regards,
Daniel

> Thanks,
> Mark.
>
>> +void kasan_release_vmalloc(unsigned long start, unsigned long end,
>> +			   unsigned long free_region_start,
>> +			   unsigned long free_region_end)
>> +{
>> +	void *shadow_start, *shadow_end;
>> +	unsigned long region_start, region_end;
>> +
>> +	/* we start with shadow entirely covered by this region */
>> +	region_start = ALIGN(start, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
>> +	region_end = ALIGN_DOWN(end, PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
>> +
>> +	/*
>> +	 * We don't want to extend the region we release to the entire free
>> +	 * region, as the free region might cover huge chunks of vmalloc space
>> +	 * where we never allocated anything. We just want to see if we can
>> +	 * extend the [start, end) range: if start or end fall part way through
>> +	 * a shadow page, we want to check if we can free that entire page.
>> +	 */
>> +
>> +	free_region_start = ALIGN(free_region_start,
>> +				  PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
>> +
>> +	if (start != region_start &&
>> +	    free_region_start < region_start)
>> +		region_start -= PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE;
>> +
>> +	free_region_end = ALIGN_DOWN(free_region_end,
>> +				     PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE);
>> +
>> +	if (end != region_end &&
>> +	    free_region_end > region_end)
>> +		region_end += PAGE_SIZE * KASAN_SHADOW_SCALE_SIZE;
>> +
>> +	shadow_start = kasan_mem_to_shadow((void *)region_start);
>> +	shadow_end = kasan_mem_to_shadow((void *)region_end);
>> +
>> +	if (shadow_end > shadow_start)
>> +		apply_to_page_range(&init_mm, (unsigned long)shadow_start,
>> +				    (unsigned long)(shadow_end - shadow_start),
>> +				    kasan_depopulate_vmalloc_pte, NULL);
>> +}
