Message-ID: <e378e60a-319b-e9aa-7e30-3e280c4431da@redhat.com>
Date:   Fri, 24 Sep 2021 14:42:22 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Uladzislau Rezki <urezki@...il.com>
Cc:     LKML <linux-kernel@...r.kernel.org>, Ping Fang <pifang@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Roman Gushchin <guro@...com>, Michal Hocko <mhocko@...e.com>,
        Oscar Salvador <osalvador@...e.de>,
        Linux Memory Management List <linux-mm@...ck.org>
Subject: Re: [PATCH v1] mm/vmalloc: fix exact allocations with an alignment > 1

On 22.09.21 12:41, Uladzislau Rezki wrote:
> On Wed, Sep 22, 2021 at 10:34:55AM +0200, David Hildenbrand wrote:
>>>> No, that's leaking implementation details to the caller. And no, increasing
>>>> the range and eventually allocating something bigger (e.g., placing a huge
>>>> page where it might not have been possible) is not acceptable for KASAN.
>>>>
>>>> If you're terribly unhappy with this patch,
>>> Sorry to say, but it simply does not make sense.
>>>
>>
>> Let's agree to disagree.
>>
>> find_vmap_lowest_match() is imprecise now and that's an issue for exact
>> allocations. We can either make it fully precise again (possibly degrading
>> allocation performance) or just special-case exact allocations to fix the
>> regression.
>>
>> I decided to go the easy path and do the latter; I do agree that making
>> find_vmap_lowest_match() fully precise again might be preferable -- right
>> now, other allocations could be failing even though there are still
>> suitable holes.
>>
>> I briefly thought about performing the search in find_vmap_lowest_match()
>> twice. First, start the search without an extended range, and fall back to
>> the extended range if that search fails. Unfortunately, I think that still
>> won't make the function completely precise due to the way we might miss
>> searching some suitable subtrees.
>>
>>>>
>>>> please suggest something reasonable to fix exact allocations:
>>>> a) Fixes the KASAN use case.
>>>> b) Allows for automatic placement of huge pages for exact allocations.
>>>> c) Doesn't leak implementation details into the caller.
>>>>
>>> I am looking at it.
>>
> I am testing this:
> 
> <snip>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index dcf23d16a308..cdf3bda6313d 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1161,18 +1161,14 @@ find_vmap_lowest_match(unsigned long size,
>   {
>   	struct vmap_area *va;
>   	struct rb_node *node;
> -	unsigned long length;
>   
>   	/* Start from the root. */
>   	node = free_vmap_area_root.rb_node;
>   
> -	/* Adjust the search size for alignment overhead. */
> -	length = size + align - 1;
> -
>   	while (node) {
>   		va = rb_entry(node, struct vmap_area, rb_node);
>   
> -		if (get_subtree_max_size(node->rb_left) >= length &&
> +		if (get_subtree_max_size(node->rb_left) >= size &&
>   				vstart < va->va_start) {
>   			node = node->rb_left;
>   		} else {
> @@ -1182,9 +1178,9 @@ find_vmap_lowest_match(unsigned long size,
>   			/*
>   			 * Does not make sense to go deeper towards the right
>   			 * sub-tree if it does not have a free block that is
> -			 * equal or bigger to the requested search length.
> +			 * equal to or bigger than the requested search size.
>   			 */
> -			if (get_subtree_max_size(node->rb_right) >= length) {
> +			if (get_subtree_max_size(node->rb_right) >= size) {
>   				node = node->rb_right;
>   				continue;
>   			}
> @@ -1192,16 +1188,30 @@ find_vmap_lowest_match(unsigned long size,
>   			/*
>   			 * OK. We roll back and find the first right sub-tree,
>   			 * that will satisfy the search criteria. It can happen
> -			 * only once due to "vstart" restriction.
> +			 * due to the "vstart" restriction or an alignment overhead.
>   			 */
>   			while ((node = rb_parent(node))) {
>   				va = rb_entry(node, struct vmap_area, rb_node);
>   				if (is_within_this_va(va, size, align, vstart))
>   					return va;
>   
> -				if (get_subtree_max_size(node->rb_right) >= length &&
> +				if (get_subtree_max_size(node->rb_right) >= size &&
>   						vstart <= va->va_start) {
> +					/*
> +					 * Shift vstart forward, so we do not loop over the same
> +					 * sub-tree back and forth. The aim is to continue the
> +					 * tree scan toward higher addresses, cutting off
> +					 * previously inspected ones.
> +					 *
> +					 * Please note we update vstart with the parent's start
> +					 * address, adding "1", because we do not want to enter
> +					 * the same sub-tree again after it has already been
> +					 * inspected and no suitable free block was found there.
> +					 */
> +					vstart = va->va_start + 1;
>   					node = node->rb_right;
> +
> +					/* Scan a sub-tree rooted at "node". */

So the idea is that once we run into a dead end because we took a left
subtree, we roll back to the next possible right subtree and try again.
If we run into another dead end, we repeat; thus, this can now happen
more than once.

I assume the only implication is that this can now be slower in some 
corner cases with larger alignment, because it might take longer to find 
something suitable. Fair enough.

>   					break;
>   				}
>   			}
> <snip>
> 
> so it handles any alignment and accurately finds the lowest free block when a
> user wants to allocate with a specific alignment value.
> 
> Could you please help and test the KASAN use case?

Just tried it -- works just fine with KASAN and makes sense in general,
thanks!
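
For readers following along, below is a toy user-space model of the
rollback-with-vstart-shift search discussed above. The struct layout,
helpers, and driver are invented for illustration; the real
implementation is find_vmap_lowest_match() in mm/vmalloc.c.

<snip>
#include <stdio.h>

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

struct area {
	unsigned long start, end;	/* free block [start, end) */
	unsigned long subtree_max;	/* largest block size in this subtree */
	struct area *left, *right, *parent;
};

static unsigned long max_size(struct area *n)
{
	return n ? n->subtree_max : 0;
}

/* Can "size" bytes at "align" be placed in "va" at or above "vstart"? */
static int fits(struct area *va, unsigned long size,
		unsigned long align, unsigned long vstart)
{
	unsigned long base = va->start > vstart ? va->start : vstart;

	return ALIGN_UP(base, align) + size <= va->end;
}

static struct area *lowest_match(struct area *node, unsigned long size,
				 unsigned long align, unsigned long vstart)
{
	struct area *va;

	while (node) {
		va = node;
		if (max_size(node->left) >= size && vstart < va->start) {
			node = node->left;
		} else {
			if (fits(va, size, align, vstart))
				return va;
			if (max_size(node->right) >= size) {
				node = node->right;
				continue;
			}
			/*
			 * Dead end: roll back toward the root. Shifting
			 * vstart past an inspected subtree guarantees we
			 * never re-enter it, so the rollback can repeat
			 * without looping forever.
			 */
			while ((node = node->parent)) {
				va = node;
				if (fits(va, size, align, vstart))
					return va;
				if (max_size(node->right) >= size &&
				    vstart <= va->start) {
					vstart = va->start + 1;
					node = node->right;
					break;
				}
			}
		}
	}
	return NULL;
}

int main(void)
{
	/* a (12 KiB, misaligned) <- b (4 KiB) -> c (64 KiB, aligned) */
	struct area a = { .start = 0x1000,  .end = 0x4000,  .subtree_max = 0x3000  };
	struct area b = { .start = 0x10000, .end = 0x11000, .subtree_max = 0x10000 };
	struct area c = { .start = 0x20000, .end = 0x30000, .subtree_max = 0x10000 };
	struct area *r;

	b.left = &a; b.right = &c;
	a.parent = &b; c.parent = &b;

	/* 8 KiB at 64 KiB alignment: the search first descends into "a",
	 * which is big enough by size but unusable at this alignment,
	 * dead-ends, then rolls back through "b" and finds "c". */
	r = lowest_match(&b, 0x2000, 0x10000, 0);
	printf("found block at %#lx\n", r ? r->start : 0UL);
	return 0;
}
<snip>

The driver exercises exactly the corner case from the patch: the size
pre-check sends the search into a subtree that cannot satisfy the
alignment, and the rollback recovers by cutting that subtree off via
vstart and continuing toward higher addresses.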

-- 
Thanks,

David / dhildenb
