Message-ID: <20191007080209.GA22997@pc636>
Date:   Mon, 7 Oct 2019 10:02:09 +0200
From:   Uladzislau Rezki <urezki@...il.com>
To:     Daniel Axtens <dja@...ens.net>
Cc:     kasan-dev@...glegroups.com, linux-mm@...ck.org, x86@...nel.org,
        aryabinin@...tuozzo.com, glider@...gle.com, luto@...nel.org,
        linux-kernel@...r.kernel.org, mark.rutland@....com,
        dvyukov@...gle.com, christophe.leroy@....fr,
        linuxppc-dev@...ts.ozlabs.org, gor@...ux.ibm.com
Subject: Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real
 shadow memory

> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index a3c70e275f4e..9fb7a16f42ae 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -690,8 +690,19 @@ merge_or_add_vmap_area(struct vmap_area *va,
>  	struct list_head *next;
>  	struct rb_node **link;
>  	struct rb_node *parent;
> +	unsigned long orig_start, orig_end;
>  	bool merged = false;
>  
> +	/*
> +	 * To manage KASAN vmalloc memory usage, we use this opportunity to
> +	 * clean up the shadow memory allocated to back this allocation.
> +	 * Because a vmalloc shadow page covers several pages, the start or end
> +	 * of an allocation might not align with a shadow page. Use the merging
> +	 * opportunities to try to extend the region we can release.
> +	 */
> +	orig_start = va->va_start;
> +	orig_end = va->va_end;
> +
>  	/*
>  	 * Find a place in the tree where VA potentially will be
>  	 * inserted, unless it is merged with its sibling/siblings.
> @@ -741,6 +752,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
>  		if (sibling->va_end == va->va_start) {
>  			sibling->va_end = va->va_end;
>  
> +			kasan_release_vmalloc(orig_start, orig_end,
> +					      sibling->va_start,
> +					      sibling->va_end);
> +
>  			/* Check and update the tree if needed. */
>  			augment_tree_propagate_from(sibling);
>  
> @@ -754,6 +769,8 @@ merge_or_add_vmap_area(struct vmap_area *va,
>  	}
>  
>  insert:
> +	kasan_release_vmalloc(orig_start, orig_end, va->va_start, va->va_end);
> +
>  	if (!merged) {
>  		link_va(va, root, parent, link, head);
>  		augment_tree_propagate_from(va);
Hello, Daniel.

Looking at it one more time, I think the above part of the code is a bit
wrong and should be separated from the merge_or_add_vmap_area() logic.
The reason is to keep that function simple so it does only what it is
supposed to do: merging or adding.

Also, kasan_release_vmalloc() gets called twice there, which looks like
duplication. Apart from that, merge_or_add_vmap_area() can be called via
the recovery path when the vmap area(s) are not even set up; see the
percpu allocator. A simplified sketch of that path is right below.
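
For reference, here is a rough sketch (not a verbatim excerpt) of the
recovery path in pcpu_get_vm_areas() that I mean: the partially set up
areas are freed through __free_vmap_area(), so merge_or_add_vmap_area()
runs for regions that never had any mapping (and thus no shadow) behind
them:

<snip>
recovery:
	/* Remove previously allocated areas. */
	while (area--) {
		/* ends up in merge_or_add_vmap_area() */
		__free_vmap_area(vas[area]);
		vas[area] = NULL;
	}
<snip>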

I guess your part could be moved directly to __purge_vmap_area_lazy(),
where all vmap areas are lazily freed. To do so, we also need to modify
merge_or_add_vmap_area() to return the merged area:

<snip>
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e92ff5f7dd8b..fecde4312d68 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -683,7 +683,7 @@ insert_vmap_area_augment(struct vmap_area *va,
  * free area is inserted. If VA has been merged, it is
  * freed.
  */
-static __always_inline void
+static __always_inline struct vmap_area *
 merge_or_add_vmap_area(struct vmap_area *va,
        struct rb_root *root, struct list_head *head)
 {
@@ -750,7 +750,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
 
                        /* Free vmap_area object. */
                        kmem_cache_free(vmap_area_cachep, va);
-                       return;
+
+                       /* Point to the new merged area. */
+                       va = sibling;
+                       merged = true;
                }
        }
 
@@ -759,6 +762,8 @@ merge_or_add_vmap_area(struct vmap_area *va,
                link_va(va, root, parent, link, head);
                augment_tree_propagate_from(va);
        }
+
+       return va;
 }
 
 static __always_inline bool
@@ -1172,7 +1177,7 @@ static void __free_vmap_area(struct vmap_area *va)
        /*
         * Merge VA with its neighbors, otherwise just add it.
         */
-       merge_or_add_vmap_area(va,
+       (void) merge_or_add_vmap_area(va,
                &free_vmap_area_root, &free_vmap_area_list);
 }
 
@@ -1279,15 +1284,20 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
        spin_lock(&vmap_area_lock);
        llist_for_each_entry_safe(va, n_va, valist, purge_list) {
                unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
+               unsigned long orig_start = va->va_start;
+               unsigned long orig_end = va->va_end;
 
                /*
                 * Finally insert or merge lazily-freed area. It is
                 * detached and there is no need to "unlink" it from
                 * anything.
                 */
-               merge_or_add_vmap_area(va,
+               va = merge_or_add_vmap_area(va,
                        &free_vmap_area_root, &free_vmap_area_list);
 
+               kasan_release_vmalloc(orig_start,
+                       orig_end, va->va_start, va->va_end);
+
                atomic_long_sub(nr, &vmap_lazy_nr);
 
                if (atomic_long_read(&vmap_lazy_nr) < resched_threshold)
<snip>

--
Vlad Rezki
