Message-ID: <87sgnzuak0.fsf@dja-thinkpad.axtens.net>
Date: Fri, 11 Oct 2019 16:15:59 +1100
From: Daniel Axtens <dja@...ens.net>
To: Uladzislau Rezki <urezki@...il.com>
Cc: kasan-dev@...glegroups.com, linux-mm@...ck.org, x86@...nel.org,
aryabinin@...tuozzo.com, glider@...gle.com, luto@...nel.org,
linux-kernel@...r.kernel.org, mark.rutland@....com,
dvyukov@...gle.com, christophe.leroy@....fr,
linuxppc-dev@...ts.ozlabs.org, gor@...ux.ibm.com
Subject: Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory

Hi Uladzislau,

> Looking at it once more, I think the above part of the code is a bit
> wrong and should be separated from the merge_or_add_vmap_area() logic.
> The reason is to keep it simple and have it do only what it is supposed
> to do: merging or adding.
>
> Also, kasan_release_vmalloc() gets called twice there, which looks like
> duplication. Apart from that, merge_or_add_vmap_area() can be called via
> the recovery path when the vmap(s) are not even set up. See the percpu
> allocator.
>
> I guess your part could be moved directly into __purge_vmap_area_lazy(),
> where all vmaps are lazily freed. To do so, we also need to modify
> merge_or_add_vmap_area() to return the merged area:
Thanks for the review. I've integrated your snippet - it seems to work
fine, and I agree that it is much simpler and clearer - so I've rolled it
into v9, which I will post soon.

Regards,
Daniel
>
> <snip>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index e92ff5f7dd8b..fecde4312d68 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -683,7 +683,7 @@ insert_vmap_area_augment(struct vmap_area *va,
> * free area is inserted. If VA has been merged, it is
> * freed.
> */
> -static __always_inline void
> +static __always_inline struct vmap_area *
> merge_or_add_vmap_area(struct vmap_area *va,
> struct rb_root *root, struct list_head *head)
> {
> @@ -750,7 +750,10 @@ merge_or_add_vmap_area(struct vmap_area *va,
>
> /* Free vmap_area object. */
> kmem_cache_free(vmap_area_cachep, va);
> - return;
> +
> + /* Point to the new merged area. */
> + va = sibling;
> + merged = true;
> }
> }
>
> @@ -759,6 +762,8 @@ merge_or_add_vmap_area(struct vmap_area *va,
> link_va(va, root, parent, link, head);
> augment_tree_propagate_from(va);
> }
> +
> + return va;
> }
>
> static __always_inline bool
> @@ -1172,7 +1177,7 @@ static void __free_vmap_area(struct vmap_area *va)
> /*
> * Merge VA with its neighbors, otherwise just add it.
> */
> - merge_or_add_vmap_area(va,
> + (void) merge_or_add_vmap_area(va,
> &free_vmap_area_root, &free_vmap_area_list);
> }
>
> @@ -1279,15 +1284,20 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
> spin_lock(&vmap_area_lock);
> llist_for_each_entry_safe(va, n_va, valist, purge_list) {
> unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
> + unsigned long orig_start = va->va_start;
> + unsigned long orig_end = va->va_end;
>
> /*
> * Finally insert or merge lazily-freed area. It is
> * detached and there is no need to "unlink" it from
> * anything.
> */
> - merge_or_add_vmap_area(va,
> + va = merge_or_add_vmap_area(va,
> &free_vmap_area_root, &free_vmap_area_list);
>
> + kasan_release_vmalloc(orig_start,
> + orig_end, va->va_start, va->va_end);
> +
> atomic_long_sub(nr, &vmap_lazy_nr);
>
> if (atomic_long_read(&vmap_lazy_nr) < resched_threshold)
> <snip>
>
> --
> Vlad Rezki
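
For context on the call added in the snippet above: kasan_release_vmalloc()
is handed both the originally freed range (orig_start, orig_end) and the
extents of the merged free area it ended up in (va->va_start, va->va_end),
so it can work out which shadow pages no longer back any in-use mapping.
Below is only a rough sketch of that idea, assuming the usual KASAN scaling
of 8 bytes of address space per shadow byte; SHADOW_COVERED_PER_PAGE and
depopulate_shadow_range() are illustrative names, not the actual mm/kasan
implementation.

<snip>
/*
 * One shadow page covers PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT bytes of
 * vmalloc address space (illustrative name).
 */
#define SHADOW_COVERED_PER_PAGE	(PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)

static void release_vmalloc_shadow_sketch(unsigned long start, unsigned long end,
					  unsigned long free_start, unsigned long free_end)
{
	/*
	 * A shadow page may only be freed if every byte it covers lies
	 * inside the merged free area [free_start, free_end); otherwise a
	 * neighbouring in-use mapping still needs it.
	 */
	unsigned long can_free_start = ALIGN(free_start, SHADOW_COVERED_PER_PAGE);
	unsigned long can_free_end   = ALIGN_DOWN(free_end, SHADOW_COVERED_PER_PAGE);

	/* Only consider shadow pages actually touched by the freed range. */
	can_free_start = max(can_free_start, ALIGN_DOWN(start, SHADOW_COVERED_PER_PAGE));
	can_free_end   = min(can_free_end, ALIGN(end, SHADOW_COVERED_PER_PAGE));

	if (can_free_start < can_free_end)
		depopulate_shadow_range(can_free_start, can_free_end); /* illustrative */
}
<snip>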