Message-ID: <f847fc8c-f875-8d93-9d49-8f03d4c6303a@virtuozzo.com>
Date: Tue, 29 Oct 2019 19:42:57 +0300
From: Andrey Ryabinin <aryabinin@...tuozzo.com>
To: Daniel Axtens <dja@...ens.net>, kasan-dev@...glegroups.com,
linux-mm@...ck.org, x86@...nel.org, glider@...gle.com,
luto@...nel.org, linux-kernel@...r.kernel.org,
mark.rutland@....com, dvyukov@...gle.com, christophe.leroy@....fr
Cc: linuxppc-dev@...ts.ozlabs.org, gor@...ux.ibm.com,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v10 1/5] kasan: support backing vmalloc space with real
shadow memory
On 10/29/19 7:20 AM, Daniel Axtens wrote:
> Hook into vmalloc and vmap, and dynamically allocate real shadow
> memory to back the mappings.
>
> Most mappings in vmalloc space are small, requiring less than a full
> page of shadow space. Allocating a full shadow page per mapping would
> therefore be wasteful. Furthermore, to ensure that different mappings
> use different shadow pages, mappings would have to be aligned to
> KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
>
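(Side note for anyone following the arithmetic: generic KASAN maps 8 bytes
of memory to 1 shadow byte, so the numbers work out as in the sketch below.
This is only an illustration assuming 4 KiB pages and the usual scale shift
of 3; the constants are spelled out locally rather than quoted from the
patch.)

  #define EXAMPLE_PAGE_SIZE               4096UL
  #define KASAN_SHADOW_SCALE_SHIFT        3
  #define KASAN_SHADOW_SCALE_SIZE         (1UL << KASAN_SHADOW_SCALE_SHIFT)

  /* A single-page vmalloc mapping needs only 4096 / 8 = 512 shadow bytes. */
  unsigned long shadow_bytes = EXAMPLE_PAGE_SIZE >> KASAN_SHADOW_SCALE_SHIFT;

  /*
   * Giving every mapping a private shadow page would force aligning
   * mappings to KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE = 8 * 4096 = 32 KiB,
   * which is exactly what sharing the backing pages avoids.
   */
  unsigned long exclusive_align = KASAN_SHADOW_SCALE_SIZE * EXAMPLE_PAGE_SIZE;
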
> Instead, share backing space across multiple mappings. Allocate a
> backing page when a mapping in vmalloc space uses a particular page of
> the shadow region. This page can be shared by other vmalloc mappings
> later on.
>
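(A rough sketch of what "allocate a backing page on first use of a shadow
page" can look like. The helper names and the poison value are my own
illustration, assuming the usual apply_to_page_range() walk over the shadow
of the new mapping; this is not quoted from the patch.)

  static int shadow_populate_pte(pte_t *ptep, unsigned long addr, void *unused)
  {
          unsigned long page;
          pte_t pte;

          /* Another mapping may already have backed this shadow page. */
          if (likely(!pte_none(*ptep)))
                  return 0;

          page = __get_free_page(GFP_KERNEL);
          if (!page)
                  return -ENOMEM;

          /* Poison the whole page; vmalloc unpoisons the parts it uses. */
          memset((void *)page, 0xF8, PAGE_SIZE);
          pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

          spin_lock(&init_mm.page_table_lock);
          if (likely(pte_none(*ptep)))
                  set_pte_at(&init_mm, addr, ptep, pte);
          else
                  free_page(page);        /* raced with another populate */
          spin_unlock(&init_mm.page_table_lock);
          return 0;
  }

  /* Called for the shadow range covering a new vmalloc mapping. */
  static int shadow_populate(unsigned long shadow_start, unsigned long shadow_end)
  {
          return apply_to_page_range(&init_mm, shadow_start,
                                     shadow_end - shadow_start,
                                     shadow_populate_pte, NULL);
  }
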
> We hook in to the vmap infrastructure to lazily clean up unused shadow
> memory.
>
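(Likewise a simplified illustration of the cleanup side, again with my own
naming and without the bookkeeping in the vmap purge path that first has to
prove a whole shadow page no longer covers any live mapping.)

  static int shadow_depopulate_pte(pte_t *ptep, unsigned long addr, void *unused)
  {
          unsigned long page;

          spin_lock(&init_mm.page_table_lock);
          if (!pte_none(*ptep)) {
                  page = (unsigned long)__va(pte_pfn(*ptep) << PAGE_SHIFT);
                  pte_clear(&init_mm, addr, ptep);
                  free_page(page);
          }
          spin_unlock(&init_mm.page_table_lock);
          return 0;
  }

  /* Only called once the purge path has shown the covered range is free. */
  static void shadow_release(unsigned long shadow_start, unsigned long shadow_end)
  {
          apply_to_page_range(&init_mm, shadow_start, shadow_end - shadow_start,
                              shadow_depopulate_pte, NULL);
          flush_tlb_kernel_range(shadow_start, shadow_end);
  }
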
> To avoid the difficulties around swapping mappings around, this code
> expects that the part of the shadow region that covers the vmalloc
> space will not be covered by the early shadow page, but will be left
> unmapped. This will require changes in arch-specific code.
>
> This allows KASAN with VMAP_STACK, and may be helpful for architectures
> that do not have a separate module space (e.g. powerpc64, which I am
> currently working on). It also allows relaxing the module alignment
> back to PAGE_SIZE.
>
> Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
> Acked-by: Vasily Gorbik <gor@...ux.ibm.com>
> Co-developed-by: Mark Rutland <mark.rutland@....com>
> Signed-off-by: Mark Rutland <mark.rutland@....com> [shadow rework]
> Signed-off-by: Daniel Axtens <dja@...ens.net>
Small nit below, otherwise looks fine:
Reviewed-by: Andrey Ryabinin <aryabinin@...tuozzo.com>
> static __always_inline bool
> @@ -1196,8 +1201,8 @@ static void free_vmap_area(struct vmap_area *va)
> * Insert/Merge it back to the free tree/list.
> */
> spin_lock(&free_vmap_area_lock);
> - merge_or_add_vmap_area(va,
> - &free_vmap_area_root, &free_vmap_area_list);
> + (void)merge_or_add_vmap_area(va, &free_vmap_area_root,
> + &free_vmap_area_list);
> spin_unlock(&free_vmap_area_lock);
> }
>
..
>
> @@ -3391,8 +3428,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
> * and when pcpu_get_vm_areas() is success.
> */
> while (area--) {
> - merge_or_add_vmap_area(vas[area],
> - &free_vmap_area_root, &free_vmap_area_list);
> + (void)merge_or_add_vmap_area(vas[area], &free_vmap_area_root,
I don't think these (void) casts are necessary.
> + &free_vmap_area_list);
> vas[area] = NULL;
> }
>
>