Message-ID: <20160118041440.GA415@swordfish>
Date: Mon, 18 Jan 2016 13:14:40 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: Junil Lee <junil0814.lee@....com>
Cc: minchan@...nel.org, ngupta@...are.org,
sergey.senozhatsky.work@...il.com, akpm@...ux-foundation.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
sergey.senozhatsky@...il.com, Vlastimil Babka <vbabka@...e.cz>
Subject: Re: [PATCH v3] zsmalloc: fix migrate_zspage-zs_free race condition
Cc Vlastimil,
Hello,
On (01/18/16 10:15), Junil Lee wrote:
> To prevent unlocking in the wrong place, tag the new obj so that the
> lock is kept in migrate_zspage() until the proper unlock path.
>
> The two functions race through the tag, which sets 1 on the last bit of
> obj: the handle is unlocked prematurely when the new obj is written to
> it before unpin_tag(), the proper unlock path, is called.
>
> summarize this problem by call flow as below:
>
> CPU0                                        CPU1
> migrate_zspage
>  find_alloced_obj()
>   trypin_tag() -- obj |= HANDLE_PIN_BIT
>  obj_malloc() -- new obj is not set         zs_free
>  record_obj() -- unlock and break sync       pin_tag() -- get lock
>                                              unpin_tag()
Junil, can something like this be a bit simpler problem description?
---
record_obj() in migrate_zspage() does not preserve handle's
HANDLE_PIN_BIT, set by find_alloced_obj()->trypin_tag(), and
implicitly (accidentally) un-pins the handle, while migrate_zspage()
still performs an explicit unpin_tag() on that handle.
This additional explicit unpin_tag() introduces a race with
zs_free(), which can pin that handle by this time: migrate_zspage()'s
unpin_tag() then drops zs_free()'s pin and the handle ends up
un-pinned again. Schematically, it goes like this:
CPU0                                        CPU1
migrate_zspage
  find_alloced_obj
    trypin_tag
      set HANDLE_PIN_BIT                    zs_free()
                                              pin_tag()
  obj_malloc() -- new object, no tag
  record_obj() -- remove HANDLE_PIN_BIT       set HANDLE_PIN_BIT
  unpin_tag()  -- remove zs_free's HANDLE_PIN_BIT
The race condition may result in a NULL pointer dereference:
Unable to handle kernel NULL pointer dereference at virtual address 00000000
CPU: 0 PID: 19001 Comm: CookieMonsterCl Tainted:
PC is at get_zspage_mapping+0x0/0x24
LR is at obj_free.isra.22+0x64/0x128
Call trace:
[<ffffffc0001a3aa8>] get_zspage_mapping+0x0/0x24
[<ffffffc0001a4918>] zs_free+0x88/0x114
[<ffffffc00053ae54>] zram_free_page+0x64/0xcc
[<ffffffc00053af4c>] zram_slot_free_notify+0x90/0x108
[<ffffffc000196638>] swap_entry_free+0x278/0x294
[<ffffffc000199008>] free_swap_and_cache+0x38/0x11c
[<ffffffc0001837ac>] unmap_single_vma+0x480/0x5c8
[<ffffffc000184350>] unmap_vmas+0x44/0x60
[<ffffffc00018a53c>] exit_mmap+0x50/0x110
[<ffffffc00009e408>] mmput+0x58/0xe0
[<ffffffc0000a2854>] do_exit+0x320/0x8dc
[<ffffffc0000a3cb4>] do_group_exit+0x44/0xa8
[<ffffffc0000ae1bc>] get_signal+0x538/0x580
[<ffffffc000087e44>] do_signal+0x98/0x4b8
[<ffffffc00008843c>] do_notify_resume+0x14/0x5c
Fix the race by removing the explicit unpin_tag() from migrate_zspage().
---
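
To make the pin-bit part easier to see, below is a tiny single-threaded
userspace model. It is not the zsmalloc code: handle is modeled as one
global word, PIN and the trypin()/unpin()/record_obj() helpers are my
simplified stand-ins for trypin_tag()/unpin_tag()/record_obj(), and the
object values are made up.

#include <stdio.h>

/* bit 0 of the handle word models zsmalloc's HANDLE_PIN_BIT */
#define PIN	1UL

static unsigned long handle;

/* models trypin_tag(): succeeds only if the pin bit is clear */
static int trypin(void)
{
	if (handle & PIN)
		return 0;
	handle |= PIN;
	return 1;
}

/* models unpin_tag(): clears the pin bit unconditionally */
static void unpin(void)
{
	handle &= ~PIN;
}

/* models record_obj(): plain overwrite, the old pin bit is lost */
static void record_obj(unsigned long new_obj)
{
	handle = new_obj;
}

int main(void)
{
	handle = 0x1000UL;			/* old object, unpinned */

	printf("migrate pins: %d\n", trypin());		/* 1: migrate_zspage holds the pin */
	record_obj(0x2000UL);				/* new object, bit 0 clear */
	printf("still pinned: %lu\n", handle & PIN);	/* 0: implicitly un-pinned */
	printf("zs_free pins: %d\n", trypin());		/* 1: zs_free can now grab it */
	unpin();			/* migrate's explicit unpin drops zs_free's pin */
	printf("still pinned: %lu\n", handle & PIN);	/* 0 again */
	return 0;
}

The last two steps are exactly the window in which zs_free() can proceed
on an object that migrate_zspage() is still working on.
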
> For testing, I printed the obj value after pin_tag() in zs_free().
> Sometimes obj was an even number, which means the synchronization was
> broken.
>
> With the patch applied, the crash no longer occurs and obj is always
> an odd number in the same situation.
>
> Signed-off-by: Junil Lee <junil0814.lee@....com>
I believe Vlastimil deserves credit here (at least a Suggested-by):
Suggested-by: Vlastimil Babka <vbabka@...e.cz>
now, can the compiler re-order
record_obj(handle, free_obj);
obj_free(pool, class, used_obj);
?
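
If it can, the store would presumably need to be nailed down. Just to
make the question concrete, here is a userspace sketch, not a patch:
WRITE_ONCE_ULONG() and barrier() are my rough stand-ins for what the
kernel's WRITE_ONCE() and barrier() boil down to for a word-sized store,
and obj_free_stub() is an invented placeholder for
obj_free(pool, class, used_obj).

#include <stdio.h>

/* roughly what the kernel's barrier() expands to: the compiler may not
 * move memory accesses across this point */
#define barrier()	asm volatile("" ::: "memory")

/* roughly what WRITE_ONCE() does for a word-sized value: one volatile
 * store that the compiler may neither tear nor merge */
#define WRITE_ONCE_ULONG(p, v)	(*(volatile unsigned long *)(p) = (v))

/* invented stand-in for obj_free(pool, class, used_obj) */
static void obj_free_stub(unsigned long obj)
{
	printf("freeing used_obj %#lx\n", obj);
}

int main(void)
{
	unsigned long handle = 0;
	unsigned long free_obj = 0x2000UL;
	unsigned long used_obj = 0x1000UL;

	WRITE_ONCE_ULONG(&handle, free_obj);	/* handle must point to free_obj ... */
	barrier();				/* ... before the old object is freed */
	obj_free_stub(used_obj);

	printf("handle %#lx\n", handle);
	return 0;
}
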
-ss
> ---
> mm/zsmalloc.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index e7414ce..0acfa20 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1635,8 +1635,8 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
> free_obj = obj_malloc(d_page, class, handle);
> zs_object_copy(free_obj, used_obj, class);
> index++;
> + /* This also effectively unpins the handle */
> record_obj(handle, free_obj);
> - unpin_tag(handle);
> obj_free(pool, class, used_obj);
> }
>
> --
> 2.6.2
>