Date:	Sat, 16 Jan 2016 19:05:57 +0900
From:	Sergey Senozhatsky <sergey.senozhatsky@...il.com>
To:	Vlastimil Babka <vbabka@...e.cz>
Cc:	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	Minchan Kim <minchan@...nel.org>,
	Junil Lee <junil0814.lee@....com>, ngupta@...are.org,
	sergey.senozhatsky.work@...il.com, akpm@...ux-foundation.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] zsmalloc: fix migrate_zspage-zs_free race condition

On (01/16/16 09:16), Vlastimil Babka wrote:
[..]
> BTW, couldn't the correct fix also just look like this?
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 9f15bdd9163c..43f743175ede 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1635,8 +1635,8 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
>                 free_obj = obj_malloc(d_page, class, handle);
>                 zs_object_copy(free_obj, used_obj, class);
>                 index++;
> +               /* This also effectively unpins the handle */
>                 record_obj(handle, free_obj);
> -               unpin_tag(handle);
>                 obj_free(pool, class, used_obj);
>         }

I think this will work.
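
To spell out why the explicit unpin becomes redundant here, a rough sketch
from memory (assuming record_obj() is still the plain full-word store and
HANDLE_PIN_BIT is the low bit of the handle word; not the exact zsmalloc
source):

 /* the handle points to a word: the upper bits encode the object
  * location, the low bit (HANDLE_PIN_BIT) is the pin/lock bit */
 static void record_obj(unsigned long handle, unsigned long obj)
 {
         /* full-word store; the encoded obj has HANDLE_PIN_BIT clear */
         *(unsigned long *)handle = obj;
 }

 static void unpin_tag(unsigned long handle)
 {
         unsigned long *ptr = (unsigned long *)handle;

         clear_bit_unlock(HANDLE_PIN_BIT, ptr);
 }

so record_obj(handle, free_obj) overwrites the whole word, pin bit included,
and the handle is unpinned at the same moment the new location becomes
visible; that is why the separate unpin_tag() call can go away.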


> But I'd still recommend WRITE_ONCE in record_obj(). And I'm not even sure it's
> safe on all architectures to do a simple overwrite of a word against somebody
> else trying to lock a bit there?

hm... for example, the generic bitops from include/asm-generic/bitops/atomic.h
do the whole read-modify-write under _atomic_spin_lock_irqsave():

 #define test_and_set_bit_lock(nr, addr)  test_and_set_bit(nr, addr)

 static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
 {
         unsigned long mask = BIT_MASK(nr);
         unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
         unsigned long old;
         unsigned long flags;

         _atomic_spin_lock_irqsave(p, flags);
         old = *p;
         *p = old | mask;
         _atomic_spin_unlock_irqrestore(p, flags);

         return (old & mask) != 0;
 }

so overwriting that word from the outside world (w/o taking
_atomic_spin_lock_irqsave(p)) can theoretically race with such a locked
bitop on those architectures.
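
And tying this back to the WRITE_ONCE() idea: something like the sketch
below (just an illustration against the current plain-store record_obj(),
not a tested patch) would rule out compiler store tearing, but the store
still happens outside the per-word spinlock that the generic bitops take,
so the architecture question above stays a separate one.

 static void record_obj(unsigned long handle, unsigned long obj)
 {
         /*
          * The handle word doubles as the pin lock (HANDLE_PIN_BIT),
          * so update it with a single, non-torn store.
          */
         WRITE_ONCE(*(unsigned long *)handle, obj);
 }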

	-ss
