Message-ID: <20160118073939.GA30668@swordfish>
Date:	Mon, 18 Jan 2016 16:39:39 +0900
From:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To:	Minchan Kim <minchan@...nel.org>
Cc:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
	Junil Lee <junil0814.lee@....com>, ngupta@...are.org,
	akpm@...ux-foundation.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, vbabka@...e.cz
Subject: Re: [PATCH v3] zsmalloc: fix migrate_zspage-zs_free race condition

On (01/18/16 16:11), Minchan Kim wrote:
[..]
> > so, even if clear_bit_unlock/test_and_set_bit_lock do smp_mb or
> > barrier(), there is no corresponding barrier from record_obj()->WRITE_ONCE().
> > so I don't think WRITE_ONCE() will help the compiler, or am I missing
> > something?
> 
> We need two things

thanks.

> 1. compiler barrier

um... probably gcc can reorder that sequence to something like this

	*handle = obj_malloc();                   /* unpin the object */
	zs_object_copy(*handle, used_obj, class); /* now use it */

ok.
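
(for reference, the source order in migrate_zspage() is roughly the
following -- paraphrased, not a verbatim quote of mm/zsmalloc.c; without
a compiler barrier between the copy and record_obj()'s plain store
through the handle, gcc is free to produce the hoisted version above)

	/* paraphrased source-order sketch, not verbatim mm/zsmalloc.c */
	free_obj = obj_malloc(d_page, class, handle);
	zs_object_copy(free_obj, used_obj, class);	/* copy the payload */
	record_obj(handle, free_obj);	/* *handle = free_obj; pin bit clear -> handle unpinned */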


> 2. memory barrier.
> 
> As a compiler barrier, WRITE_ONCE works to prevent store tearing here
> by the compiler.
> However, if we omit unpin_tag here, we lose the memory barrier (e.g. smp_mb),
> so another CPU could see stale data caused by CPU memory reordering.

oh... good find! lost release semantic of unpin_tag()...
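
so, a sketch of the fix being discussed, as I understand it (paraphrased,
not a verbatim quote of the patch): keep HANDLE_PIN_BIT set in the value
that record_obj() stores, so the handle only becomes unpinned via
unpin_tag(), whose clear_bit_unlock() is a release and orders the object
copy before the unpin is visible to a concurrent zs_free():

	zs_object_copy(free_obj, used_obj, class);
	free_obj |= BIT(HANDLE_PIN_BIT);	/* recorded value stays pinned */
	record_obj(handle, free_obj);		/* WRITE_ONCE(*(unsigned long *)handle, free_obj) */
	unpin_tag(handle);			/* clear_bit_unlock(): release */
	obj_free(pool, class, used_obj);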

	-ss
