Date:	Mon, 10 Aug 2015 09:32:30 +0900
From:	Joonsoo Kim <iamjoonsoo.kim@....com>
To:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Minchan Kim <minchan@...nel.org>,
	Nitin Gupta <ngupta@...are.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] zram: fix possible race when checking idle_strm

On Fri, Aug 07, 2015 at 06:58:16PM +0900, Sergey Senozhatsky wrote:
> On (08/07/15 18:14), Sergey Senozhatsky wrote:
> > hm... I need to think about it more.
> > 
> > we do wake_up every time we put a stream back to the list
> > 
> > zcomp_strm_multi_release():
> > 
> >         spin_lock(&zs->strm_lock);
> >         if (zs->avail_strm <= zs->max_strm) {
> >                 list_add(&zstrm->list, &zs->idle_strm);
> >                 spin_unlock(&zs->strm_lock);
> >                 wake_up(&zs->strm_wait);
> >                 return;
> >         }
> > 
> > 
> > but I can probably see what you mean... in some very extreme case,
> > though. I can't even formulate it... eh... we use a multi-stream
> > backend with ->max_strm == 1 and there are two processes: one has
> > just seen the wait_event() `if (condition)' check fail, the other
> > one has just put a stream back on ->idle_strm and called wake_up(),
> > but the first process hasn't yet executed prepare_to_wait_event(),
> > so it might miss a wakeup. and there should be no other process
> > doing a read or write operation; otherwise, there will be a wakeup
> > eventually.
> > 
> > is this the case you were thinking of?... then yes, this spinlock
> > may help.
> > 
> 
> on the other hand... it's actually
> 
> 	wait_event() is
> 
> 	if (condition)
> 		break;
> 	prepare_to_wait_event(&wq, &__wait, state);
> 	if (condition)
> 		break;
> 	schedule();
> 
> if the first condition check was false and we missed a wakeup call between
> the first check and prepare_to_wait_event(), then the second condition
> check should do the trick, I think (or do you expect that the second
> condition check may be wrongly pre-fetched or something?).

Hello, Sergey.

This is what I was thinking.
I expected that the second condition check can still be false if the
compiler reuses the result of the first check as an optimization. I
guess there is nothing that prevents this kind of optimization.
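
To make that concern concrete, below is a rough, open-coded sketch of
the waiter side (illustrative only, not the actual zram code;
wait_for_idle_strm() is just a made-up name). It is roughly what the
wait_event() macro expands to for our condition:

	/* illustrative sketch only -- not the actual zram code */
	static void wait_for_idle_strm(struct zcomp_strm_multi *zs)
	{
		DEFINE_WAIT(wait);

		for (;;) {
			/* first check: plain load, no lock, no barrier */
			if (!list_empty(&zs->idle_strm))
				break;

			prepare_to_wait_event(&zs->strm_wait, &wait,
					      TASK_UNINTERRUPTIBLE);

			/*
			 * second check: could the compiler/CPU reuse the
			 * result of the first load here?
			 */
			if (!list_empty(&zs->idle_strm))
				break;

			schedule();
		}
		finish_wait(&zs->strm_wait, &wait);
	}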

So, the following is the problem sequence I had in mind.
T1 means thread 1, and T2 means another thread, thread 2.

<T1-1> check whether idle_strm is empty while holding the lock
<T1-2> it is empty, so drop the lock with spin_unlock() and run the
wait_event() macro
<T1-3> check whether idle_strm is empty
<T1-4> it is still empty

<T2-1> release a stream
<T2-2> call wake_up()

<T1-5> add T1 to the wait queue
<T1-6> check whether idle_strm is empty
<T1-7> the compiler reuses <T1-4>'s result, or the CPU just fetches the
cached result, so T1 starts waiting

In this case, T1 can sleep permanently. To prevent this compiler
optimization, or fetching a cached value, we need a lock here.
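
Something like the following is what I mean (illustrative only;
zcomp_idle_strm_avail() is a made-up helper, not the actual patch):
re-checking idle_strm with strm_lock held means the condition is always
a fresh load, since spin_lock()/spin_unlock() act as compiler barriers,
and it pairs with the locked list_add() in zcomp_strm_multi_release().

	/* made-up helper, for illustration only */
	static bool zcomp_idle_strm_avail(struct zcomp_strm_multi *zs)
	{
		bool avail;

		spin_lock(&zs->strm_lock);
		avail = !list_empty(&zs->idle_strm);
		spin_unlock(&zs->strm_lock);

		return avail;
	}

	/* waiter side */
	wait_event(zs->strm_wait, zcomp_idle_strm_avail(zs));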

Thanks.