Date:	Mon, 09 Sep 2013 15:46:13 +0200
From:	Jerome Marchand <jmarchan@...hat.com>
To:	Dan Carpenter <dan.carpenter@...cle.com>
CC:	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	devel@...verdev.osuosl.org, Minchan Kim <minchan@...nel.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] staging: zram: minimize `slot_free_lock' usage (v2)

On 09/09/2013 03:21 PM, Dan Carpenter wrote:
> On Mon, Sep 09, 2013 at 03:49:42PM +0300, Sergey Senozhatsky wrote:
>>>> Calling handle_pending_slot_free() for every RW operation may
>>>> cause unnecessary slot_free_lock locking, because the process
>>>> will most likely see a NULL slot_free_rq. Call
>>>> handle_pending_slot_free() only when the current task detects
>>>> that slot_free_rq is not NULL.
>>>>
>>>> v2: protect handle_pending_slot_free() with zram rw_lock.
>>>>
>>>
>>> zram->slot_free_lock protects zram->slot_free_rq, but shouldn't the
>>> zram rw_lock be wrapped around the whole operation, as the original
>>> code does?  I don't know the zram code; the original looks like it
>>> makes sense, but in this version the locks look duplicative.
>>>
>>> Should the down_read() in the original code be changed to
>>> down_write()?
>>>
>>
>> I'm not touching the locking around the existing READ/WRITE commands.
>>
> 
> Your patch does change the locking: instead of taking the zram lock
> once, it now takes it, drops it, and then retakes it.  This looks
> potentially racy to me, but I don't know the code, so I will defer
> to any zram maintainer.

You're right. Nothing prevents zram_slot_free_notify() from repopulating
the free slot queue while the lock is dropped.
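
Schematically, the ordering in v2 leaves a window like this (a sketch
using the driver's names, not the literal patch code):

	if (zram->slot_free_rq) {		/* unlocked peek */
		down_write(&zram->lock);
		handle_pending_slot_free(zram);	/* drain slot_free_rq */
		up_write(&zram->lock);
	}
	/*
	 * Window: zram_slot_free_notify() may run here and queue new
	 * entries on slot_free_rq before the read lock below is taken.
	 */
	down_read(&zram->lock);
	ret = zram_bvec_read(zram, bvec, index, offset, bio);
	up_read(&zram->lock);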

Actually, the original code is already racy: handle_pending_slot_free()
modifies zram->table while holding only a read lock, and it needs to
hold the write lock to do that. Using down_write() for all requests
would obviously fix that, but at the cost of read performance.
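
To make that concrete, here is roughly what the drain looks like, with
the write-lock requirement spelled out (a sketch; the struct and helper
names follow the staging driver as of this series):

	static void handle_pending_slot_free(struct zram *zram)
	{
		struct zram_slot_free *free_rq;

		/*
		 * The caller must hold zram->lock for writing:
		 * zram_free_page() rewrites zram->table entries, and a
		 * concurrent reader may be decompressing the very page
		 * being freed.
		 */
		spin_lock(&zram->slot_free_lock);
		while (zram->slot_free_rq) {
			free_rq = zram->slot_free_rq;
			zram->slot_free_rq = free_rq->next;
			zram_free_page(zram, free_rq->index);
			kfree(free_rq);
		}
		spin_unlock(&zram->slot_free_lock);
	}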

> 
> 1) You haven't given us any performance numbers, so it's not clear
>    whether the locking is even a problem.
> 
> 2) The v2 patch introduces an obvious deadlock in zram_slot_free()
>    because now we take the rw_lock twice.  Fix your testing to catch
>    this kind of bug next time.
> 
> 3) Explain why it is safe to test zram->slot_free_rq when we are not
>    holding the lock.  I think it is unsafe.  I don't even want to
>    think about it without the numbers.
> 
> regards,
> dan carpenter
> 
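
For the record, the deadlock in (2) has the following shape, if I read
v2 correctly (schematic, not the literal patch):

	static void zram_slot_free(struct work_struct *work)
	{
		struct zram *zram = container_of(work, struct zram,
						 free_work);

		down_write(&zram->lock);
		/*
		 * v2 makes handle_pending_slot_free() take zram->lock
		 * again internally; kernel rw_semaphores are not
		 * recursive, so the task blocks on itself.
		 */
		handle_pending_slot_free(zram);
		up_write(&zram->lock);
	}

And on (3): an unlocked test of zram->slot_free_rq would at the very
least need an ACCESS_ONCE() annotation and an argument about what a
stale NULL means there; the cheap check by itself does not make the
drain race-free.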
