Message-ID: <20130910143416.GC2270@swordfish>
Date: Tue, 10 Sep 2013 17:34:16 +0300
From: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
To: Jerome Marchand <jmarchan@...hat.com>
Cc: Dan Carpenter <dan.carpenter@...cle.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
devel@...verdev.osuosl.org, Minchan Kim <minchan@...nel.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] staging: zram: minimize `slot_free_lock' usage (v2)
On (09/09/13 18:10), Jerome Marchand wrote:
> On 09/09/2013 03:46 PM, Jerome Marchand wrote:
> > On 09/09/2013 03:21 PM, Dan Carpenter wrote:
> >> On Mon, Sep 09, 2013 at 03:49:42PM +0300, Sergey Senozhatsky wrote:
> >>>>> Calling handle_pending_slot_free() for every RW operation may
> >>>>> cause unnecessary slot_free_lock locking, because most likely the
> >>>>> process will see a NULL slot_free_rq. Call handle_pending_slot_free()
> >>>>> only when the current process detects that slot_free_rq is not NULL.
> >>>>>
> >>>>> v2: protect handle_pending_slot_free() with zram rw_lock.
> >>>>>
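For reference, the intent of that changelog compressed into one line (a
paraphrase, not the actual diff):

	if (zram->slot_free_rq)			/* unlocked peek */
		handle_pending_slot_free(zram);	/* takes slot_free_lock */

i.e. skip the spinlock entirely when the queue looks empty.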
> >>>>
> >>>> zram->slot_free_lock protects zram->slot_free_rq but shouldn't the zram
> >>>> rw_lock be wrapped around the whole operation like the original code
> >>>> does? I don't know the zram code; the original looks like it makes
> >>>> sense, but in this version the locks look duplicative.
> >>>>
> >>>> Should the down_read() in the original code be changed to down_write()?
> >>>>
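For context, the pre-patch shape of zram_bvec_rw(), quoted from memory (so
the details may be slightly off):

	if (rw == READ) {
		down_read(&zram->lock);
		handle_pending_slot_free(zram);
		ret = zram_bvec_read(zram, bvec, index, offset, bio);
		up_read(&zram->lock);
	} else {
		down_write(&zram->lock);
		handle_pending_slot_free(zram);
		ret = zram_bvec_write(zram, bvec, index, offset);
		up_write(&zram->lock);
	}

i.e. the rw semaphore is held across both the queue drain and the actual
operation.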
> >>>
> >>> I'm not touching locking around existing READ/WRITE commands.
> >>>
> >>
> >> Your patch does change the locking: instead of taking the zram lock
> >> once, it now takes it, drops it, and then retakes it. This
> >> looks potentially racy to me but I don't know the code so I will defer
> >> to any zram maintainer.
> >
> > You're right. Nothing prevents zram_slot_free_notify() from repopulating the
> > free slot queue while we drop the lock.
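Right, the window is between dropping the lock after the drain and retaking
it for the actual operation, roughly:

	/*
	 * zram_bvec_rw()                      zram_slot_free_notify()
	 *
	 * take zram->lock, drain
	 * slot_free_rq, drop the lock
	 *                                     queue a new free request
	 * retake zram->lock and do the
	 * read/write on a slot that now
	 * has a pending, unprocessed free
	 */

which is exactly the repopulation described above.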
> >
> > Actually, the original code is already racy. handle_pending_slot_free()
> > modifies zram->table while holding only a read lock. It needs to hold a
> > write lock to do that. Using down_write for all requests would obviously
> > fix that, but at the cost of read performance.
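The "use down_write for all requests" variant would look roughly like the
following; this is a hypothetical sketch, not something anyone posted, and
it serializes readers against each other, which is the read-performance
cost mentioned above:

	down_write(&zram->lock);
	handle_pending_slot_free(zram);	/* zram->table update now exclusive */
	ret = (rw == READ) ?
		zram_bvec_read(zram, bvec, index, offset, bio) :
		zram_bvec_write(zram, bvec, index, offset);
	up_write(&zram->lock);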
>
> Now I think we can drop the call to handle_pending_slot_free() in
> zram_bvec_rw() altogether. As long as the write lock is held when
> handle_pending_slot_free() is called, there is no race. It's no different
> from any other write request, and the current code already handles R/W
> concurrency.
Yes, I think that can work.
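My reading of that suggestion, as a sketch rather than a tested diff: the
fast path does not touch the pending-free queue at all, and whatever still
calls handle_pending_slot_free() does so with the zram lock held for
writing, e.g.

	down_write(&zram->lock);
	handle_pending_slot_free(zram);
	up_write(&zram->lock);

so the table update is always exclusive and the read path stays on
down_read().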
To summarize, there should be 3 patches:
1) drop the handle_pending_slot_free() call in zram_bvec_rw() (as suggested by Jerome Marchand)
2) fix the handle_pending_slot_free() race with reset (found by Dan Carpenter)
3) drop the init_done member and use an init_done() helper instead
I'll prepare the patches later today.
-ss
> Jerome
>
> >
> >>
> >> 1) You haven't given us any performance numbers so it's not clear if the
> >> locking is even a problem.
> >>
> >> 2) The v2 patch introduces an obvious deadlock in zram_slot_free()
> >> because now we take the rw_lock twice. Fix your testing to catch
> >> this kind of bug next time.
> >>
> >> 3) Explain why it is safe to test zram->slot_free_rq when we are not
> >> holding the lock. I think it is unsafe. I don't want to even think
> >> about it without the numbers.
> >>
> >> regards,
> >> dan carpenter
> >>
> >
>