Message-ID: <52D6530B.30109@redhat.com>
Date: Wed, 15 Jan 2014 10:21:15 +0100
From: Jerome Marchand <jmarchan@...hat.com>
To: Minchan Kim <minchan@...nel.org>
CC: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Nitin Gupta <ngupta@...are.org>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: Re: [PATCH v2 0/4] zram: locking redesign
On 01/15/2014 02:11 AM, Minchan Kim wrote:
> Currently, the zram->lock rw_semaphore is coarse-grained, which hurts
> scalability.
> This patchset enhances it by removing the lock from the read path.
>
> [1] uses atomic operations, which removes the 32-bit stats'
> dependency on zram->lock.
> [2] introduces the table's own lock instead of relying on zram->lock.
> [3] removes the pending-free slot mess, which makes the core very clean.
> [4] finally removes zram->lock from the read path and replaces it with
> a mutex.
>
> So the results are wonderful: a mixed read/write workload performs
> 11 times better than before, and write concurrency is also improved
> because mutex supports SPIN_ON_OWNER while rw_semaphore doesn't yet.
> (I know there was a recent effort by Tim Chen to add that to
> rw_semaphore, but I'm not sure it got merged. Either way, we don't
> need it any more, and there is no reason to prevent read-write
> concurrency.)
>
> Thanks.
>
> Minchan Kim (4):
> [1] zram: use atomic operation for stat
> [2] zram: introduce zram->tb_lock
> [3] zram: remove workqueue for freeing removed pending slot
> [4] zram: Remove zram->lock in read path and change it with mutex
>
> drivers/staging/zram/zram_drv.c | 117 ++++++++++++++++------------------------
> drivers/staging/zram/zram_drv.h | 27 +++-------
> 2 files changed, 51 insertions(+), 93 deletions(-)
>
The new locking scheme seems sound to me.
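
For the record, here is my reading of the new scheme as a minimal C
sketch. The struct, field and function names below are illustrative
only, not necessarily what the patches actually use:

/* Sketch only: names are illustrative, not the actual patch. */

struct zram_table_entry {
	unsigned long handle;	/* zsmalloc handle */
	u16 size;		/* compressed object size */
};

struct zram_meta {
	rwlock_t tb_lock;	/* [2]: protects table[] */
	struct zram_table_entry *table;
};

struct zram {
	struct mutex write_lock;	/* [4]: writers only; gets
					 * SPIN_ON_OWNER spinning */
	atomic64_t failed_reads;	/* [1]: lockless stat updates */
	struct zram_meta *meta;
};

/* Read path: no zram-wide lock, only the short table rwlock. */
static int zram_read_sketch(struct zram *zram, u32 index)
{
	struct zram_meta *meta = zram->meta;
	unsigned long handle;

	read_lock(&meta->tb_lock);
	handle = meta->table[index].handle;
	read_unlock(&meta->tb_lock);

	if (!handle) {
		atomic64_inc(&zram->failed_reads);
		return -ENOENT;
	}
	/* ... decompress from handle without any zram-wide lock ... */
	return 0;
}

/* Write path: serialized by the mutex, table updated under tb_lock. */
static void zram_write_sketch(struct zram *zram, u32 index,
			      unsigned long new_handle, u16 new_size)
{
	struct zram_meta *meta = zram->meta;

	mutex_lock(&zram->write_lock);
	/* ... compress the new data into new_handle ... */
	write_lock(&meta->tb_lock);
	meta->table[index].handle = new_handle;
	meta->table[index].size = new_size;
	write_unlock(&meta->tb_lock);
	mutex_unlock(&zram->write_lock);
}

If that reading is right, readers never take a zram-wide lock at all,
and writers get the adaptive spinning that the old rw_semaphore lacked.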
Acked-by: Jerome Marchand <jmarchan@...hat.com>