Message-ID: <20140115092558.GB2178@swordfish>
Date: Wed, 15 Jan 2014 12:25:58 +0300
From: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
To: Minchan Kim <minchan@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Nitin Gupta <ngupta@...are.org>,
Jerome Marchand <jmarchan@...hat.com>
Subject: Re: [PATCH v2 0/4]zram: locking redesign
On (01/15/14 10:11), Minchan Kim wrote:
> Currently, the zram->lock rw_semaphore is coarse-grained, so it hurts
> scalability.
> This patch series tries to improve that by removing the lock from the read path.
>
> [1] uses atomic operations for the 32-bit stats, removing their dependency
> on zram->lock.
> [2] introduces the table's own lock instead of relying on zram->lock.
> [3] removes the pending-free slot handling, which makes the core much cleaner.
> [4] finally removes zram->lock from the read path and replaces it with a mutex.
>
> The result is great: a mixed read/write workload performs about 11 times
> better than before, and write concurrency is also improved because mutex
> supports SPIN_ON_OWNER while rw_semaphore doesn't yet.
> (I know there was a recent effort from Tim Chen to add that to rw_semaphore,
> but I'm not sure it got merged. Anyway, we don't need it any more, and there
> is no reason to prevent read-write concurrency.)
>
> Thanks.
>
Acked-by: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
> Minchan Kim (4):
> [1] zram: use atomic operation for stat
> [2] zram: introduce zram->tb_lock
> [3] zram: remove workqueue for freeing removed pending slot
> [4] zram: Remove zram->lock in read path and change it with mutex
>
> drivers/staging/zram/zram_drv.c | 117 ++++++++++++++++------------------------
> drivers/staging/zram/zram_drv.h | 27 +++-------
> 2 files changed, 51 insertions(+), 93 deletions(-)
>
> --
> 1.8.5.2
>
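Just to restate how I read the new scheme (a rough sketch only, not the
actual patch code; the structure and function names below are made up for
illustration, and the real tb_lock may well be a different lock type):
the stats become plain atomics so they no longer need zram->lock, slot
lookups are protected by the table's own lock, and only writers serialize
on a mutex, so readers never block each other.

/* illustrative sketch of the locking split described above */
#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/mutex.h>
#include <linux/types.h>

struct demo_table_entry {
	unsigned long handle;
	size_t size;
};

struct demo_dev {
	struct demo_table_entry *table;
	rwlock_t tb_lock;		/* protects table[] only */
	struct mutex io_lock;		/* serializes writers */
	atomic64_t num_reads;		/* lockless stat updates */
	atomic64_t num_writes;
};

/* read path: no device-wide lock, just a short table lookup */
static size_t demo_read_entry(struct demo_dev *dev, u32 index,
			      unsigned long *handle)
{
	size_t size;

	read_lock(&dev->tb_lock);
	*handle = dev->table[index].handle;
	size = dev->table[index].size;
	read_unlock(&dev->tb_lock);

	atomic64_inc(&dev->num_reads);
	return size;
}

/* write path: writers serialize on the mutex (gets SPIN_ON_OWNER)
 * and update the table under tb_lock */
static void demo_write_entry(struct demo_dev *dev, u32 index,
			     unsigned long handle, size_t size)
{
	mutex_lock(&dev->io_lock);

	write_lock(&dev->tb_lock);
	dev->table[index].handle = handle;
	dev->table[index].size = size;
	write_unlock(&dev->tb_lock);

	mutex_unlock(&dev->io_lock);
	atomic64_inc(&dev->num_writes);
}
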
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/