Message-ID: <20170426060816.GD29773@js1304-desktop>
Date: Wed, 26 Apr 2017 15:08:18 +0900
From: Joonsoo Kim <js1304@...il.com>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Minchan Kim <minchan@...nel.org>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
linux-kernel@...r.kernel.org, kernel-team@....com
Subject: Re: [PATCH v4 2/4] zram: implement deduplication in zram
On Wed, Apr 26, 2017 at 01:28:26PM +0900, Sergey Senozhatsky wrote:
> On (04/26/17 09:52), js1304@...il.com wrote:
> [..]
> > +struct zram_hash {
> > +	spinlock_t lock;
> > +	struct rb_root rb_root;
> > };
>
> just a note.
>
> we can easily have N CPUs spinning on ->lock for __zram_dedup_get() lookup,
> which can involve a potentially slow zcomp_decompress() [zlib, for example,
> with 64k pages] and memcmp(). the larger PAGE_SHIFT is, the more serialized
> IOs become. in theory, at least.
>
> CPU0                          CPU1                    ...     CPUN
>
> __zram_bvec_write()           __zram_bvec_write()             __zram_bvec_write()
>  zram_dedup_find()             zram_dedup_find()               zram_dedup_find()
>   spin_lock(&hash->lock);
>                                 spin_lock(&hash->lock);         spin_lock(&hash->lock);
>   __zram_dedup_get()
>    zcomp_decompress()
>    ...
>
>
> so maybe there is a way to use a read-write lock instead of a spinlock for the
> hash and reduce write/read IO serialization.
In fact, dedup releases hash->lock before doing zcomp_decompress(), so the
above contention cannot happen.
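For reference, the lookup path looks roughly like below. This is a simplified
sketch, not the exact patch code; zram_dedup_match() and zram_entry_put() here
just stand in for the real helpers. The point is that hash->lock only covers
the rb_tree walk and the refcount bump, while the decompress + memcmp() slow
path runs unlocked:

static struct zram_entry *__zram_dedup_get(struct zram *zram,
					   struct zram_hash *hash,
					   unsigned char *mem, u32 checksum)
{
	struct rb_node *node;
	struct zram_entry *entry;

	spin_lock(&hash->lock);
	node = hash->rb_root.rb_node;
	while (node) {
		entry = rb_entry(node, struct zram_entry, rb_node);
		if (checksum == entry->checksum) {
			/* pin the entry so it cannot be freed under us */
			entry->refcount++;
			spin_unlock(&hash->lock);

			/* slow path: decompress + memcmp() without the lock */
			if (zram_dedup_match(zram, entry, mem))
				return entry;

			zram_entry_put(zram, entry);
			return NULL;
		}
		node = checksum < entry->checksum ?
				node->rb_left : node->rb_right;
	}
	spin_unlock(&hash->lock);

	return NULL;
}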
However, contention is still possible while traversing the rb_tree. If
your fio run shows that contention, I will change it to a read-write lock.
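That change would roughly look like below (illustrative sketch only): lookups
take the read lock and can walk the tree in parallel, while insert/erase keep
exclusive access.

struct zram_hash {
	rwlock_t lock;
	struct rb_root rb_root;
};

	/* lookup side: readers can traverse the tree concurrently */
	read_lock(&hash->lock);
	/* ... rb_tree walk as in __zram_dedup_get() ... */
	read_unlock(&hash->lock);

	/* insert/erase side: still exclusive */
	write_lock(&hash->lock);
	/* ... rb_link_node()/rb_insert_color() or rb_erase() ... */
	write_unlock(&hash->lock);

One catch is that the refcount bump in the lookup path is a write, so it would
probably need to become atomic (e.g. refcount_t) once lookups hold only the
read lock.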
Thanks.