Message-ID: <20170426042826.GD673@jagdpanzerIV.localdomain>
Date: Wed, 26 Apr 2017 13:28:26 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: js1304@...il.com
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Minchan Kim <minchan@...nel.org>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
linux-kernel@...r.kernel.org, kernel-team@....com,
Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: Re: [PATCH v4 2/4] zram: implement deduplication in zram
On (04/26/17 09:52), js1304@...il.com wrote:
[..]
> +struct zram_hash {
> + spinlock_t lock;
> + struct rb_root rb_root;
> };
just a note.
we can easily have N CPUs spinning on ->lock for the __zram_dedup_get()
lookup, which can involve a potentially slow zcomp_decompress() [zlib, for
example, with 64k pages] and memcmp(). the larger PAGE_SHIFT is, the more
serialized the IOs become. in theory, at least.
CPU0                      CPU1                   ...  CPUN

__zram_bvec_write()       __zram_bvec_write()        __zram_bvec_write()
 zram_dedup_find()         zram_dedup_find()          zram_dedup_find()
  spin_lock(&hash->lock);
                            spin_lock(&hash->lock);    spin_lock(&hash->lock);
  __zram_dedup_get()
   zcomp_decompress()
   ...
so maybe there is a way to use a read-write lock instead of a spinlock for
the hash, and reduce write/read IO serialization.
-ss