Message-ID: <20170426060425.GC29773@js1304-desktop>
Date: Wed, 26 Apr 2017 15:04:26 +0900
From: Joonsoo Kim <js1304@...il.com>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Minchan Kim <minchan@...nel.org>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
linux-kernel@...r.kernel.org, kernel-team@....com
Subject: Re: [PATCH v4 2/4] zram: implement deduplication in zram
On Wed, Apr 26, 2017 at 01:02:43PM +0900, Sergey Senozhatsky wrote:
> On (04/26/17 09:52), js1304@...il.com wrote:
> [..]
> > <no-dedup>
> > Elapsed time: out/host: 88 s
> > mm_stat: 8834420736 3658184579 3834208256 0 3834208256 32889 0 0 0
> >
> > <dedup>
> > Elapsed time: out/host: 100 s
> > mm_stat: 8832929792 3657329322 2832015360 0 2832015360 32609 0 952568877 80880336
> >
> > It shows roughly 13% performance degradation and saves 24% of memory. Maybe
> > it is due to the overhead of checksum calculation and comparison.
>
> I like the patch set, and it makes sense. The benefit is, obviously,
> case-by-case. On my system I've managed to save just 60MB on a 2.7G
> data set, which is far less than I was hoping to save :)
>
>
> I usually run a DIRECT IO fio performance test. JFYI, the results
> were as follows:
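For what it's worth, a quick back-of-the-envelope check of the mm_stat numbers
quoted above. This is only a sketch: I am assuming the third mm_stat column is
mem_used_total (as described in Documentation/blockdev/zram.txt), so the exact
memory figure depends on which column is compared.

/*
 * Quick check of the quoted numbers (a sketch; assumes the third
 * mm_stat column is mem_used_total).
 */
#include <stdio.h>

int main(void)
{
	double t_nodedup = 88.0, t_dedup = 100.0;	/* elapsed seconds */
	double mem_nodedup = 3834208256.0;		/* mem_used_total, no-dedup */
	double mem_dedup   = 2832015360.0;		/* mem_used_total, dedup */

	printf("slowdown: %.1f%%\n",
	       (t_dedup - t_nodedup) / t_nodedup * 100.0);	/* ~13.6% */
	printf("memory saved: %.1f%%\n",
	       (mem_nodedup - mem_dedup) / mem_nodedup * 100.0);	/* ~26.1% */
	return 0;
}

The elapsed-time delta matches the quoted ~13%; the memory saving lands in the
same ballpark as the quoted 24%, give or take which column is used.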
Could you share your fio test settings? I will try to reproduce the
result and analyze it.
I guess that contention happens due to identical data pages. Could you check
it?
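To illustrate what I mean: if every page with the same content produces the
same checksum, every writer of that page lands in the same hash bucket and
serializes on that bucket's lock. Below is a minimal userspace sketch of the
effect, not the zram dedup code itself; the checksum, bucket count and
workload here are made up purely for illustration.

/*
 * Userspace illustration of the dedup-contention hypothesis: identical
 * page content -> identical checksum -> the same hash bucket, so every
 * writer serializes on that one bucket lock. This is NOT the zram dedup
 * code; checksum, bucket count and workload are made up.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE		4096
#define NR_BUCKETS		1024
#define NR_THREADS		8
#define WRITES_PER_THREAD	100000

static pthread_mutex_t bucket_lock[NR_BUCKETS];
static unsigned long bucket_hits[NR_BUCKETS];

/* toy checksum standing in for the real one */
static uint32_t checksum(const unsigned char *page)
{
	uint32_t sum = 0;

	for (size_t i = 0; i < PAGE_SIZE; i++)
		sum = sum * 31 + page[i];
	return sum;
}

static void *writer(void *arg)
{
	unsigned char page[PAGE_SIZE];

	(void)arg;
	/* identical content in every thread -> identical checksum */
	memset(page, 0xaa, sizeof(page));

	for (int i = 0; i < WRITES_PER_THREAD; i++) {
		uint32_t bucket = checksum(page) % NR_BUCKETS;

		/* all threads contend here when the data is the same */
		pthread_mutex_lock(&bucket_lock[bucket]);
		bucket_hits[bucket]++;	/* stand-in for dedup lookup/insert */
		pthread_mutex_unlock(&bucket_lock[bucket]);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];

	for (int i = 0; i < NR_BUCKETS; i++)
		pthread_mutex_init(&bucket_lock[i], NULL);

	for (int i = 0; i < NR_THREADS; i++)
		pthread_create(&tid[i], NULL, writer, NULL);
	for (int i = 0; i < NR_THREADS; i++)
		pthread_join(tid[i], NULL);

	for (int i = 0; i < NR_BUCKETS; i++)
		if (bucket_hits[i])
			printf("bucket %d: %lu hits\n", i, bucket_hits[i]);
	return 0;
}

If the fio job writes the same pattern for every block (e.g. a zero-filled or
constant buffer), all lookups would hit a single bucket, which could explain
the slowdown; with random, incompressible data they should spread across
buckets.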
Thanks.