Message-ID: <20160427081059.GA429@swordfish>
Date: Wed, 27 Apr 2016 17:10:59 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: Minchan Kim <minchan@...nel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: Re: zram: per-cpu compression streams
On (04/27/16 16:55), Minchan Kim wrote:
[..]
> > > Could you test a concurrent mem-hogger with fio, rather than pre-faulting
> > > before the fio test, in the next submission?
> >
> > unfortunately, this test will not prove anything. I performed it, and
> > it's impossible to guarantee even remotely stable results: the
> > mem-hogger process can spend anywhere from 41 to 81 seconds on pre-fault,
> > so I'm quite sceptical about the actual value of this test.
> >
> > > > considering buffer_compress_percentage=11, the box was under somewhat
> > > > heavy pressure.
> > > >
> > > > now, the results
> > >
> > > Yep. Even the recompression case is faster than the old code, but I want
> > > to see a heavier memory-pressure case and the ratio I mentioned above.
> >
> > I did quite heavy testing over the last 7 days, with numerous OOM kills
> > and OOM panics.
>
> Okay, I think it's worth merging now so we can see the results.
> Please send a formal patch that has the recompression stat. ;-)
correction: those 41-81s spikes in the mem-hogger were observed under a
different scenario: a 10GB zram device with a 6GB mem-hogger on a 4GB system.
I'll do another round of tests (with a parallel mem-hogger pre-fault
and a 4GB/4GB zram/mem-hogger split) and collect the numbers you
asked for.
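for reference, the "mem-hogger" in these tests is essentially a process
that allocates a large buffer and dirties every page, so that (with zram
configured as swap) the kernel is forced to compress pages under pressure.
a minimal sketch of that idea -- hypothetical, not the actual tool used in
this thread, and sized down to MiBs rather than the GBs used in the tests:

```python
# minimal mem-hogger sketch (hypothetical; the real tests used multi-GB
# allocations against a zram-backed swap device).
import mmap

PAGE = mmap.PAGESIZE  # typically 4096 on Linux

def hog(nbytes: int) -> int:
    """Allocate nbytes and write one byte per page so every page is
    actually committed (and, under pressure, swapped/compressed).
    Returns the number of pages touched."""
    buf = bytearray(nbytes)
    for off in range(0, nbytes, PAGE):
        buf[off] = 0xA5  # dirty the page
    return (nbytes + PAGE - 1) // PAGE

if __name__ == "__main__":
    # 8 MiB for demonstration; scale up (and run several instances in
    # parallel with fio) to reproduce real memory pressure.
    pages = hog(8 << 20)
    print("touched %d pages" % pages)
```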
thanks!
-ss