Message-ID: <20150921041837.GF27729@bbox>
Date: Mon, 21 Sep 2015 13:18:37 +0900
From: Minchan Kim <minchan@...nel.org>
To: Vitaly Wool <vitalywool@...il.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Dan Streetman <ddstreet@...e.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org
Subject: Re: [PATCH 0/2] prepare zbud to be used by zram as underlying
allocator

Hello Vitaly,

On Thu, Sep 17, 2015 at 12:26:12PM +0200, Vitaly Wool wrote:
> On Thu, Sep 17, 2015 at 1:30 AM, Sergey Senozhatsky
> <sergey.senozhatsky.work@...il.com> wrote:
>
> >
> > just a side note,
> > I'm afraid this is not how it works. Numbers go first, to justify
> > the patch set.

I totally agree with Sergey's opinion.

> >
>
> These patches are extension/alignment patches; why would anyone need
> to justify that?

Sorry, but you put "zram" in the title.

As I said earlier, we need several numbers to investigate.
First of all, what is the culprit of your latency? It seems you are
thinking about compaction, so what about compaction, exactly? Frequent
scanning? Lock contention? Frequent sleeping somewhere in the compaction
code? And then, why would zbud solve it? If we use zbud for zram, we
lose memory efficiency, so there must be something that justifies the
trade-off.
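
To make the memory-efficiency point concrete, here is a back-of-the-envelope
sketch (my own illustrative numbers, not measurements from any device):
zbud never puts more than two compressed objects into a page, while
zsmalloc packs objects into size classes, so for a typical ~40%
compression ratio the difference is roughly:

#include <stdio.h>

int main(void)
{
	const unsigned long page_size = 4096;
	const unsigned long nr_pages = 1000;        /* anon pages swapped out */
	const unsigned long avg_compressed = 1600;  /* ~40% of PAGE_SIZE */

	/* zbud: at most two objects per page, however small they are */
	unsigned long zbud_pages = (nr_pages + 1) / 2;

	/*
	 * zsmalloc: roughly total compressed bytes / page size,
	 * ignoring size-class rounding and metadata.
	 */
	unsigned long zsmalloc_pages =
		(nr_pages * avg_compressed + page_size - 1) / page_size;

	printf("zbud:     ~%lu pages\n", zbud_pages);      /* ~500 */
	printf("zsmalloc: ~%lu pages\n", zsmalloc_pages);  /* ~391 */
	return 0;
}

The real numbers depend on size-class rounding and metadata, but that is
the gap that needs justifying.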

The reason I am asking is that I have investigated similar problems on
Android and other platforms, and the cause of the latency was not zsmalloc
but aggressive high-order allocations from subsystems, watermark-check
races, deferred compaction, LMK not working, and too much swapout, which
forced reclaim of lots of page cache pages; that was the main culprit in
my cases. When I checked with perf, the compaction stall count had
increased, but the time spent there was not huge, so it was not the main
factor in the latency.

Your problem might be different from mine, so to convince us, please give
us real data and your investigation story.
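
Even a simple before/after snapshot of the relevant /proc/vmstat counters
around your test would help; a minimal sketch of that (illustrative only;
the counter names match the ones you quote below):

#include <stdio.h>
#include <string.h>

/* Counters of interest; the same ones listed in the numbers below. */
static const char *counters[] = {
	"kswapd_low_wmark_hit_quickly",
	"kswapd_high_wmark_hit_quickly",
	"allocstall",
	"pgmigrate_success",
	"compact_stall",
	"compact_fail",
	"compact_success",
};

int main(void)
{
	char name[64];
	unsigned long long val;
	size_t i;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}

	/* Print the selected counters; run before and after the test and diff. */
	while (fscanf(f, "%63s %llu", name, &val) == 2)
		for (i = 0; i < sizeof(counters) / sizeof(*counters); i++)
			if (!strcmp(name, counters[i]))
				printf("%s %llu\n", name, val);

	fclose(f);
	return 0;
}
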
Thanks.
>
> But just to help you understand where I am coming from, here are some numbers:
>                                zsmalloc    zbud
> kswapd_low_wmark_hit_quickly       4513    5696
> kswapd_high_wmark_hit_quickly       861     902
> allocstall                         2236    1122
> pgmigrate_success                 78229   31244
> compact_stall                      1172     634
> compact_fail                        194      95
> compact_success                     464     210
>
> These are results from an Android device that ran three 'monkey' tests
> of 20 minutes each, with a user switch to guest and back in between.