Message-ID: <20160513072006.GA21484@bbox>
Date: Fri, 13 May 2016 16:20:06 +0900
From: Minchan Kim <minchan@...nel.org>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
CC: Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: [PATCH] zram: introduce per-device debug_stat sysfs node
On Fri, May 13, 2016 at 04:05:53PM +0900, Sergey Senozhatsky wrote:
> On (05/13/16 15:58), Sergey Senozhatsky wrote:
> > On (05/13/16 15:23), Minchan Kim wrote:
> > [..]
> > > @@ -737,12 +737,12 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> > > zcomp_strm_release(zram->comp, zstrm);
> > > zstrm = NULL;
> > >
> > > - atomic64_inc(&zram->stats.num_recompress);
> > > -
> > > handle = zs_malloc(meta->mem_pool, clen,
> > > GFP_NOIO | __GFP_HIGHMEM);
> > > - if (handle)
> > > + if (handle) {
> > > + atomic64_inc(&zram->stats.num_recompress);
> > > goto compress_again;
> > > + }
> >
> > not a real concern, just an observation...
> >
> > the main (and only) purpose of num_recompress is to correlate performance
> > slowdowns with failed fast write paths (when the first zs_malloc() fails).
> > with this change that correlation depends on the second zs_malloc()
> > succeeding: if it also fails, we only increase failed_writes, without
> > bumping the recompress counter, even though we did take the failed fast
> > path and did an extra zs_malloc() [unaccounted in this case]. probably
> > unlikely to happen, but still. well, just saying.
>
> here I assume the biggest contributors to re-compress latency are the
> preemption enabled after zcomp_strm_release() and the second zs_malloc();
> the compression of a PAGE_SIZE buffer itself should be fast enough. so,
> IOW, we would go down the slow path but would not account for it.
The biggest contributors are 1) direct reclaim from the second zs_malloc()
call and 2) the recompression overhead.

If zram starts to support a high-compression-ratio but slow algorithm like
zlib, 2) might become bigger than 1) in the future, so let's not ignore the
overhead of 2).

Even though 2) is smaller today, your patch accounts only for the direct
reclaim, while my suggestion counts both 1) and 2), so isn't it better?
I don't know why it's arguable here. :)
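To make the accounting difference concrete, here is a minimal user-space
sketch of the slow path under discussion. All names here (write_page(),
zs_malloc_fast(), zs_malloc_slow(), struct stats) are hypothetical
simplifications for illustration, not the actual zram code: with the hunk
above, num_recompress is bumped only when the direct-reclaim allocation
succeeds and the page is actually recompressed.

```c
/* Hypothetical user-space model of the zram_bvec_write() slow path.
 * It only demonstrates *where* the counters are incremented. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct stats {
	uint64_t num_recompress;
	uint64_t failed_writes;
};

/* Scripted outcomes standing in for zs_malloc() results. */
static bool fast_alloc_ok;
static bool slow_alloc_ok;

static bool zs_malloc_fast(void) { return fast_alloc_ok; }
/* models the GFP_NOIO | __GFP_HIGHMEM retry that may direct-reclaim */
static bool zs_malloc_slow(void) { return slow_alloc_ok; }

static int write_page(struct stats *s)
{
	if (zs_malloc_fast())
		return 0;		/* fast path, nothing to account */

	/* zcomp_strm_release() happens here; preemption is possible */
	if (zs_malloc_slow()) {
		s->num_recompress++;	/* we will actually recompress */
		return 0;		/* models "goto compress_again" */
	}

	s->failed_writes++;		/* both allocations failed */
	return -1;
}
```

In this placement a double allocation failure increments only failed_writes,
which is exactly the small accounting gap Sergey points out above.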
>
> -ss
>