Message-ID: <20160513070553.GC615@swordfish>
Date:	Fri, 13 May 2016 16:05:53 +0900
From:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc:	Minchan Kim <minchan@...nel.org>,
	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] zram: introduce per-device debug_stat sysfs node

On (05/13/16 15:58), Sergey Senozhatsky wrote:
> On (05/13/16 15:23), Minchan Kim wrote:
> [..]
> > @@ -737,12 +737,12 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> >  		zcomp_strm_release(zram->comp, zstrm);
> >  		zstrm = NULL;
> >  
> > -		atomic64_inc(&zram->stats.num_recompress);
> > -
> >  		handle = zs_malloc(meta->mem_pool, clen,
> >  				GFP_NOIO | __GFP_HIGHMEM);
> > -		if (handle)
> > +		if (handle) {
> > +			atomic64_inc(&zram->stats.num_recompress);
> >  			goto compress_again;
> > +		}
> 
> not that it's a real concern...
> 
> the main (and only) purpose of num_recompress is to correlate performance
> slowdowns with failed fast write paths (i.e. when the first zs_malloc()
> fails). that correlation depends on the second zs_malloc() succeeding:
> if it also fails, we only increase failed_writes, without increasing the
> re-compress counter, even though we actually did take the failed fast
> write path and did the extra zs_malloc() [unaccounted in this case].
> it's probably a bit unlikely to happen, but still. well, just saying.

here I assume that the biggest contributors to re-compress latency are
preemption being enabled after zcomp_strm_release() and this second
zs_malloc(); the compression of a PAGE_SIZE buffer itself should be fast
enough. so, IOW, we would go down the slow path but would not account
for it.
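
just to make the accounting difference concrete, here's a tiny user-space
toy (hypothetical stand-in counters and a boolean "allocation" flag, not
the zram code) comparing the two placements of the increment:

/*
 * toy model of the question above: with the patched placement,
 * num_recompress is bumped only when the second allocation succeeds,
 * so a slow path whose retry also fails shows up in failed_writes
 * but not in num_recompress.
 */
#include <stdbool.h>
#include <stdio.h>

static long num_recompress;   /* stands in for zram->stats.num_recompress */
static long failed_writes;    /* stands in for zram->stats.failed_writes  */

/* one write where the fast-path allocation failed and we retry */
static void slow_path_write(bool second_alloc_succeeds, bool patched)
{
	if (!patched)
		num_recompress++;         /* pre-patch: count the attempt */

	if (second_alloc_succeeds) {
		if (patched)
			num_recompress++; /* patched: count only on success */
		return;                   /* re-compress and store */
	}
	failed_writes++;                  /* the retry failed as well */
}

int main(void)
{
	/* second allocation fails too: patched placement misses the event */
	slow_path_write(false, true);
	printf("patched:   num_recompress=%ld failed_writes=%ld\n",
	       num_recompress, failed_writes);

	num_recompress = failed_writes = 0;
	slow_path_write(false, false);
	printf("pre-patch: num_recompress=%ld failed_writes=%ld\n",
	       num_recompress, failed_writes);
	return 0;
}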

	-ss
