Date:	Mon, 2 May 2016 16:25:08 +0900
From:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To:	Minchan Kim <minchan@...nel.org>
Cc:	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org,
	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Subject:	Re: [PATCH 2/2] zram: use per-cpu compression streams

Hello Minchan,

On (05/02/16 15:23), Minchan Kim wrote:
[..]
> Thanks for your great test.
> Today, I tried creating heavy memory pressure to force double compression,
> but it was not easy, so I guess it's really hard to hit in practice.
> I hope it doesn't cause any regression. ;-)

:) it is quite 'hard', indeed. per my 'testing experience', huge
're-compression' numbers usually come together with OOM kills and OOM panics.

I'm thinking about turning that test into a 'general zram' test and posting
it somewhere on github/etc., so it'll be easier to measure and
compare things.

> >  bool zcomp_set_max_streams(struct zcomp *comp, int num_strm)
> >  {
> > -	return comp->set_max_streams(comp, num_strm);
> > +	return true;
> >  }
> 
> I like this part. We don't need to remove the set_max_streams interface
> right now, because per-cpu streams might cause a regression if the double
> compression ratio is high. If so, we should roll back, so let's wait for
> two years and, if nothing goes wrong, we can deprecate the multi-stream
> interfaces. I hope you are on the same page.

sure. I did this intentionally, because I really don't want to create a
mess of attrs that are about to retire. we have N attrs that will disappear
or see a downcast (and we warn users in the kernel log) next summer (4.10 or
4.11, IIRC). if we add max_comp_streams to that list now, then we either
abandon our regular "2 years of deprecation warnings" rule and drop
max_comp_streams after only 1 year (because it has 1 year of overlap with the
attrs already scheduled for removal), or complicate things for zram users:
attrs will be dropped in different kernel releases and users will have to keep
that in mind. I think it's fine to keep it as a NOOP attr for now and
deprecate it during the next 'attr cleanup' 2-year cycle. yes, we are on the
same page ;)
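the NOOP-attr idea above boils down to something like the following (a
minimal userspace sketch with a hypothetical one-time warning; the function
name and warning text here are illustrative, not the real sysfs code):

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical userspace model of a deprecated NOOP tunable: the setter
 * accepts any value, warns once, and reports success, so existing scripts
 * that still write max_comp_streams keep working during the 2-year cycle. */
static bool warned;

bool set_max_streams_noop(int num_strm)
{
	(void)num_strm;  /* value is ignored: compression streams are per-cpu now */
	if (!warned) {
		fprintf(stderr, "max_comp_streams is deprecated and has no effect\n");
		warned = true;
	}
	return true;  /* always succeed, like the patched zcomp_set_max_streams() */
}
```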

> > @@ -356,10 +226,7 @@ struct zcomp *zcomp_create(const char *compress, int max_strm)
> 
> Trivial:
> We could remove max_strm now and update the description.

oh, yes.

> > +		zcomp_strm_release(zram->comp, zstrm);
> > +		zstrm = NULL;
> > +
> > +		handle = zs_malloc(meta->mem_pool, clen,
> > +				GFP_NOIO | __GFP_HIGHMEM);
> > +		if (handle)
> > +			goto compress_again;
> 
> We could avoid page_zero_filled() in the second trial, but I'm not sure it
> is worth doing: in my experiments it was not easy to trigger double
> compression, and when I did, the ratio was really small compared to
> total_write, so it doesn't justify a new branch in the normal path.
>
> Actually, I had asked you to add a double compression stat. It seems you
> forgot, but that's okay: I want to change the stat code and contents
> anyway, so I will send a patch after I send the stat rework. :)

aha... I misunderstood you. I thought you were talking about the test results
in particular, not about zram stats in general. hm, given that we haven't
closed the door on the 'idle compression streams list' yet, and may revert
per-cpu streams, maybe we can introduce this stat in 4.8? otherwise, in the
worst case, we will have to trim a user-visible stat file once again IF, for
some reason, we decide to bring the idle list back (hopefully we won't). does
it sound OK to you to not touch the stat file now?
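the retry path quoted above can be modeled in userspace roughly as follows
(a sketch only: alloc_fast()/alloc_can_sleep() are hypothetical stand-ins for
zs_malloc() without and with sleepable GFP flags, and compress() stands in
for the zcomp backend):

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

static int fast_alloc_failures = 1;      /* fail the first "atomic" attempt */

static void *alloc_fast(size_t len)      /* zs_malloc(), no direct reclaim */
{
	if (fast_alloc_failures > 0) {
		fast_alloc_failures--;
		return NULL;
	}
	return malloc(len);
}

static void *alloc_can_sleep(size_t len) /* zs_malloc(GFP_NOIO|__GFP_HIGHMEM) */
{
	return malloc(len);
}

struct stream { char buf[4096]; };       /* stands in for a per-cpu zcomp_strm */

static size_t compress(struct stream *zstrm, const char *src)
{
	/* placeholder "compression": a plain copy; real code calls the backend */
	size_t clen = strlen(src) + 1;
	memcpy(zstrm->buf, src, clen);
	return clen;
}

/*
 * The double-compression pattern: if the fast allocation fails, the per-cpu
 * stream must be released before the sleeping allocation (another writer may
 * reuse its buffer), so on success the page is compressed a second time.
 */
int write_page(const char *src, void **handle_out, int *compressions)
{
	struct stream zstrm;
	void *handle = NULL;
	size_t clen;

	*compressions = 0;
compress_again:
	clen = compress(&zstrm, src);
	(*compressions)++;
	if (!handle)
		handle = alloc_fast(clen);
	if (!handle) {
		/* real code: zcomp_strm_release() here, then a sleeping alloc */
		handle = alloc_can_sleep(clen);
		if (handle)
			goto compress_again;  /* the 're-compression' case */
		return -1;                    /* -ENOMEM */
	}
	memcpy(handle, zstrm.buf, clen);
	*handle_out = handle;
	return 0;
}
```

with the simulated allocation failure above, one write ends up compressing
the page twice, which is the event a 'double compression' stat would count.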

> Thanks for the great work, Sergey!

thanks!

> Acked-by: Minchan Kim <minchan@...nel.org>

thanks!

	-ss
