Message-ID: <20160526003241.GA9661@bbox>
Date:	Thu, 26 May 2016 09:32:41 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Cc:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 11/12] zsmalloc: page migration support

Hi Sergey,

On Thu, May 26, 2016 at 12:23:45AM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
> 
> On (05/25/16 14:14), Minchan Kim wrote:
> [..]
> > > > do you also want to kick the deferred page release from the shrinker
> > > > callback, for example?
> > > 
> > > Yeb, it can be. I will do it at next revision. :)
> > > Thanks!
> > > 
> > 
> > I tried it now, but I feel strongly that we want to fix the shrinker first.
> > Currently the shrinker doesn't consider the VM's request (i.e., sc->nr_to_scan)
> > but shrinks all objects, which can make latency huge.
> 
> hm... maybe.
> 
> I only briefly thought about it a while ago, and have no real data on
> hand. it was something as follows:
> between zs_shrinker_count() and zs_shrinker_scan() a lot can change;
> and _theoretically_ attempting zs_shrinker_scan() even with a smaller
> sc->nr_to_scan may still result in a "full" pool scan, taking all of
> the classes' ->locks one by one, just because the classes are not the
> same as they were a moment ago. which is even more probable, I think,
> once the system is getting low on memory and begins to swap out, for
> instance, because in that case we increase the number of writes to the
> zspool and, thus, reduce its chances of being compacted. if the system
> still demanded free memory, it would keep calling zs_shrinker_count()
> and zs_shrinker_scan() on us; at some point, I think, zs_shrinker_count()
> would start returning 0. ...if some of the classes had huge
> fragmentation, we would keep those classes' ->locks for some time,
> moving objects. other than that we would probably just iterate the
> classes.
> 
> purely theoretical.
> 
> do you have any numbers?

Unfortunately, I don't have any right now. However, I don't feel we need
data for that, because *unbounded work* in the VM interaction context is
bad. ;-)
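
Roughly, what I have in mind is something like this (an untested sketch
only; zs_compact_bounded() does not exist upstream, it just stands for
"compact until the VM's budget is met"):

static unsigned long zs_shrinker_scan(struct shrinker *shrinker,
				      struct shrink_control *sc)
{
	struct zs_pool *pool = container_of(shrinker, struct zs_pool,
					    shrinker);
	/* The VM tells us how much work it wants from this call. */
	unsigned long budget = sc->nr_to_scan;
	unsigned long pages_freed;

	/*
	 * Stop compacting classes as soon as "budget" pages have been
	 * reclaimed, instead of walking every class in the pool.
	 */
	pages_freed = zs_compact_bounded(pool, budget);

	return pages_freed ? pages_freed : SHRINK_STOP;
}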

> 
> hm, it probably makes sense to change it. but if the change just
> replaces "1 full pool scan" with "2 scans of 1/2 of the pool's classes",
> then I'm less sure.

Other shrinkers are in the same boat. They have a *cache* which is helpful
for performance, but if it's not hot it can cost performance when memory
pressure is severe. The shrinker is the compromise that keeps that balance.

We can see fragmented space in a zspage as wasted memory, which is bad;
on the other hand, we can see it as a cache to store upcoming compressed
pages. So too much freeing can hurt performance, and we should obey the
VM's rule: if memory pressure becomes severe, it wants to reclaim more
pages, in proportion to that pressure.
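
For reference, the VM already sizes each request in proportion to its own
scan pressure. This is a simplified sketch of the shape of that calculation
(modelled on do_shrink_slab() in mm/vmscan.c, not verbatim; the helper name
nr_to_ask_shrinker() is hypothetical):

static unsigned long nr_to_ask_shrinker(struct shrinker *shrinker,
					struct shrink_control *sc,
					unsigned long nr_scanned,
					unsigned long nr_eligible)
{
	unsigned long freeable = shrinker->count_objects(shrinker, sc);
	unsigned long long delta;

	if (!freeable)
		return 0;

	/* The harder the VM scans pages, the more it asks us to free. */
	delta = (4 * nr_scanned) / shrinker->seeks;
	delta *= freeable;
	do_div(delta, nr_eligible + 1);

	/* This ends up in sc->nr_to_scan, handed to ->scan_objects(). */
	return (unsigned long)delta;
}

So if zs_shrinker_scan() honors sc->nr_to_scan, the amount of compaction
we do automatically follows the memory pressure.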

> 
> > I want to fix it as a separate issue and then add ZS_EMPTY pool page
> > purging logic based on it, because a lot of zsmalloc work is stuck
> > behind this patchset now, which churns the old code heavily. :(
> 
> 	-ss
