Date:	Mon, 22 Feb 2016 11:57:09 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc:	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Joonsoo Kim <js1304@...il.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH v2 2/3] zram: use zs_get_huge_class_size_watermark()

On Mon, Feb 22, 2016 at 10:59:12AM +0900, Sergey Senozhatsky wrote:
> On (02/22/16 10:27), Minchan Kim wrote:
> [..]
> > > zram asks to store a PAGE_SIZE sized object, what zsmalloc can
> > > possible do about this?
> > 
> > zsmalloc can increase ZS_MAX_ZSPAGE_ORDER or can save metadata in
> > the extra space. In fact, I tried an interlink approach a long time
> > ago. For example, class-A -> class-B
> > 
> >         A = x, B = (4096 - y) >= x
> >
> > The problem was that a class-B zspage keeps consuming memory even
> > when there is no class-B object in it, because a class-A object
> > stored in the extra space of class-B pins the class-B zspage.
> 
> I thought about it too -- utilizing the 'unused space' to store
> smaller objects there. And I think it potentially has more problems.
> Compaction (and everything else) seems to be much simpler when we
> have only objects of size X in class_size X.
> 
> > I prefer your ZS_MAX_ZSPAGE_ORDER increasing approach but, as I
> > said in that thread, we should prepare for dynamic creation of
> > sub-pages in a zspage.
> 
> I agree that in general dynamic class page allocation sounds
> interesting enough.
> 
> > > > Having said that, I agree with your claim that incompressible
> > > > pages are a pain. I want to handle the problem with a
> > > > multiple-swap approach.
> > > 
> > > zram is not just for swapping. As simple as that.
> > 
> > Yes, I mean that if we have a backing storage, we could mitigate
> > the problem with the approach mentioned. Otherwise, we should solve
> > it in the allocator itself; you suggested the idea and I commented
> > on a first step. What's the problem now?
> 
> well, I didn't say I have problems.
> so you want a backing device that will keep only 'bad compression'
> objects, and use zsmalloc to keep only 'good compression' objects?
> IOW, no huge classes in zsmalloc at all? well, that can work out.
> it's a bit strange, though, that to solve zram-zsmalloc issues we
> would ask someone to create an additional device. it looks (at least
> for now) like we can address those issues entirely in zram-zsmalloc,
> w/o user intervention or a 3rd party device.

Agree. That's what I want. zram shouldn't be aware of the allocator's
internal implementation. IOW, zsmalloc should handle it without
exposing any internal limitation.
The backing-device issue is orthogonal, but my point was that it could
also solve the issue without exposing zsmalloc's limitation to zram.

Let me summarize my points here.

Let's make zsmalloc smarter to reduce wasted space. One option is
dynamic page creation, which I agreed with.

Before adding that feature, we should test how much bigger the memory
footprint gets without it if we simply increase ZS_MAX_ZSPAGE_ORDER.
If the increase is not big, we could go with your patch as-is, without
adding more complex machinery (i.e., dynamic page creation).

Please check max_used_pages rather than mem_used_total to see the
memory footprint at a given moment, and test a heavily fragmented
scenario (creating files and freeing parts of them) rather than just
a full copy.

If memory footprint is high, we can decide to go dynamic page
creation.
