Message-ID: <20150604062712.GJ2241@blaptop>
Date: Thu, 4 Jun 2015 15:27:12 +0900
From: Minchan Kim <minchan@...nel.org>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH 07/10] zsmalloc: introduce auto-compact support
On Thu, Jun 04, 2015 at 02:30:56PM +0900, Sergey Senozhatsky wrote:
> On (06/04/15 13:57), Minchan Kim wrote:
> > On Sat, May 30, 2015 at 12:05:25AM +0900, Sergey Senozhatsky wrote:
> > > perform class compaction in zs_free(), if zs_free() has created
> > > a ZS_ALMOST_EMPTY page. this is the most trivial `policy'.
> >
> > Finally, I realized your intention.
> >
> > Actually, I had a plan to add /sys/block/zram0/compact_threshold_ratio,
> > which would compact automatically when compr_data_size/mem_used_total
> > falls below the threshold, but I didn't try it because it could be done
> > by a userspace tool.
> >
> > Another reason I didn't try that approach is that it could scan all
> > zs_objects repeatedly without freeing any zspage in some corner cases,
> > which could be a big overhead we should prevent, so we might need some
> > heuristic. For example, we could delay the next few compaction attempts
> > when the previous ones all failed.
>
> this is why I use zs_can_compact() -- to bail out of zs_compact() as soon
> as possible, so useless scans are minimized (well, that's the expectation,
> at least). I'm also thinking of a threshold-based solution -- do class
> auto-compaction only if we can free X pages, for example.
>
> the problem of compaction is that there is no compaction until you trigger
> it.
>
> and fragmented classes are not necessarily a win. if writes don't happen
> to a fragmented class-X (and we basically can't tell whether they will,
> nor can we estimate it; it depends on I/O and data patterns, the
> compression algorithm, etc.), then class-X stays fragmented without any
> benefit.
The problem is that migrating objects, freeing the old zspage and
allocating a new zspage is not cheap, either.
If the system has no problem with a small amount of fragmented space,
there is no point in paying that overhead.
So, ideally, we should trigger compaction once we realize the system
is in trouble, but I don't have any good idea how to detect that.
That's why I wanted to rely on a decision from the user via
compact_threshold_ratio.
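To make the proposed knob concrete, here is a minimal sketch of the
decision it would drive. This is illustrative only, not the zram/zsmalloc
API; the function name and the percent-based threshold are assumptions:

```c
#include <assert.h>

/* Hypothetical sketch of the proposed compact_threshold_ratio policy:
 * trigger compaction when the ratio of compressed data to memory
 * actually used by the pool (in percent) drops below a user-set
 * threshold, i.e. when enough of the pool is fragmentation/holes. */
static int should_auto_compact(unsigned long compr_data_size,
			       unsigned long mem_used_total,
			       unsigned int threshold_pct)
{
	if (mem_used_total == 0)
		return 0;	/* empty pool: nothing to compact */
	return (compr_data_size * 100) / mem_used_total < threshold_pct;
}
```

For example, with a 60% threshold, a pool where compressed data fills
only half of the memory it occupies would be compacted.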
>
> > mm/compaction.c has a simple design to prevent pointless overhead,
> > but historically it has caused pain several times and required more
> > complicated logic, and it's still painful.
> >
> > Another thing I found recently is that keeping zsmalloc unfragmented
> > is not always a win for zram. The fragmented space, although wasted
> > at the moment, could be used to store upcoming compressed objects.
> > If frequent compaction leaves no holes (i.e., fragmented space),
> > zsmalloc has to allocate a new zspage, which, under heavy memory
> > pressure, could land on a movable pageblock as a fallback from a
> > non-movable pageblock request, and that worsens the fragmentation
> > of system memory.
>
> yes, but compaction almost always leaves classes fragmented. I think
> it's a corner case when the number of unused allocated objects is
> exactly the same as the number of objects we migrated, and the number
> of migrated objects is exactly N*maxobj_per_zspage, so that we leave
> the class without any unused objects (OBJ_ALLOCATED == OBJ_USED).
> classes still have 'holes' after compaction.
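The freeable-pages estimate behind that condition can be sketched as
follows, in the spirit of zs_can_compact(). The names and layout here
are illustrative, not the zsmalloc internals:

```c
#include <assert.h>

/* Illustrative estimate of how many pages a class could free by
 * migrating live objects into its holes: only whole zspages worth
 * of unused-but-allocated objects can be reclaimed. When
 * obj_allocated == obj_used there are no holes and nothing to gain. */
static unsigned long can_compact_pages(unsigned long obj_allocated,
				       unsigned long obj_used,
				       unsigned long objs_per_zspage,
				       unsigned long pages_per_zspage)
{
	unsigned long obj_wasted;

	if (obj_allocated <= obj_used)
		return 0;	/* fully packed class */

	obj_wasted = obj_allocated - obj_used;
	return (obj_wasted / objs_per_zspage) * pages_per_zspage;
}
```

A zs_compact()-style loop could bail out early whenever this estimate
is zero (or below a chosen threshold), avoiding useless scans.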
>
>
> > So, I want to leave the policy to userspace.
> > If we find it's really troublesome for userspace, then we need to
> > think more.
>
> well, it could live behind an "aggressive compaction" or "automatic
> compaction" config option.
>
If you really want to do it automatically without any feedback
from userspace, we should find a better algorithm.
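One candidate for such an algorithm is the back-off heuristic mentioned
earlier in the thread: after compaction attempts that free nothing, skip
the next few attempts. The sketch below is purely illustrative (not
kernel code); the struct and function names are made up:

```c
#include <assert.h>

/* Illustrative back-off: after an attempt frees nothing, skip a
 * growing number of subsequent attempts (doubling up to a cap);
 * reset to eager compaction as soon as an attempt succeeds. */
struct compact_backoff {
	unsigned int skip;	/* attempts left to skip */
	unsigned int window;	/* current back-off window */
};

static int backoff_should_try(struct compact_backoff *b)
{
	if (b->skip) {
		b->skip--;
		return 0;	/* still backing off */
	}
	return 1;
}

static void backoff_report(struct compact_backoff *b, unsigned long freed)
{
	if (freed) {
		b->window = 0;	/* success: compact eagerly again */
		b->skip = 0;
	} else {
		b->window = b->window ? b->window * 2 : 1;
		if (b->window > 16)
			b->window = 16;
		b->skip = b->window;
	}
}
```

A zs_free()-triggered policy could consult backoff_should_try() before
scanning a class, and feed the result of each scan into backoff_report(),
bounding the repeated-scan overhead described above.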
--
Kind regards,
Minchan Kim