Message-ID: <20160218101909.GB503@swordfish>
Date: Thu, 18 Feb 2016 19:19:09 +0900
From: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Joonsoo Kim <js1304@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Minchan Kim <minchan@...nel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: Re: [RFC PATCH 3/3] mm/zsmalloc: change ZS_MAX_PAGES_PER_ZSPAGE
On (02/18/16 18:55), Sergey Senozhatsky wrote:
> > There is a reason that it is order 2. Increasing ZS_MAX_PAGES_PER_ZSPAGE
> > is related to ZS_MIN_ALLOC_SIZE. If we don't have enough OBJ_INDEX_BITS,
> > ZS_MIN_ALLOC_SIZE would increase, and that causes a regression on some
> > systems.
>
> Thanks!
>
> do you mean PHYSMEM_BITS != BITS_PER_LONG systems? PAE/LPAE? isn't it
> the case that on those systems ZS_MIN_ALLOC_SIZE is already bigger than 32?
I mean, yes, there are ZS_ALIGN requirements that I completely ignored,
thanks for pointing that out.
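
for reference, here is (roughly) how the minimum size falls out of the
handle encoding -- a small userspace sketch mirroring the mm/zsmalloc.c
macros of this era; the MAX_PHYSMEM_BITS values below are config-dependent
assumptions on my side, not something the macros themselves pin down:

#include <stdio.h>

/*
 * zsmalloc packs <PFN, obj_idx, tag bit> into a single unsigned long
 * handle, so the bits left over for obj_idx shrink as the physical
 * address space grows -- and ZS_MIN_ALLOC_SIZE grows in response.
 */
#define PAGE_SHIFT		12
#define OBJ_TAG_BITS		1
#define ZS_MAX_PAGES_PER_ZSPAGE	4	/* 1UL << ZS_MAX_ZSPAGE_ORDER */

static unsigned long zs_min_alloc_size(int bits_per_long, int max_physmem_bits)
{
	int pfn_bits = max_physmem_bits - PAGE_SHIFT;
	int obj_index_bits = bits_per_long - pfn_bits - OBJ_TAG_BITS;
	unsigned long min = ((unsigned long)ZS_MAX_PAGES_PER_ZSPAGE
				<< PAGE_SHIFT) >> obj_index_bits;

	/* ZS_MIN_ALLOC_SIZE = MAX(32, ...) */
	return min > 32 ? min : 32;
}

int main(void)
{
	/* 64 bit: plenty of obj_idx bits left, minimum stays at 32 */
	printf("x86_64 (46 physmem bits): %lu\n", zs_min_alloc_size(64, 46));
	/* x86 PAE (CONFIG_HIGHMEM64G): 32 - 24 - 1 = 7 obj_idx bits,
	 * so (4 << 12) >> 7 = 128 */
	printf("x86 PAE (36 physmem bits): %lu\n", zs_min_alloc_size(32, 36));
	return 0;
}

so on a PAE config the minimum is already 128 bytes, and, if I read the
macros right, doubling ZS_MAX_PAGES_PER_ZSPAGE would double it to 256 --
which I take to be the regression you mean.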
just saying, not insisting on anything: theoretically, trading 32-byte
objects in exchange for reducing a much bigger memory wastage is sort of
interesting. zram stores objects bigger than 3072 bytes as huge objects,
leaving up to 4096-3072 = 1024 bytes unused, and it'd take
(4096-3072)/32 = 32 completely wasted 32-byte objects to beat that single
'bad' compression object in storage inefficiency...
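
to put numbers on that, a trivial back-of-the-envelope sketch (assuming
PAGE_SIZE = 4096; 3072 is zram's max_zpage_size, i.e. 3 * PAGE_SIZE / 4):

#include <stdio.h>

int main(void)
{
	const unsigned int page_size = 4096;
	/* zram's max_zpage_size: a page that compresses to more than
	 * this is stored uncompressed, as a full-page "huge" object */
	const unsigned int max_zpage_size = 3072;

	/* worst case: an object just over the threshold eats a whole
	 * page, wasting almost page_size - max_zpage_size bytes */
	unsigned int huge_waste = page_size - max_zpage_size;

	printf("waste per huge object: up to %u bytes\n", huge_waste);
	/* even if every 32-byte allocation were pure overhead, this
	 * many of them are needed to match one huge object's waste */
	printf("equivalent 32-byte objects: %u\n", huge_waste / 32);
	return 0;
}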
well, patches 0001/0002 try to address this a bit, but the biggest
problem is still there: we have too many ->huge classes, and they are
far from good.
-ss