Message-ID: <CANFwon1Kb0iWFDk_5jcxBk5F7NjY6o7aSuvDMTwSt1XshTFyEw@mail.gmail.com>
Date: Mon, 5 Sep 2016 14:02:14 +0800
From: Hui Zhu <teawater@...il.com>
To: Minchan Kim <minchan@...nel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Hui Zhu <zhuhui@...omi.com>, ngupta@...are.org,
Hugh Dickins <hughd@...gle.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, acme@...nel.org,
alexander.shishkin@...ux.intel.com,
Andrew Morton <akpm@...ux-foundation.org>, mhocko@...e.com,
hannes@...xchg.org, mgorman@...hsingularity.net, vbabka@...e.cz,
redkoi@...tuozzo.com, luto@...nel.org,
kirill.shutemov@...ux.intel.com, geliangtang@....com,
baiyaowei@...s.chinamobile.com, dan.j.williams@...el.com,
vdavydov@...tuozzo.com, aarcange@...hat.com, dvlasenk@...hat.com,
jmarchan@...hat.com, koct9i@...il.com, yang.shi@...aro.org,
dave.hansen@...ux.intel.com, vkuznets@...hat.com,
vitalywool@...il.com, ross.zwisler@...ux.intel.com,
Thomas Gleixner <tglx@...utronix.de>,
kwapulinski.piotr@...il.com, axboe@...com, mchristi@...hat.com,
Joe Perches <joe@...ches.com>, namit@...are.com,
Rik van Riel <riel@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>
Subject: Re: [RFC 0/4] ZRAM: make it just store the high compression rate page
On Mon, Sep 5, 2016 at 1:51 PM, Minchan Kim <minchan@...nel.org> wrote:
> On Mon, Sep 05, 2016 at 01:12:05PM +0800, Hui Zhu wrote:
>> On Mon, Sep 5, 2016 at 10:18 AM, Minchan Kim <minchan@...nel.org> wrote:
>> > On Thu, Aug 25, 2016 at 04:25:30PM +0800, Hui Zhu wrote:
>> >> On Thu, Aug 25, 2016 at 2:09 PM, Sergey Senozhatsky
>> >> <sergey.senozhatsky.work@...il.com> wrote:
>> >> > Hello,
>> >> >
>> >> > On (08/22/16 16:25), Hui Zhu wrote:
>> >> >>
>> >> >> Currently, zram stores all pages, even when a page compresses
>> >> >> really badly, so the compression ratio of zram is out of control
>> >> >> while it is running.
>> >> >> On my side, I did some tests and recording with zram. The
>> >> >> compression ratio is about 40%.
>> >> >>
>> >> >> This series of patches makes zram store only the pages whose
>> >> >> compressed size is smaller than a given threshold.
>> >> >> With these patches, I set the threshold to 2048 and ran the same
>> >> >> test as before. The compression ratio is about 20%, and the number
>> >> >> of lowmemorykiller invocations also decreased.
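>> >> >>
>> >> >> The check itself is small. A minimal sketch of the idea (the
>> >> >> names and the surrounding code are illustrative, not the exact
>> >> >> code in the patches):
>> >> >>
>> >> >> #include <stdbool.h>
>> >> >> #include <stddef.h>
>> >> >>
>> >> >> /*
>> >> >>  * Illustrative sketch only: zram compresses the page first; if
>> >> >>  * the result is not smaller than the threshold (2048 bytes in my
>> >> >>  * test), the store is refused, and reclaim will mark the page
>> >> >>  * non-swap instead of letting zsmalloc keep it in the PAGE_SIZE
>> >> >>  * class.
>> >> >>  */
>> >> >> static bool zram_accepts(size_t comp_len, size_t threshold)
>> >> >> {
>> >> >> 	return comp_len < threshold;
>> >> >> }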
>> >> >
>> >> > I haven't looked at the patches in detail yet. can you educate me a bit?
>> >> > is your test stable? why has the number of lowmemorykill-s decreased?
>> >> > ... or am I reading "the number of lowmemorykiller invocations also
>> >> > decreased" wrong?
>> >> >
>> >> > suppose you have X pages that compress badly (from zram's point of
>> >> > view). zram stores such pages uncompressed, IOW we have no memory
>> >> > savings - the swapped-out page lands in zsmalloc's PAGE_SIZE class. now
>> >> > you don't try to store those pages in zsmalloc, but keep them as
>> >> > unevictable. so the page still occupies PAGE_SIZE; no memory saving
>> >> > again. why did it improve LMK?
>> >>
>> >> No, with these patches zram will not store this page uncompressed. It
>> >> will mark the page as non-swap and kick it back to shrink_page_list.
>> >> shrink_page_list will remove this page from the swap cache and move it
>> >> to the unevictable list.
>> >> Then this page will not be swapped out again until it is written to.
>> >> That is why most of the code is around vmscan.c.
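>> >>
>> >> Roughly, the reclaim-side handling looks like the fragment below.
>> >> The non-swap predicate and page flag are illustrative names, not the
>> >> exact identifiers in the patches; delete_from_swap_cache() and the
>> >> unevictable LRU are the existing kernel mechanisms:
>> >>
>> >> /* inside shrink_page_list()'s main loop, per page */
>> >> if (page_rejected_by_zram(page)) {	/* hypothetical predicate */
>> >> 	SetPageNonSwap(page);		/* hypothetical page flag */
>> >> 	delete_from_swap_cache(page);	/* drop its swap slot */
>> >> 	/* park it on the unevictable LRU until it is written to */
>> >> 	add_page_to_unevictable_list(page);
>> >> 	continue;
>> >> }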
>> >
>> > If I understand Sergey's point right, he means there is no memory
>> > saving between before and after.
>> >
>> > With your approach, you can prevent unnecessary pageout (i.e.,
>> > uncompressible pages being swapped out), but it doesn't mean you save
>> > memory compared to the old behavior, so why does your patch decrease
>> > the number of lowmemory killings?
>> >
>> > One thing I can imagine is that, without this feature, zram could fill
>> > up with uncompressible pages so that well-compressible pages cannot be
>> > swapped out.
>> > Hui, is this scenario right for your case?
>> >
>>
>> That is one reason. But it is not the principal one.
>>
>> Another reason is that when swap is putting pages into zram, what the
>> system wants is to get memory back.
>> So the deal is: the system spends CPU time and memory to get memory. If
>> zram accepts only the pages with a high compression ratio, the system
>> gets more memory back for the same amount of memory spent, which pulls
>> the system out of the low-memory status earlier. (Maybe more CPU time,
>> because of the compression ratio checks. But maybe less, because fewer
>> pages need to be processed. That is the interesting part. :)
>> I think that is why the number of lmk kills decreases.
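>>
>> For illustration, with made-up numbers: if reclaim swaps 100 pages into
>> zram at a ~40% compression ratio, zram grows by ~40 pages' worth of
>> memory, a net gain of ~60 pages; at ~20% it grows by only ~20 pages, a
>> net gain of ~80 pages for the same amount of reclaim work. So the
>> system climbs out of the low-memory status sooner.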
>>
>> And yes, all of this depends on the number of high-compression-ratio
>> pages. So you cannot just set a non_swap limit on the system and get
>> everything for free. You need to do a lot of testing around it to make
>> sure the non_swap limit is good for your system.
>>
>> And I think using AOP_WRITEPAGE_ACTIVATE without kicking the page to a
>> special list will sometimes make the CPU too busy, because the page
>> just goes back to the active list and comes around again.
>
> Yes, and it would be the same with your patch if the new data arriving
> in a write to a CoWed page is uncompressible.
>
>> I did some tests before kicking pages to a special list. The shrink task
>
> What kinds of tests? Could you elaborate a bit more?
> "shrink task" - what does that mean?
>
Sorry for this part. "Shrink task" should be the function shrink_page_list.
I will do more testing on that and post the patch later.
Thanks,
Hui
>> will be moved around, around, and around, because the pages with a low
>> compression ratio just get moved from one list to another again and
>> again.
>> And all these low-compression-ratio pages always stay together.
>
> I cannot understand it without a more detailed description. :(
> Could you explain more?