Message-ID: <20200227074748.GA18113@js1304-desktop>
Date: Thu, 27 Feb 2020 16:48:47 +0900
From: Joonsoo Kim <js1304@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Hugh Dickins <hughd@...gle.com>,
Minchan Kim <minchan@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Mel Gorman <mgorman@...hsingularity.net>, kernel-team@....com
Subject: Re: [PATCH v2 0/9] workingset protection/detection on the anonymous
LRU list
Hello, Andrew.
On Wed, Feb 26, 2020 at 07:39:42PM -0800, Andrew Morton wrote:
> On Thu, 20 Feb 2020 14:11:44 +0900 js1304@...il.com wrote:
>
> > From: Joonsoo Kim <iamjoonsoo.kim@....com>
> >
> > Hello,
> >
> > This patchset implements workingset protection and detection on
> > the anonymous LRU list.
>
> The test robot measurement got my attention!
>
> http://lkml.kernel.org/r/20200227022905.GH6548@shao2-debian
I was really hoping to get some attention!!!
Thanks, test robot and Andrew.
>
> > * Changes on v2
> > - fix a critical bug that used an out-of-bounds lru list index in
> >   workingset_refault()
> > - fix a bug that reused the rotate value of the previous page
> >
> > * SUBJECT
> > workingset protection
> >
> > * PROBLEM
> > In the current implementation, a newly created or swapped-in anonymous
> > page starts on the active list. Growing the active list results in
> > rebalancing the active/inactive lists, so old pages on the active list
> > are demoted to the inactive list. Hence, a hot page on the active list
> > isn't protected at all.
> >
> > Following is an example of this situation.
> >
> > Assume that there are 50 hot pages on the active list and that the
> > system can hold 100 pages in total. The numbers denote the number of
> > pages on the active/inactive lists (active | inactive); (h) stands for
> > hot pages and (uo) for used-once pages.
> >
> > 1. 50 hot pages on active list
> > 50(h) | 0
> >
> > 2. workload: 50 newly created (used-once) pages
> > 50(uo) | 50(h)
> >
> > 3. workload: another 50 newly created (used-once) pages
> > 50(uo) | 50(uo), swap-out 50(h)
> >
> > As we can see, the hot pages are swapped out, causing swap-ins later.
> >
> > * SOLUTION
> > Since this is what we want to avoid, this patchset implements workingset
> > protection. Like as the file LRU list, newly created or swap-in anonymous
> > page is started on the inactive list. Also, like as the file LRU list,
> > if enough reference happens, the page will be promoted. This simple
> > modification changes the above example as following.
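
To make the example above concrete, here is a toy userspace simulation
(not kernel code; the list handling, the "keep the active list at half of
memory" rule, and the page ids are made up purely for illustration). It
models two FIFO lists with a 100-page budget and compares starting new
pages on the active list with starting them on the inactive list:

/* lru_sim.c - toy model of the active/inactive LRU example above.
 * Not kernel code: list sizes, page ids, and the rebalancing rule are
 * made up to match the 50(h)/50(uo) example, nothing more.
 * Build: cc -o lru_sim lru_sim.c && ./lru_sim
 */
#include <stdio.h>
#include <string.h>

#define TOTAL 100               /* total pages the system can hold */
#define HALF  (TOTAL / 2)

struct lru {
    int page[TOTAL];
    int nr;
};

/* Add a page to the head (hottest end) of a list. */
static void lru_add(struct lru *l, int page)
{
    memmove(&l->page[1], &l->page[0], l->nr * sizeof(int));
    l->page[0] = page;
    l->nr++;
}

/* Remove and return the coldest (tail) page, or -1 if the list is empty. */
static int lru_pop_tail(struct lru *l)
{
    return l->nr ? l->page[--l->nr] : -1;
}

/*
 * Fault in a new page.  If memory is full, reclaim ("swap out") the tail
 * of the inactive list and return it; otherwise return -1.  Depending on
 * the policy, the new page starts on the active or the inactive list; the
 * active list is kept at half of memory by demoting its tail.
 */
static int fault_in(struct lru *active, struct lru *inactive,
                    int page, int start_active)
{
    int evicted = -1;

    if (active->nr + inactive->nr >= TOTAL)
        evicted = lru_pop_tail(inactive);

    if (start_active) {
        if (active->nr >= HALF)
            lru_add(inactive, lru_pop_tail(active));
        lru_add(active, page);
    } else {
        lru_add(inactive, page);
    }
    return evicted;
}

static int run(int start_active)
{
    struct lru active = { .nr = 0 }, inactive = { .nr = 0 };
    int hot_evicted = 0, p, victim;

    /* Step 1: 50 hot pages (ids 0..49) already sit on the active list. */
    for (p = 0; p < HALF; p++)
        lru_add(&active, p);

    /* Steps 2-3: 100 used-once pages (ids 100..199) fault in. */
    for (p = 100; p < 100 + TOTAL; p++) {
        victim = fault_in(&active, &inactive, p, start_active);
        if (victim >= 0 && victim < HALF)
            hot_evicted++;      /* a hot page got swapped out */
    }
    return hot_evicted;
}

int main(void)
{
    printf("new pages start on active list:   %d hot pages swapped out\n",
           run(1));
    printf("new pages start on inactive list: %d hot pages swapped out\n",
           run(0));
    return 0;
}

With new pages started on the active list, the toy model evicts all 50
hot pages, exactly as in the example; with them started on the inactive
list, none of the hot pages are touched.
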
>
> One wonders why on earth we weren't doing these things in the first
> place?
I don't know. I tried to find the origin of this behaviour and found
that it comes from you, 18 years ago. :)
The commit message mentions that starting pages on the active list boosts
throughput on a stupid swapstormy test, but I cannot guess the exact
reason for that improvement.
Anyway, the related patch history follows. Do you remember anything
about it?
commit 018c71d821e7cfb13470e43778645c899c30c53e
Author: Andrew Morton <akpm@...eo.com>
Date: Thu Oct 31 04:09:19 2002 -0800
[PATCH] start anon pages on the active list (properly this time)
Use lru_cache_add_active() to ensure that pages which are, or will be,
mapped into pagetables are started out on the active list.
commit 1527d0b71fa1e9db1beb22fda689b9086d025455
Author: Andrew Morton <akpm@...eo.com>
Date: Thu Oct 31 04:09:13 2002 -0800
[PATCH] lru_add_active(): for starting pages on the active list
This is the first in a series of patches which tune up the 2.5
performance under heavy swap loads.
Throughput on stupid swapstormy tests is increased by 1.5x to 3x.
Still about 20% behind 2.4 with multithreaded tests. That is not
easily fixable - the virtual scan tends to apply a form of load
control: particular processes are heavily swapped out so the others can
get ahead. With 2.5 all processes make very even progress and much
more swapping is needed. It's on par with 2.4 for single-process
swapstorms.
In this patch:
The code which tries to start mapped pages out on the active list
doesn't work very well. It uses an "is it mapped into pagetables"
test. Which doesn't work for, say, swap readahead pages. They are not
mapped into pagetables when they are spilled onto the LRU.
So create a new `lru_cache_add_active()' function for deferred addition
of pages to their active list.
Also move mark_page_accessed() from filemap.c to swap.c where all
similar functions live. And teach it to not try to move pages which
are in the deferred-addition list onto the active list. That won't
work, and it's bogusly clearing PageReferenced in that case.
The deferred-addition lists are a pest. But lru_cache_add used to be
really expensive in some workloads on some machines. Must persist.
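
For what it's worth, my reading of the "deferred addition" above is simply
batching: pages are parked in a small per-CPU buffer and spliced onto the
active list in one go, so the LRU lock isn't taken for every single page.
A rough sketch of that idea, assuming nothing beyond the commit text
(plain userspace C; the names, the batch size, and the dummy struct page
are invented, not the kernel's actual implementation):

/* batch_sketch.c - rough illustration of the "deferred addition" idea:
 * instead of taking the LRU lock for every page, park pages in a small
 * batch and splice the whole batch onto the active list in one go.
 * Build: cc -o batch_sketch batch_sketch.c && ./batch_sketch
 */
#include <stdio.h>

struct page { int id; };

#define BATCH_SIZE 4                    /* illustrative only */

static struct page *batch[BATCH_SIZE];
static unsigned int batch_nr;

/* The one "expensive" step: pretend this takes the LRU lock and splices
 * every queued page onto the head of the active list. */
static void flush_batch_to_active_list(void)
{
    printf("splicing %u pages onto the active list under one lock\n",
           batch_nr);
    batch_nr = 0;
}

/* Deferred addition: just queue the page; flush only when the batch fills. */
static void add_active_deferred(struct page *page)
{
    batch[batch_nr++] = page;
    if (batch_nr == BATCH_SIZE)
        flush_batch_to_active_list();
}

/* Pages still sitting in the batch are not on any LRU list yet, so an
 * accessed-marking helper must leave them alone rather than try to move
 * them and clear their referenced bit. */
static int page_is_deferred(struct page *page)
{
    for (unsigned int i = 0; i < batch_nr; i++)
        if (batch[i] == page)
            return 1;
    return 0;
}

int main(void)
{
    struct page pages[6] = { {0}, {1}, {2}, {3}, {4}, {5} };

    for (int i = 0; i < 6; i++)
        add_active_deferred(&pages[i]);

    printf("page 5 deferred, not on any LRU list yet: %s\n",
           page_is_deferred(&pages[5]) ? "yes" : "no");
    return 0;
}

The last helper is only there to show why mark_page_accessed() has to
skip pages that are still in the batch: they are not on any LRU list yet,
so there is nothing to move.
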
> > * SUBJECT
> > workingset detection
>
> It sounds like the above simple aging changes provide most of the
> improvement, and that the workingset changes are less beneficial and a
> bit more risky/speculative?
I don't think so.
Although the test robot only found the improvement from the simple ratio
changes, the later patches have their own benefits as well. I saw the
benefit of the other patches on our production workload, although that
isn't mentioned in the cover letter.
And what this patchset does looks like the reasonable thing to do.
> If so, would it be best for us to concentrate on the aging changes
> first, let that settle in and spread out and then turn attention to the
> workingset changes?
I hope that more developers pay attention to this patchset and that the
patches are merged together.
Thanks.