Message-ID: <CAAmzW4NDVznjOsW1Vgg1P+0vSQarE1ziY=MN5S5f70pQiOPn-Q@mail.gmail.com>
Date:   Tue, 2 Jun 2020 11:34:17 +0900
From:   Joonsoo Kim <js1304@...il.com>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     Linux Memory Management List <linux-mm@...ck.org>,
        Rik van Riel <riel@...riel.com>,
        Minchan Kim <minchan.kim@...il.com>,
        Michal Hocko <mhocko@...e.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        LKML <linux-kernel@...r.kernel.org>, kernel-team@...com
Subject: Re: [PATCH 05/14] mm: workingset: let cache workingset challenge anon

On Tue, Jun 2, 2020 at 12:56 AM, Johannes Weiner <hannes@...xchg.org> wrote:
>
> On Mon, Jun 01, 2020 at 03:14:24PM +0900, Joonsoo Kim wrote:
> > On Sat, May 30, 2020 at 12:12 AM, Johannes Weiner <hannes@...xchg.org> wrote:
> > >
> > > On Fri, May 29, 2020 at 03:48:00PM +0900, Joonsoo Kim wrote:
> > > > On Fri, May 29, 2020 at 2:02 AM, Johannes Weiner <hannes@...xchg.org> wrote:
> > > > > On Thu, May 28, 2020 at 04:16:50PM +0900, Joonsoo Kim wrote:
> > > > > > On Wed, May 27, 2020 at 10:43 PM, Johannes Weiner <hannes@...xchg.org> wrote:
> > > > > > > On Wed, May 27, 2020 at 11:06:47AM +0900, Joonsoo Kim wrote:
> > > > > The only way they could get reclaimed is if their access distance ends
> > > > > up bigger than the file cache. But if that's the case, then the
> > > > > workingset is overcommitted, and none of the pages qualify for reclaim
> > > > > protection. Picking a subset to protect against the rest is arbitrary.
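
The activation criterion being debated above can be modeled roughly as
follows (a simplified Python sketch of the refault-distance comparison,
not the actual mm/workingset.c code; the function name and flag are
illustrative):

```python
# Simplified model of the refault-based activation decision under
# discussion. Sizes are in pages; names are illustrative. Before the
# patch, a refaulting cache page is activated only if its refault
# distance fits within the active file list; the patch lets it
# challenge the anon pages as well ("non-resident dist < active + anon").

def should_activate(refault_distance, active_file, anon, challenge_anon):
    """Return True if a refaulting cache page should be activated."""
    if challenge_anon:
        # Patched behavior: compete with the combined workingset.
        return refault_distance <= active_file + anon
    # Old behavior: compete with the active file list only.
    return refault_distance <= active_file

# A page whose access distance exceeds the whole workingset is not
# activated in either mode; a closer page is activated only once it
# is allowed to challenge anon.
print(should_activate(1000, 200, 300, True))   # 1000 > 500  -> False
print(should_activate(400, 200, 300, True))    # 400 <= 500  -> True
print(should_activate(400, 200, 300, False))   # 400 > 200   -> False
```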
> > > >
> > > > In the fixed example, although the other file (500 MB) is repeatedly
> > > > accessed, it's not the workingset. If we had a unified list (file + anon),
> > > > the access distance of Pn would be larger than the whole memory size.
> > > > Therefore, the workingset is not overcommitted, and this patch wrongly
> > > > tries to activate Pn. As I said before, without considering inactive_age
> > > > for the anon list, this calculation cannot be correct.
> > >
> > > You're right. If we don't take anon age into account, the activations
> > > could be over-eager; however, so would counting IO cost and exerting
> > > pressure on anon be, which means my previous patch to split these two
> > > wouldn't fix the fundamental problem you're pointing out. We simply
> >
> > Splitting would not fix the fundamental problem (over-eagerness), but it
> > would greatly mitigate it. Just counting IO cost doesn't break the
> > active/inactive separation in the file list. It does cause more scanning
> > of the anon list, but I think that's tolerable.
>
> I think the split is a good idea.
>
> The only thing I'm not sure yet is if we can get away without an
> additional page flag if the active flag cannot be reused to denote
> thrashing. I'll keep at it, maybe I can figure something out.
>
> But I think it would be follow-up work.
>
> > > have to take anon age into account for the refaults to be comparable.
> >
> > Yes, taking anon age into account is also a good candidate to fix the problem.
>
> Okay, good.
>
> > > However, your example cannot have a completely silent stable state. As
> > > we stop workingset aging, the refault distances will slowly increase
> > > again. We will always have a bit of churn, and rightfully so, because
> > > the workingset *could* go stale.
> > >
> > > That's the same situation in my cache-only example above. Anytime you
> > > have a subset of pages that by itself could fit into memory, but can't
> > > because of an established workingset, ongoing sampling is necessary.
> > >
> > > But the rate definitely needs to reduce as we detect that in-memory
> > > pages are indeed hot. Otherwise we cause more churn than is required
> > > for an appropriate rate of workingset sampling.
> > >
> > > How about the patch below? It looks correct, but I will have to re-run
> > > my tests to make sure I / we are not missing anything.
> >
> > Much better! It mostly resolves my concern.
>
> Okay thanks for confirming. I'll send a proper version to Andrew.

Okay.

> > But I still think the modified refault activation equation isn't safe.
> > The next problem I found is related to the scan ratio limit patch ("limit
> > the range of LRU type balancing") in this series. See the example below.
> >
> > anon: Hot (X M)
> > file: Hot (200 M) / dummy (200 M)
> > P: 1200 M (3 parts, each one 400 M, P1, P2, P3)
> > Access Pattern: A -> F(H) -> P1 -> A -> F(H) -> P2 -> ... ->
> >
> > Without this patch, A and F(H) are kept on the memory and look like
> > it's correct.
> >
> > With this patch and the fix below, the refault equation for Pn would be:
> >
> > Refault dist of Pn = 1200 (from file non-resident)
> >                      + 1200 * anon scan ratio (from anon non-resident)
> > anon + active file = X + 200
> > Activation condition: 1200 + 1200 * anon scan ratio (0.5 ~ 2.0) < X + 200
>
> That doesn't look quite right to me. The anon part of the refault
> distance is driven by X, so the left-hand of this formula contains X
> as well.
>
> 1000 file (1200M reuse distance, 200M in-core size) + F(H) reactivations + X * scan ratio < X + 1000

As I said before, there is no X on the left-hand side of this formula. To
access all Pn and re-access P1, we need to scan and reclaim 1200 MB of the
file list; no more scanning is needed. With your patch "limit the range of
LRU type balancing", the scan ratio between the file and anon lists is
limited to 0.5 ~ 2.0, so the maximum anon scan would be 1200 MB * 2.0, that
is, 2400 MB, which is not bounded by X. That means the file list cannot be
stable for some X.
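
The inequality above can be checked numerically (a back-of-the-envelope
sketch of the example with sizes in MB; the function name and the explicit
ratio clamping are illustrative assumptions, not kernel code):

```python
# Back-of-the-envelope check of the example: hot file F(H) = 200 MB,
# use-once set P = 1200 MB, anon workingset = X MB. The anon scan ratio
# is clamped to [0.5, 2.0] by the "limit the range of LRU type
# balancing" patch, so the anon contribution to Pn's refault distance
# is bounded by 2.0 * 1200 MB regardless of how large X is.

def pn_activates(x_anon_mb, anon_scan_ratio):
    """Would a refaulting Pn page be activated for a given anon size X?"""
    ratio = min(max(anon_scan_ratio, 0.5), 2.0)   # clamped scan ratio
    refault_dist = 1200 + 1200 * ratio            # file + anon non-resident
    return refault_dist < x_anon_mb + 200         # vs. anon + active file

# With a small anon workingset, Pn is (correctly) not activated...
print(pn_activates(1000, 2.0))   # 3600 < 1200 -> False
# ...but with a large enough X, Pn activates and can evict F(H),
# because the left-hand side stops growing with X.
print(pn_activates(4000, 2.0))   # 3600 < 4200 -> True
```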

> Activations persist as long as anon isn't fully scanned and it isn't
> established yet that it's fully hot. Meaning, we optimistically assume
> the refaulting pages can be workingset until we're proven wrong.
>
> > Depending on the size of X, Pn's refault result would be different. Pn
> > could be activated with a large enough X, and then F(H) could be evicted.
> > In the ideal case (a unified list), Pn should not be activated for any X
> > in this example.
>
> Yes. The active/iocost split would allow us to be smarter about it.
>
> > This is a fundamental problem, since we have two list types (file/anon)
> > and a scan ratio limit is required. Anyway, we need to take care of this
> > reality, and the safest way is to count IO cost instead of doing
> > activation in this 'non-resident dist < (active + anon list)' case.
>
> Agreed here again.
>
> > Again, I'm not confident about this patch myself, so please let me know
> > if I'm wrong.
>
> As far as this patch goes, I think it's important to look at the
> bigger picture.
>
> We need to have convergence first before being able to worry about
> optimizing. Stable states are optimizations, but false stable states
> are correctness problems.
>
> For the longest time, we scanned active pages unconditionally during
> page reclaim. This was always safe in the sense that it wouldn't get
> stuck on a stale workingset, but it incurs unnecessary workingset
> churn when reclaim is driven by use-once patterns.
>
> We optimized the latter too aggressively, and as a result caused
> situations where we indefinitely fail to cache the hottest
> data. That's not really a workable trade-off.
>
> With the active/iocost split you're suggesting, we can reasonably
> optimize your example scenario. But we can't do it if the flipside
> means complete failure to transition between in-memory sets.
>
> So I think we should go ahead with this patch (with the anon age
> recognition fixed, because that's a correctness issue), and follow it
> up with the stable state optimization to shrink anon first.

If the example I found above is valid (your confirmation is required), it is
also a correctness issue, since cold pages repeatedly cause eviction of the
hot pages.

In that case, both options (without the patch, with the patch) have some
correctness issue, so we need to judge which one is better in terms of
overall impact. I don't have a strong opinion about it, so it's up to you to
decide which way to go.

Thanks.
