Message-ID: <9d2e63c4-ebb6-1f14-b8fb-b39f2f67d916@suse.cz>
Date: Wed, 14 Aug 2019 14:55:54 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Yang Shi <yang.shi@...ux.alibaba.com>,
Michal Hocko <mhocko@...nel.org>
Cc: kirill.shutemov@...ux.intel.com, hannes@...xchg.org,
rientjes@...gle.com, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [RESEND PATCH 1/2 -mm] mm: account lazy free pages separately
On 8/12/19 7:00 PM, Yang Shi wrote:
>> I can see that memcg rss size was the primary problem David was looking
>> at. But MemAvailable will not help with that, right? Moreover is
>
> Yes, but David actually would like a memcg-level MemAvailable (an
> accounter like the global one), which should be counted the same way
> and should account per-memcg deferred split THP properly.
>
>> accounting the full THP correct? What if subpages are still mapped?
>
> "Deferred split" definitely doesn't mean they are free. When memory
> pressure hits, they would be split, then the unmapped normal pages
> would be freed. So, when calculating MemAvailable, they are not
> accounted at 100%, but as "available += lazyfree - min(lazyfree / 2,
> wmark_low)", just like how page cache is accounted.
>
> We could get a more accurate count, i.e. check each subpage's
> mapcount when accounting, but it may change before the shrinker
> starts scanning. So, just use the ballpark estimate, trading away
> some accuracy to avoid the complexity.
If we know the mapcounts at the moment the deferred split is initiated (I
suppose there has to be an iteration over all subpages already?), we could get
the exact number to adjust the counter with, and also store that number
somewhere (e.g. an unused field in the first/second tail page; I think we
already do that for something). Then in the shrinker we just read that number
to adjust the counter back. That way we can ignore subpage mapping changes
before the shrink happens, as they shouldn't change the situation
significantly, and, importantly, we will be safe from counter imbalance thanks
to the stored number.