Message-ID: <12F2BA64-E7F8-49C6-B062-96DF48DF0192@fb.com>
Date: Mon, 29 Aug 2022 20:49:31 +0000
From: "Alex Zhu (Kernel)" <alexlzhu@...com>
To: Zi Yan <ziy@...dia.com>
CC: "linux-mm@...ck.org" <linux-mm@...ck.org>,
Matthew Wilcox <willy@...radead.org>,
"hannes@...xchg.org" <hannes@...xchg.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"riel@...riel.com" <riel@...riel.com>,
Kernel Team <Kernel-team@...com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC 3/3] mm: THP low utilization shrinker

> How stale could the information in the utilization bucket be?

The staleness is capped by the duration of the scan, 70s in the
example.

> Is it possible that THP shrinker splits a THP that used to have a lot
> of zero-filled subpages but now has all subpages filled with useful
> values?

This is possible, but we free only the zero-filled subpages, which by
definition cannot hold any useful values. How often THPs move between
utilization buckets should be workload dependent.
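
For reference, a minimal sketch of how a subpage could be tested for
being entirely zero-filled; subpage_is_zero_filled() is a name I made
up here, but kmap_local_page() and memchr_inv() are the standard kernel
primitives for this kind of check:

static bool subpage_is_zero_filled(struct page *page)
{
	void *kaddr = kmap_local_page(page);
	/* memchr_inv() returns NULL when every byte matches 0 */
	bool zero = memchr_inv(kaddr, 0, PAGE_SIZE) == NULL;

	kunmap_local(kaddr);
	return zero;
}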

> In Patch 2, split_huge_page() only unmaps zero-filled subpages, but
> should the THP shrinker verify the utilization before it splits the
> page?

Agreed, we should add a check that the THP is still in the lowest
utilization bucket before the shrinker splits it. The utilization could
have changed since the scan, and with the check we no longer need to
worry about workloads where THPs move between utilization buckets.
Something like the sketch below. Thanks!
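
Roughly along these lines, where thp_utilization_bucket() and
THP_UTIL_BUCKET_LOW stand in for whatever helper and constant the
scanner ends up exposing (both are placeholders, not existing APIs);
split_huge_page() is the real splitting entry point:

static int shrink_thp(struct page *page)
{
	/*
	 * Re-check the bucket right before splitting; the scan result
	 * may be up to one scan period (~70s) stale.
	 */
	if (thp_utilization_bucket(page) != THP_UTIL_BUCKET_LOW)
		return -EBUSY;

	return split_huge_page(page);
}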