Message-ID: <DD679B3A-BDF7-4EBD-AAC2-A663057AC8E3@fb.com>
Date: Thu, 11 Aug 2022 00:00:52 +0000
From: "Alex Zhu (Kernel)" <alexlzhu@...com>
To: Yu Zhao <yuzhao@...gle.com>
CC: Yang Shi <shy828301@...il.com>, Rik van Riel <riel@...com>,
Kernel Team <Kernel-team@...com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"willy@...radead.org" <willy@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
Ning Zhang <ningzhang@...ux.alibaba.com>,
Miaohe Lin <linmiaohe@...wei.com>
Subject: Re: [PATCH v3] mm: add thp_utilization metrics to
/proc/thp_utilization
> Which series are you talking about? I listed two series and they are
> very different on the code level.
>
I was referring to the second series: https://lore.kernel.org/all/1635422215-99394-1-git-send-email-ningzhang@linux.alibaba.com/.
This patchset adds THP shrinking as part of shrink_lruvec in mm/vmscan.c. We create a new shrinker that shrinks THPs based on the results
of the scanning implemented in this thp_utilization patch. We also do not have any of the additional knobs for controlling THP reclaim that the patchset above has. Those seem unnecessary in the initial patch, as shrinking THPs that are almost entirely zero pages should only improve performance.
I believe the resulting implementation we have is simpler and easier to understand than the above patchset. By identifying and freeing underutilized THPs we hope to eventually deprecate the madvise setting entirely and have THP always enabled.
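To make that concrete, here is a rough sketch of the kind of per-THP scan the thp_utilization patch performs; the function name and details below are illustrative only, not the exact code from the patch:

/*
 * Illustrative sketch: count the 4K subpages of a THP that contain
 * any nonzero data. Subpages that are entirely zero are candidates
 * for being freed when the shrinker later splits the THP.
 */
static int thp_number_utilized_pages(struct page *thp)
{
	int utilized = 0, i;

	for (i = 0; i < HPAGE_PMD_NR; i++) {
		/* Map each subpage and check whether it is all zeroes. */
		void *kaddr = kmap_local_page(&thp[i]);

		if (memchr_inv(kaddr, 0, PAGE_SIZE))
			utilized++;

		kunmap_local(kaddr);
	}

	return utilized;
}

A THP whose utilized count falls below a threshold would then be queued for the shrinker to split, freeing the zero-filled subpages in the process.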
> The 2nd patch from the first series does exactly this.
>
>> but it’s worth discussing whether to free zero pages immediately or to add to lruvec to free eventually.
>
> And that patch can be omitted if the third link (a single patch, not a
> series) is used, which makes the workflow "add to lruvec to free
> eventually".
>
>> I believe the split_huge_page() changes could be valuable as a patch by itself though. Will send that out shortly.
Referring to this patch: https://lore.kernel.org/r/20210731063938.1391602-1-yuzhao@google.com/.
We do indeed do something similar to patches 1 and 3. We may be able to make use of this instead; I'll take a closer look.