Message-ID: <Zr-XVn1ExJ7_LSLS@casper.infradead.org>
Date: Fri, 16 Aug 2024 19:15:50 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Nanyong Sun <sunnanyong@...wei.com>
Cc: hughd@...gle.com, akpm@...ux-foundation.org, david@...hat.com,
ryan.roberts@....com, baohua@...nel.org,
baolin.wang@...ux.alibaba.com, ioworker0@...il.com,
peterx@...hat.com, ziy@...dia.com, wangkefeng.wang@...wei.com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] mm: control mthp per process/cgroup
On Fri, Aug 16, 2024 at 05:13:27PM +0800, Nanyong Sun wrote:
> Currently the large folio control interfaces are system wide and tend
> to default to on: file systems use large folios by default if
> supported, and mTHP tends to be enabled by default at boot [1].
> With large folios enabled, some workloads see a performance benefit,
> but others may not, and side effects can appear: memory usage may
> increase, and direct reclaim may run more frequently because of the
> larger order allocations, which in turn increases CPU usage. We
> observed this in a production environment running nginx: the
> pgscan_direct count increased significantly compared to before,
> reaching up to 3000 events per second, and disabling file large
> folios fixed it.
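
[ For reference, a rough sketch of how the system-wide knobs and the
  reclaim counter mentioned above can be inspected, assuming the
  standard mTHP sysfs layout; the 64kB size is only an example and the
  file (page cache) large folio behaviour has no equivalent runtime
  knob here: ]

  # Show the per-size mTHP controls (system wide today; this RFC
  # proposes per-process/cgroup control on top of them):
  cat /sys/kernel/mm/transparent_hugepage/hugepages-*/enabled

  # Disable one mTHP size, e.g. 64kB:
  echo never > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled

  # Watch the direct reclaim scan counter mentioned in the report:
  grep pgscan_direct /proc/vmstat
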
Can you share any details of your nginx workload that shows a regression?
The heuristics for allocating large folios are completely untuned, so
having data for a workload which performs better with small folios is
very valuable.