Message-ID: <20240613153924.GA3168233@cmpxchg.org>
Date: Thu, 13 Jun 2024 11:39:24 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Yu Zhao <yuzhao@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Mel Gorman <mgorman@...hsingularity.net>, Zi Yan <ziy@...dia.com>,
"Huang, Ying" <ying.huang@...el.com>,
David Hildenbrand <david@...hat.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Kalesh Singh <kaleshsingh@...gle.com>,
Chun-Tse Shao <ctshao@...gle.com>
Subject: Re: [PATCH V4 00/10] mm: page_alloc: freelist migratetype hygiene
On Wed, Jun 12, 2024 at 12:52:20PM -0600, Yu Zhao wrote:
> On Mon, Jun 10, 2024 at 9:28 AM Johannes Weiner <hannes@...xchg.org> wrote:
> >
> > On Tue, Jun 04, 2024 at 10:53:55PM -0600, Yu Zhao wrote:
> > > On Mon, May 13, 2024 at 1:04 PM Johannes Weiner <hannes@...xchg.org> wrote:
> > > > On Mon, May 13, 2024 at 12:10:04PM -0600, Yu Zhao wrote:
> > > > > On Mon, May 13, 2024 at 10:03 AM Johannes Weiner <hannes@...xchg.org> wrote:
> > > > > > On Fri, May 10, 2024 at 11:14:43PM -0600, Yu Zhao wrote:
> > > > > > > This series significantly regresses Android and ChromeOS under memory
> > > > > > > pressure. THPs are virtually nonexistent on client devices, and IIRC,
> > > > > > > it was mentioned in the early discussions that potential regressions
> > > > > > > for such a case are somewhat expected?
> > > > > >
> > > > > > This is not expected for the 10 patches here. You might be referring
> > > > > > to the discussion around the huge page allocator series, which had
> > > > > > fallback restrictions and many changes to reclaim and compaction.
> > > > >
> > > > > Right, now I remember.
> > > > >
> > > > > > > On Android (ARMv8.2), app launch time regressed by about 7%; On
> > > > > > > ChromeOS (Intel ADL), tab switch time regressed by about 8%. Also PSI
> > > > > > > (full and some) on both platforms increased by over 20%. I could post
> > > > > > > the details of the benchmarks and the metrics they measure, but I
> > > > > > > doubt they would mean much to you. I did ask our test teams to save
> > > > > > > extra kernel logs that might be more helpful, and I could forward them
> > > > > > > to you.
> > > > > >
> > > > > > If the issue persists with the latest patches in -mm, a kernel config
> > > > > > and snapshots of /proc/vmstat, /proc/pagetypeinfo, /proc/zoneinfo
> > > > > > before/during/after the problematic behavior would be very helpful.
> > > > >
> > > > > Assuming all the fixes were included, do you want the logs from 6.8?
> > > > > We have them available now.
> > > >
> > > > Yes, that would be helpful.
> > > >
> > > > If you have them, it would also be quite useful to have the vmstat
> > > > before-after-test delta from a good kernel, for baseline comparison.
> > >
> > > Sorry for taking this long -- I wanted to see if the regression is
> > > still reproducible on v6.9.
> > >
> > > Apparently we got similar results on v6.9 with the following
> > > patches cherry-picked cleanly from v6.10-rc1:
> > >
> > > 1 mm: page_alloc: remove pcppage migratetype caching
> > > 2 mm: page_alloc: optimize free_unref_folios()
> > > 3 mm: page_alloc: fix up block types when merging compatible blocks
> > > 4 mm: page_alloc: move free pages when converting block during isolation
> > > 5 mm: page_alloc: fix move_freepages_block() range error
> > > 6 mm: page_alloc: fix freelist movement during block conversion
> > > 7 mm: page_alloc: close migratetype race between freeing and stealing
> > > 8 mm: page_alloc: set migratetype inside move_freepages()
> > > 9 mm: page_isolation: prepare for hygienic freelists
> > > 10 mm: page_alloc: consolidate free page accounting
> > > 11 mm: page_alloc: change move_freepages() to __move_freepages_block()
> > > 12 mm: page_alloc: batch vmstat updates in expand()
> > >
> > > Unfortunately I just realized that that automated benchmark didn't
> > > collect the kernel stats before it starts (since it always starts on a
> > > freshly booted device). While this is being fixed, I'm attaching the
> > > kernel stats collected after the benchmark finished. I grabbed 10 runs
> > > for each (baseline/patched), and if you need more, please let me know.
> > > (And we should have the stats before the benchmark soon.)
> >
> > Thanks for grabbing these, and sorry about the delay, I was traveling
> > last week.
> >
> > You mentioned "THPs are virtually nonexistent". But the workload
> > doesn't seem to allocate anon THPs at all.
>
> Sorry for not being clear there: you are correct.
>
> I meant that client devices rarely use 2MB THPs or __GFP_COMP. (They
> simply can't due to both internal and external fragmentation, but we
> are trying!)
Ah, understood. So this is nominally a non-THP workload, and we're
suspecting a simple 4k allocation issue in low memory conditions.
Thanks for clarifying.
However, I don't think 4k alone would explain the pressure increase
just yet. PSI is triggered by reclaim and compaction, but with this
series, type fallbacks are still allowed to the full extent before
entering any such remediation. The series merely fixes type safety and
eliminates avoidable/accidental mixing.
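To make that ordering concrete, here is a toy userspace model of the
point above (not the kernel code; the names, the fallback order and
the numbers are made up): the preferred type is tried first, then
every other type is scanned as a fallback, and only when all of that
fails would the allocator enter the reclaim/compaction slow path that
PSI accounts as a stall.

#include <stdbool.h>
#include <stdio.h>

enum mt { MT_UNMOVABLE, MT_MOVABLE, MT_RECLAIMABLE, NR_MT };

/* free blocks per migratetype (made-up numbers) */
static int freelist[NR_MT] = { 0, 4, 1 };

static bool rmqueue_toy(enum mt preferred)
{
	/* 1. Preferred type first. */
	if (freelist[preferred]) {
		freelist[preferred]--;
		return true;
	}
	/* 2. Full fallback scan over the other types -- still no stall. */
	for (int type = 0; type < NR_MT; type++) {
		if (type != preferred && freelist[type]) {
			freelist[type]--;
			return true;
		}
	}
	/* 3. Only here would reclaim/compaction (i.e. PSI stalls) kick in. */
	return false;
}

int main(void)
{
	for (int i = 0; i < 7; i++)
		printf("alloc %d: %s\n", i,
		       rmqueue_toy(MT_UNMOVABLE) ? "fast path" : "slow path");
	return 0;
}
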
So I'm thinking something else must still be going on. Either THP
(however limited its use in this workload), or the userspace feedback
mechanism you mention below...
> > For file THP, the patched
> > kernel's median for allocation success is 90% of baseline, but the
> > inter-run min/max deviation from the median is 85%/108% in baseline
> > and 85%/112% in patched, so this is quite noisy. Was
> > that initial comment regarding a different workload?
>
> No, in both cases we tried (Android and ChromeOS), we were hoping the
> series could help with mTHP (64KB and 32KB). But we hit the
> regressions with their baseline (4KB). Again, 2MB THPs, if they are
> used, are reserved (allocated and mlocked to hold text/code sections
> after a reboot). So they shouldn't matter, and I highly doubt the
> regressions are because of them.
Ok.
> > This other data point has me stumped. Comparing medians, there is a
> > 1.5% reduction in anon refaults and a 4.8% increase in file
> > refaults. And indeed, there is more file and less anon being scanned.
> > I think this could explain the PSI delta, since AFAIK you have zram or
> > zswap, and anon decompression loads are cheaper than filesystem IO.
> >
> > The above patches don't do anything that directly influences the
> > anon-file reclaim balance. However, if file THPs fall back to 4k file
> > pages more, that *might* be able to explain a shift in reclaim
> > balance, if some hot subpages in those THPs were protecting colder
> > subpages from being reclaimed and refaulting.
> >
> > In that case, the root cause would still be a simple THP success rate
> > regression. To confirm this theory, could you run the baseline and the
> > patched sets both with THP disabled entirely?
>
> Will try this. And is bisecting within this series possible?
Yes. I built and put each commit incrementally through my test
machinery before sending them out. I can't vouch for all
configurations, of course, but I'd expect it to work.
> > Can you elaborate more on what the workload is doing exactly?
>
> These are simple benchmarks that measure the system and foreground
> app/tab performance under memory pressure, e.g., [1]. They open a
> bunch of apps/tabs (respectively on Android/ChromeOS) and switch
> between them. At a given time, one of them is foreground and the rest
> are background, obviously. When an app/tab has been in the background
> for a while, userspace may call madvise(MADV_PAGEOUT) to reclaim (most
> of) its LRU pages, leaving unmovable kernel memory there. This
> strategy allows client systems to cache more apps/tabs in the
> background and reduce their startup/switch time. But it's also a major
> source of fragmentation (I'm sure you get why, so I won't go into
> details here). Userspace also tries to make a better decision
> between reclaim/compact/kill based on fragmentation, but it's not
> easy.
Thanks for the detailed explanation.
That last bit is interesting: how does it determine "fragmentation"?
The series might well affect this metric.
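(For concreteness, the mechanism you describe boils down to a
madvise(MADV_PAGEOUT) call on the background app's ranges. A minimal,
purely illustrative sketch below -- the mapping, address and size are
made up, and it obviously leaves out the interesting part, i.e. the
policy deciding between reclaim, compaction and kill.)

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT 21		/* Linux 5.4+; define if libc headers lack it */
#endif

int main(void)
{
	size_t len = 64 << 20;	/* stand-in for a background app's heap */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0xaa, len);	/* fault the pages in */

	/* Reclaim (most of) the LRU pages backing the range. */
	if (madvise(buf, len, MADV_PAGEOUT))
		perror("madvise(MADV_PAGEOUT)");

	munmap(buf, len);
	return 0;
}
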
> [1] https://chromium.googlesource.com/chromiumos/platform/tast-tests/+/refs/heads/main/src/go.chromium.org/tast-tests/cros/local/bundles/cros/platform/memory_pressure.go
>
> > What are
> > the parameters of the test machine (CPUs, memory size)? It'd be
> > helpful if I could reproduce this locally as well.
>
> The data I shared previously is from an Intel i7-1255U + 4GB Chromebook.
>
> More data attached -- it contains vmstat, zoneinfo and pagetypeinfo
> files collected before the benchmark (after fresh reboots) and after
> the benchmark.
Thanks, I'll take a look.
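In case it helps, one quick-and-dirty way to turn two /proc/vmstat
snapshots into the before/after delta I asked about earlier
(illustrative only; an awk/python equivalent works just as well, and
it assumes both snapshots come from the same kernel so the counters
line up):

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	char name1[128], name2[128];
	long long val1, val2;
	FILE *before, *after;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <vmstat-before> <vmstat-after>\n",
			argv[0]);
		return 1;
	}
	before = fopen(argv[1], "r");
	after = fopen(argv[2], "r");
	if (!before || !after) {
		perror("fopen");
		return 1;
	}
	/* Walk both snapshots in lockstep and print per-counter deltas. */
	while (fscanf(before, "%127s %lld", name1, &val1) == 2 &&
	       fscanf(after, "%127s %lld", name2, &val2) == 2) {
		if (strcmp(name1, name2)) {
			fprintf(stderr, "counter mismatch: %s vs %s\n",
				name1, name2);
			return 1;
		}
		printf("%-40s %15lld %15lld %+15lld\n",
		       name1, val1, val2, val2 - val1);
	}
	fclose(before);
	fclose(after);
	return 0;
}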