Message-ID: <Y1lvJBnVx1Fv5WHz@cmpxchg.org>
Date: Wed, 26 Oct 2022 13:32:20 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Yang Shi <shy828301@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Eric Bergen <ebergen@...a.com>
Subject: Re: [PATCH] mm: vmscan: split khugepaged stats from direct reclaim stats
On Tue, Oct 25, 2022 at 02:53:01PM -0700, Yang Shi wrote:
> On Tue, Oct 25, 2022 at 1:54 PM Johannes Weiner <hannes@...xchg.org> wrote:
> >
> > On Tue, Oct 25, 2022 at 12:40:15PM -0700, Yang Shi wrote:
> > > On Tue, Oct 25, 2022 at 10:05 AM Johannes Weiner <hannes@...xchg.org> wrote:
> > > >
> > > > Direct reclaim stats are useful for identifying a potential source for
> > > > application latency, as well as spotting issues with kswapd. However,
> > > > khugepaged currently distorts the picture: as a kernel thread it
> > > > doesn't impose allocation latencies on userspace, and it explicitly
> > > > opts out of kswapd reclaim. Its activity showing up in the direct
> > > > reclaim stats is misleading. Counting it as kswapd reclaim could also
> > > > cause confusion when trying to understand actual kswapd behavior.
> > > >
> > > > Break out khugepaged from the direct reclaim counters into new
> > > > pgsteal_khugepaged, pgdemote_khugepaged, pgscan_khugepaged counters.
> > > >
> > > > Test with a huge executable (CONFIG_READ_ONLY_THP_FOR_FS):
> > > >
> > > > pgsteal_kswapd 1342185
> > > > pgsteal_direct 0
> > > > pgsteal_khugepaged 3623
> > > > pgscan_kswapd 1345025
> > > > pgscan_direct 0
> > > > pgscan_khugepaged 3623
> > >
> > > There are other kernel threads and workqueue items that may allocate
> > > memory and then trigger memory reclaim, so there may be similar
> > > problems for them, and someone may try to add a new stat. So how
> > > about we make the stats more general, for example, call them
> > > "pg{steal|scan}_kthread"?
> >
> > I'm not convinced that's a good idea.
> >
> > Can you generally say that userspace isn't indirectly waiting for one
> > of those allocating threads? With khugepaged, we know.
>
> AFAIK, ksm may do slab allocation with __GFP_DIRECT_RECLAIM.
Right, but ksm also uses __GFP_KSWAPD_RECLAIM. So while userspace
isn't directly waiting for ksm, when ksm enters direct reclaim it's
because kswapd failed. This is of interest to kernel developers.
Userspace will likely see direct reclaim in that scenario as well, so
the ksm direct reclaim counts aren't liable to confuse users.
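
That's just how the allocator slowpath is structured: kswapd is woken
first, and direct reclaim is entered only as a fallback. Heavily
abridged from __alloc_pages_slowpath() in mm/page_alloc.c:

	if (alloc_flags & ALLOC_KSWAPD)
		wake_all_kswapds(order, gfp_mask, ac);

	/* ... watermark retries, compaction ... */

	if (!can_direct_reclaim)  /* !(gfp_mask & __GFP_DIRECT_RECLAIM) */
		goto nopage;

	page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags,
					    ac, &did_some_progress);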
Khugepaged on the other hand will *always* reclaim directly, even if
there is no memory pressure or kswapd failure. The direct reclaim
counts there are misleading to both developers and users.
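
That falls straight out of the gfp mask khugepaged allocates with
(abridged from the gfp headers; khugepaged uses GFP_TRANSHUGE when
defrag is enabled):

	#define __GFP_RECLAIM	(___GFP_DIRECT_RECLAIM | ___GFP_KSWAPD_RECLAIM)

	/* Mask out both reclaim bits, then add back only the direct
	 * bit: these allocations never wake kswapd, they always
	 * reclaim on their own behalf. */
	#define GFP_TRANSHUGE_LIGHT	((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
					  __GFP_NOMEMALLOC | __GFP_NOWARN) & \
					 ~__GFP_RECLAIM)
	#define GFP_TRANSHUGE		(GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)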
What it really should be is pgscan_nokswapd_nouserprocesswaiting, but
that just seems kind of long ;-)
I'm also not sure anybody but khugepaged is doing direct reclaim
without kswapd reclaim. It seems unlikely we'll get more of those.
> Some device mapper drivers may do heavy lifting in a workqueue, for
> example dm-crypt, particularly for writes.
Userspace will wait for those through dirty throttling. We'd want to
know about kswapd failures in that case - again, without them being
muddied by khugepaged.
> > And those other allocations are usually __GFP_KSWAPD_RECLAIM, so if
> > they do direct reclaim, we'd probably want to know that kswapd is
> > failing to keep up (doubly so if userspace is waiting). In a shared
> > kthread counter, khugepaged would again muddy the waters.
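
FWIW, the patch boils down to picking the stat bucket by reclaimer. A
minimal sketch of the vmscan side, assuming a current_is_khugepaged()
helper and the new PGSTEAL_KHUGEPAGED event:

	/* mm/vmscan.c, sketch: attribute reclaim to the right counter */
	if (current_is_kswapd())
		item = PGSTEAL_KSWAPD;
	else if (current_is_khugepaged())
		item = PGSTEAL_KHUGEPAGED;
	else
		item = PGSTEAL_DIRECT;
	__count_vm_events(item, nr_reclaimed);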