Message-ID: <Y9Fv9YnNn7bHvLkN@google.com>
Date: Wed, 25 Jan 2023 10:07:49 -0800
From: Minchan Kim <minchan@...nel.org>
To: Michal Hocko <mhocko@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] mm/madvise: add vmstat statistics for
madvise_[cold|pageout]
On Wed, Jan 25, 2023 at 06:07:00PM +0100, Michal Hocko wrote:
> On Wed 25-01-23 08:36:02, Minchan Kim wrote:
> > On Wed, Jan 25, 2023 at 09:04:16AM +0100, Michal Hocko wrote:
> > > On Tue 24-01-23 16:54:57, Minchan Kim wrote:
> > > > The madvise LRU manipulation APIs need to scan address ranges to find
> > > > present pages in the page table and provide advice hints for them.
> > > >
> > > > Like the pg[scan/steal] counts in vmstat, madvise_pg[scanned/hinted]
> > > > show the proactive reclaim efficiency, so this patch adds those
> > > > two statistics to vmstat:
> > > >
> > > > madvise_pgscanned, madvise_pghinted
> > > >
> > > > Since proactive reclaim using process_madvise(2) as a userland
> > > > memory policy is popular (e.g., Android ActivityManagerService),
> > > > those stats are helpful for knowing how efficiently the policy
> > > > works.
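
(For context, a minimal userspace sketch of that proactive reclaim flow,
not the actual ActivityManagerService code: a daemon pages out one address
range of a target task via process_madvise(2). The pidfd, address and
length are placeholders for whatever the policy picks, and the snippet
assumes headers that define MADV_PAGEOUT.)

#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef SYS_process_madvise
#define SYS_process_madvise 440		/* 440 on most architectures */
#endif

/* Ask the kernel to reclaim one range of the target task. */
static int reclaim_range(int pidfd, void *addr, size_t len)
{
	struct iovec iov = { .iov_base = addr, .iov_len = len };
	/* glibc may lack a wrapper, so use the raw syscall. */
	ssize_t ret = syscall(SYS_process_madvise, pidfd, &iov, 1,
			      MADV_PAGEOUT, 0);

	if (ret < 0) {
		perror("process_madvise");
		return -1;
	}
	printf("advised %zd bytes\n", ret);
	return 0;
}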
> > >
> > > The usecase description is still too vague. What are those values useful
> > > for? Is there anything actionable based on those numbers? How do you
> > > deal with multiple parties using madvise resp. process_madvise so that
> > > their stats are combined?
> >
> > The metric helps with monitoring system MM health across the fleet and
> > with experimental tuning of different policies in the centralized
> > userland memory daemon.
>
> That is just too vague for me to imagine anything more specific than: we
> have numbers and we can show them in a report. What does it actually
> mean that madvise_pgscanned is high? Or that pghinted / pgscanned is
> low (that you tend to manually reclaim sparse mappings)?
If that's low, it means the userspace daemon's current tuning/policy is
inefficient or too aggressive, since it is working on address spaces of
processes which don't have enough memory for the hint to work on (e.g.,
shared addresses, cold address ranges or some special address ranges like
VM_PFNMAP). So sometimes we can detect a regression and find the culprit,
or get a chance to look into better ideas to improve.
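
A rough sketch of that efficiency check (the field names follow the patch
above; the thresholds a daemon would act on depend on the policy): read
the two counters out of /proc/vmstat before and after a hinting pass and
diff them.

#include <stdio.h>
#include <string.h>

/* Fetch one named counter from /proc/vmstat; returns 0 on success. */
static int read_vmstat(const char *field, unsigned long long *val)
{
	char line[256];
	size_t len = strlen(field);
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, field, len) && line[len] == ' ') {
			sscanf(line + len, "%llu", val);
			fclose(f);
			return 0;
		}
	}
	fclose(f);
	return -1;
}

/*
 * Sample "madvise_pgscanned" and "madvise_pghinted" before and after a
 * reclaim pass; a low hinted/scanned ratio on the deltas is the signal
 * described above.
 */
static double hint_efficiency(unsigned long long scanned_delta,
			      unsigned long long hinted_delta)
{
	return scanned_delta ? (double)hinted_delta / scanned_delta : 0.0;
}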
>
> > That's a really good fit for vmstat along with other MM metrics.
> >
> > >
> > > In the previous version I have also pointed out that this might be
> > > easily achieved by tracepoints. Your counterargument was convenience
> > > in large-scale monitoring, without going into much detail. Presumably
> > > this is because your fleet-based monitoring already collects
> > > /proc/vmstat while tracepoint-based monitoring would require additional
> > > changes. This alone is a rather weak argument to be honest because
> > > deploying tracepoint monitoring is quite trivial and can be done
> > > outside of the said memory reclaim agent.
> >
> > The convenience matters but that's not my argument.
> >
> > I think using tracepoints for a system metric makes no sense, even
> > though a tracepoint could be extended by using bpf or a histogram
> > trigger to get accumulated counters for a system metric.
>
> System-wide metric data collection by ftrace is a common use case. I
> really do not follow your argument here. There are certainly cases where
> ftrace is a suboptimal solution - e.g. when the cumulative data couldn't
> have been collected early on for one reason or another (e.g. system
> uptime is already high when you decide to start collecting). But you
> have stated there is data collection happening, so what prevents
> collecting this along with everything else?
>
> > The tracepoint is the next step if we want a further breakdown
> > once something strange happens. That's why we have separate levels of
> > metrics to narrow a problem down rather than implementing every metric
> > with tracepoints. Please look at the vmstat fields. Almost every field
> > would raise the same question you asked: "how do you break it down if
> > multiple processes were involved in contributing to the metric?"
>
> Yes, we tended to be much more willing to add counters. Partly because
> runtime debugging capabilities in the past were not as good as what we
> have these days.
>
> > I am fine if you suggest adding a tracepoint as well as the vmstat
> > fields for a further breakdown, but relying only on tracepoints and
> > friends for a system-global metric doesn't make sense.
>
> We have to agree to disagree here. I am not going to nack this but I
> disagree with this patch because the justification is just too vague and
> also those numbers cannot really be attributed to anybody performing
> madvise to actually evaluate that activity.
> --
> Michal Hocko
> SUSE Labs