Message-ID: <Y9FacrcUIaLZq4DL@google.com>
Date:   Wed, 25 Jan 2023 08:36:02 -0800
From:   Minchan Kim <minchan@...nel.org>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Matthew Wilcox <willy@...radead.org>,
        linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] mm/madvise: add vmstat statistics for
 madvise_[cold|pageout]

On Wed, Jan 25, 2023 at 09:04:16AM +0100, Michal Hocko wrote:
> On Tue 24-01-23 16:54:57, Minchan Kim wrote:
> > The madvise LRU manipulation APIs need to scan address ranges to find
> > present pages in the page tables and then apply the advice hints to
> > them.
> > 
> > Like the pg[scan/steal] counts in vmstat, madvise_pg[scanned/hinted]
> > show the proactive reclaim efficiency, so this patch adds those two
> > statistics to vmstat.
> > 
> > 	madvise_pgscanned, madvise_pghinted
> > 
> > Since proactive reclaim driven by process_madvise(2) as a userland
> > memory policy is popular (e.g., Android ActivityManagerService),
> > these stats are helpful for knowing how efficiently that policy
> > works.
> 
> The usecase description is still too vague. What are those values useful
> for? Is there anything actionable based on those numbers? How do you
> deal with multiple parties using madvise or process_madvise, given that
> their stats are combined?

The metrics help with monitoring system MM health across a fleet and with
experimental tuning of different policies in the centralized userland
memory daemon. That makes them a really good fit for vmstat, alongside the
other MM metrics there.
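For example, a fleet agent that already scrapes /proc/vmstat could turn
the two counters into a single efficiency signal. A minimal userspace
sketch, assuming the patch is applied so the madvise_pgscanned /
madvise_pghinted fields exist:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char name[64];
	unsigned long long val, scanned = 0, hinted = 0;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 1;
	/* Each /proc/vmstat line is "<name> <value>". */
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "madvise_pgscanned"))
			scanned = val;
		else if (!strcmp(name, "madvise_pghinted"))
			hinted = val;
	}
	fclose(f);
	if (scanned)
		printf("madvise hint efficiency: %.1f%%\n",
		       100.0 * hinted / scanned);
	return 0;
}

Sampling that ratio over time, or diffing it across an experiment, is
exactly the kind of fleet-level signal described above.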

> 
> In the previous version I also pointed out that this could easily be
> achieved with tracepoints. Your counterargument was convenience in
> large-scale monitoring, without going into much detail. Presumably
> this is because your fleet-based monitoring already collects
> /proc/vmstat, while tracepoint-based monitoring would require additional
> changes. This alone is a rather weak argument to be honest, because
> deploying tracepoint monitoring is quite trivial and can be done
> outside of the said memory reclaim agent.

The convenience matters, but that's not my argument.

I think using tracepoints for a system-wide metric makes no sense, even
though a tracepoint could be extended with BPF or a histogram trigger to
accumulate counters for such a metric.

Tracepoints are the next step when we want a further breakdown once
something strange happens. That's why we have metrics at separate levels
to narrow a problem down, rather than implementing every metric with
tracepoints. Please look at the vmstat fields: almost every field would
face the same question you asked, "how do you break it down if multiple
processes were involved in contributing to the metric?"

I am fine if you suggest adding a tracepoint in addition to the vmstat
fields for further breakdown, but relying only on tracepoints and friends
for a system-global metric doesn't make sense.
