Message-ID: <20200331151103.GB2089@cmpxchg.org>
Date:   Tue, 31 Mar 2020 11:11:03 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Yafang Shao <laoar.shao@...il.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Michal Hocko <mhocko@...nel.org>, Jens Axboe <axboe@...nel.dk>,
        mgorman@...e.de, Steven Rostedt <rostedt@...dmis.org>,
        mingo@...hat.com, Linux MM <linux-mm@...ck.org>,
        linux-block@...r.kernel.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/2] psi: enhance psi with the help of ebpf

On Fri, Mar 27, 2020 at 09:17:59AM +0800, Yafang Shao wrote:
> On Thu, Mar 26, 2020 at 10:31 PM Johannes Weiner <hannes@...xchg.org> wrote:
> >
> > On Thu, Mar 26, 2020 at 07:12:05AM -0400, Yafang Shao wrote:
> > > PSI gives us a powerful way to analyze memory pressure issues, but
> > > we can make it even more powerful with the help of tracepoints,
> > > kprobes, eBPF, etc. Especially with eBPF we can flexibly get more
> > > details of the memory pressure.
> > >
> > > In order to achieve this goal, a new parameter is added to
> > > psi_memstall_{enter, leave} which indicates the specific type of a
> > > memstall. There are ten memstall types in total:
> > >         MEMSTALL_KSWAPD
> > >         MEMSTALL_RECLAIM_DIRECT
> > >         MEMSTALL_RECLAIM_MEMCG
> > >         MEMSTALL_RECLAIM_HIGH
> > >         MEMSTALL_KCOMPACTD
> > >         MEMSTALL_COMPACT
> > >         MEMSTALL_WORKINGSET_REFAULT
> > >         MEMSTALL_WORKINGSET_THRASHING
> > >         MEMSTALL_MEMDELAY
> > >         MEMSTALL_SWAPIO
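
[ For reference, the interface change described above would presumably
  look something like the sketch below; the type names come from the
  list in the cover letter, but the signatures are my assumption, not
  taken from the actual patches. ]

enum memstall_type {
	MEMSTALL_KSWAPD,
	MEMSTALL_RECLAIM_DIRECT,
	MEMSTALL_RECLAIM_MEMCG,
	MEMSTALL_RECLAIM_HIGH,
	MEMSTALL_KCOMPACTD,
	MEMSTALL_COMPACT,
	MEMSTALL_WORKINGSET_REFAULT,
	MEMSTALL_WORKINGSET_THRASHING,
	MEMSTALL_MEMDELAY,
	MEMSTALL_SWAPIO,
	NR_MEMSTALL_TYPES,
};

/* currently: void psi_memstall_enter(unsigned long *flags);
 * with this series, callers would also pass the stall type: */
void psi_memstall_enter(unsigned long *flags, enum memstall_type type);
void psi_memstall_leave(unsigned long *flags, enum memstall_type type);
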
> >
> > What does this provide over the events tracked in /proc/vmstat?
> >
> 
> /proc/vmstat only tells us which events occurred, but it can't tell
> us how long these events take. Sometimes we really want to know how
> long an event takes, and PSI can provide us that data.
> For example, a while ago when I was doing performance tuning for a
> database service, I observed that the latency spikes were correlated
> with the workingset_refault counter in /proc/vmstat, and at that time
> I really wanted to know the distribution of latencies caused by
> workingset refaults, but there was no easy way to get it. Now with
> the newly added MEMSTALL_WORKINGSET_REFAULT, I can get the latencies
> caused by workingset refaults.
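
[ A minimal sketch of how such a distribution could be collected with
  eBPF once the type argument exists: kprobes on psi_memstall_enter()
  and psi_memstall_leave(), bucketing the enter-to-leave time into a
  per-type log2 histogram. The two-argument signatures are assumed
  from the cover letter, not taken from the patches. ]

// SPDX-License-Identifier: GPL-2.0
/* memstall_lat.bpf.c: bucket the time a task spends between
 * psi_memstall_enter() and psi_memstall_leave(), per memstall type. */
#include <linux/bpf.h>
#include <linux/ptrace.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

#define NR_MEMSTALL_TYPES	10
#define NR_SLOTS		36	/* log2(ns) buckets, up to ~64s */

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 10240);
	__type(key, __u32);		/* tid */
	__type(value, __u64);		/* enter timestamp, ns */
} start SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, NR_MEMSTALL_TYPES * NR_SLOTS);
	__type(key, __u32);		/* type * NR_SLOTS + slot */
	__type(value, __u64);		/* event count */
} hist SEC(".maps");

SEC("kprobe/psi_memstall_enter")
int BPF_KPROBE(memstall_enter, unsigned long *flags, int type)
{
	__u32 tid = (__u32)bpf_get_current_pid_tgid();
	__u64 ts = bpf_ktime_get_ns();

	bpf_map_update_elem(&start, &tid, &ts, BPF_ANY);
	return 0;
}

SEC("kprobe/psi_memstall_leave")
int BPF_KPROBE(memstall_leave, unsigned long *flags, int type)
{
	__u32 tid = (__u32)bpf_get_current_pid_tgid();
	__u32 slot = 0, idx;
	__u64 *tsp, *cnt, delta;

	tsp = bpf_map_lookup_elem(&start, &tid);
	if (!tsp)
		return 0;
	delta = bpf_ktime_get_ns() - *tsp;
	bpf_map_delete_elem(&start, &tid);

	/* log2 bucket of the stall duration */
	while (slot < NR_SLOTS - 1 && (delta >> (slot + 1)))
		slot++;

	if (type < 0 || type >= NR_MEMSTALL_TYPES)
		return 0;
	idx = type * NR_SLOTS + slot;
	cnt = bpf_map_lookup_elem(&hist, &idx);
	if (cnt)
		__sync_fetch_and_add(cnt, 1);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

[ Userspace would load this with libbpf, let it run through a latency
  spike, and dump the hist map to read off the per-type distribution. ]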

Okay, but how do you use that information in practice?

> > Can you elaborate a bit on how you are using this information? It's
> > not quite clear to me from the example in patch #2.
> >
> 
> From the traced data in patch #2, we can find that the high latencies
> of user tasks are always memstall type 7, which is
> MEMSTALL_WORKINGSET_THRASHING, and then we should look into the
> details of the workingset of the user tasks and think about how to
> improve it - for example, by reducing the workingset.

That's an analysis we run frequently as well: we see high pressure,
and then correlate it with the events.

High rate of refaults? The workingset is too big.

High rate of compaction work? Somebody is asking for higher order
pages under load; check THP events next.

etc.
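
In its simplest form that correlation is just sampling pressure next
to the event deltas; roughly something like the sketch below, which
needs nothing beyond the existing /proc files (using
workingset_refault as the example event):

/* pressure_vs_vmstat.c: print memory pressure (PSI) alongside the
 * per-interval delta of a vmstat event, to eyeball which events move
 * together with the stalls. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* return the current value of a /proc/vmstat counter */
static unsigned long long vmstat_read(const char *name)
{
	unsigned long long val = 0;
	char line[256];
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		size_t len = strlen(name);

		if (!strncmp(line, name, len) && line[len] == ' ') {
			val = strtoull(line + len + 1, NULL, 10);
			break;
		}
	}
	fclose(f);
	return val;
}

/* return the "some avg10" memory pressure percentage */
static double psi_some_avg10(void)
{
	double avg = 0.0;
	char line[256];
	FILE *f = fopen("/proc/pressure/memory", "r");

	if (!f)
		return 0.0;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "some avg10=%lf", &avg) == 1)
			break;
	fclose(f);
	return avg;
}

int main(void)
{
	unsigned long long prev = vmstat_read("workingset_refault");

	for (;;) {
		sleep(10);
		unsigned long long cur = vmstat_read("workingset_refault");

		printf("memory some avg10=%6.2f  refaults/10s=%llu\n",
		       psi_some_avg10(), cur - prev);
		prev = cur;
	}
	return 0;
}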

That correlation works fairly reliably. I'm curious what the extra
per-event latency breakdown would add and where it would be helpful.

I'm not really opposed to your patches if it does add something, I
just don't see the use case right now.
