Message-ID: <20190809173108.GA21089@cmpxchg.org>
Date: Fri, 9 Aug 2019 13:31:08 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Michal Hocko <mhocko@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
"Artem S. Tashkinov" <aros@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>
Subject: Re: Let's talk about the elephant in the room - the Linux kernel's
inability to gracefully handle low memory pressure

On Fri, Aug 09, 2019 at 04:56:28PM +0200, Vlastimil Babka wrote:
> On 8/8/19 7:27 PM, Johannes Weiner wrote:
> > On Thu, Aug 08, 2019 at 04:47:18PM +0200, Vlastimil Babka wrote:
> >> On 8/7/19 10:51 PM, Johannes Weiner wrote:
> >>> From 9efda85451062dea4ea287a886e515efefeb1545 Mon Sep 17 00:00:00 2001
> >>> From: Johannes Weiner <hannes@...xchg.org>
> >>> Date: Mon, 5 Aug 2019 13:15:16 -0400
> >>> Subject: [PATCH] psi: trigger the OOM killer on severe thrashing
> >>
> >> Thanks a lot, perhaps we are finally going to eat the elephant ;)
> >>
> >> I've tested this by booting with mem=8G and activating browser tabs
> >> for as long as I could. Initially the system started thrashing and
> >> didn't recover for minutes. Then I realized sysrq+f was disabled...
> >> I fixed that up after the next reboot, tried lower thresholds, and
> >> also started monitoring /proc/pressure/memory. After minutes of not
> >> being able to move the cursor, both avg10 and avg60 showed only
> >> around 15 for both some and full. Lowering thrashing_oom_level to 10
> >> (with thrashing_oom_period of 5) finally made the thrashing OOM kick
> >> in, and the system recovered by itself in a reasonable time.
> >
> > It sounds like there is a missing annotation. The time has to be going
> > somewhere, after all. One *known* missing vector I fixed recently is
> > stalls in submit_bio() itself when refaulting, but that fix isn't
> > merged yet. I'm attaching the patch below; can you please test it?
>
> It made a difference, but not enough, it seems. Before the patch I could
> observe "io:full avg10" around 75% and "memory:full avg10" around 20%;
> after the patch, "memory:full avg10" went to around 45%, while io stayed
> the same. (BTW, should the refaults be discounted from the io counters,
> so that the sum stays <= 100%?)
>
> As a result I could change the knobs so that the system recovered
> successfully, with thrashing detected as 10 seconds of 40% memory
> pressure.
>
> Perhaps, being low on memory, we can't detect refaults so well due to
> the limited number of shadow entries, or there was genuine non-refault
> I/O in the mix. The detection would then probably have to look at both
> I/O and memory?

Thanks for testing it. It's possible that there is legitimate
non-refault IO, and of course there can be interaction between that
and the refault IO. But to be sure that all genuine refaults are
captured, can you record the workingset_* values from /proc/vmstat
before and after the thrash storm? In particular, workingset_nodereclaim
would indicate whether we are losing refault information.
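
A minimal sketch of capturing those counters around the test (a
hypothetical userspace helper; simply grepping "workingset" out of
/proc/vmstat by hand works just as well):

	/* Hypothetical helper: dump the workingset_* counters from
	 * /proc/vmstat so the before/after delta of the thrash storm
	 * can be compared. */
	#include <stdio.h>
	#include <string.h>

	static void dump_workingset(const char *tag)
	{
		FILE *f = fopen("/proc/vmstat", "r");
		char line[128];

		if (!f) {
			perror("/proc/vmstat");
			return;
		}
		printf("--- %s ---\n", tag);
		while (fgets(line, sizeof(line), f)) {
			/* workingset_refault, workingset_activate,
			 * workingset_restore, workingset_nodereclaim */
			if (!strncmp(line, "workingset_",
				     strlen("workingset_")))
				fputs(line, stdout);
		}
		fclose(f);
	}

	int main(void)
	{
		dump_workingset("before");
		getchar();	/* run the workload, then press enter */
		dump_workingset("after");
		return 0;
	}
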
[ The different resource pressures are not meant to be summed
up. Refaults truly are both IO events and memory events: they
indicate memory contention, but they also contribute to the IO
load. So both metrics need to include them, or it would skew the
picture when you only look at one of them. ]
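
For context, the submit_bio() annotation mentioned above amounts to
wrapping the IO submission for a refaulting (workingset) page in a
psi_memstall_enter()/leave() pair, so the stall is charged to memory
pressure as well as to io pressure. A simplified sketch of the idea,
not the actual patch (the helper name and the workingset flag
parameter are made up for illustration):

	/* Simplified sketch, not the actual patch: charge the time
	 * spent submitting IO for a refaulting (workingset) page as a
	 * memory stall, so it shows up in /proc/pressure/memory as
	 * well as /proc/pressure/io. */
	#include <linux/bio.h>
	#include <linux/psi.h>

	static void submit_refault_read(struct bio *bio, bool workingset)
	{
		unsigned long pflags;

		if (workingset)
			psi_memstall_enter(&pflags);

		submit_bio(bio);

		if (workingset)
			psi_memstall_leave(&pflags);
	}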