Message-ID: <YgvRPJh44VkX1+JV@cmpxchg.org>
Date: Tue, 15 Feb 2022 11:13:48 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kernel-team@...com, CGEL <cgel.zte@...il.com>,
Minchan Kim <minchan@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Yu Zhao <yuzhao@...gle.com>
Subject: Re: [PATCH] mm: page_io: fix psi memory pressure error on cold
swapins
On Mon, Feb 14, 2022 at 02:48:05PM -0800, Andrew Morton wrote:
> On Mon, 14 Feb 2022 16:49:21 -0500 Johannes Weiner <hannes@...xchg.org> wrote:
>
> > Once upon a time, all swapins counted toward memory pressure[1]. Then
> > Joonsoo introduced workingset detection for anonymous pages and we
> > gained the ability to distinguish hot from cold swapins[2][3]. But we
> > failed to update swap_readpage() accordingly, and now we account
> > partial memory pressure in the swapin path of cold memory.
> >
> > Not for all situations - which adds more inconsistency: paths using
> > the conventional submit_bio() and lock_page() route will not see much
> > pressure - unless storage itself is heavily congested and the bio
> > submissions stall. ZRAM and ZSWAP do most of the work directly from
> > swap_readpage() and will see all swapins reflected as pressure.
> >
> > Restore consistency by making all swapin stall accounting conditional
> > on the page actually being part of the workingset.
>
> Does this have any known runtime effects? If not, can we
> hazard a guess?
hm, how about this paragraph between "not for all situations" and
"restore consistency":
IOW, a workload doing cold swapins could see little to no pressure
reported with on-disk swap, but potentially high pressure with a zram
or zswap backend. That confuses any psi-based health monitoring, load
shedding, proactive reclaim, or userspace OOM killing schemes that
might be in place for the workload.
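For reference, the change itself is essentially making the existing
psi_memstall_enter()/psi_memstall_leave() pair in swap_readpage()
conditional on PageWorkingset() - roughly along these lines (a sketch
of the idea, not the exact diff):

	int swap_readpage(struct page *page, bool synchronous)
	{
		bool workingset = PageWorkingset(page);
		unsigned long pflags;
		...
		/*
		 * Cold swapins are not thrashing; only count refaults
		 * of previously evicted workingset pages as memory
		 * stalls.
		 */
		if (workingset)
			psi_memstall_enter(&pflags);
		...
		if (workingset)
			psi_memstall_leave(&pflags);
		...
	}

That way the zram/zswap path and the regular submit_bio() path agree:
only workingset refaults show up as pressure.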