Message-Id: <20231004091853.9be5aa562f65e0305e06b14c@linux-foundation.org>
Date: Wed, 4 Oct 2023 09:18:53 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Liu Shixin <liushixin2@...wei.com>
Cc: Yosry Ahmed <yosryahmed@...gle.com>,
Huang Ying <ying.huang@...el.com>,
Sachin Sant <sachinp@...ux.ibm.com>,
Michal Hocko <mhocko@...e.com>,
Johannes Weiner <hannes@...xchg.org>,
Kefeng Wang <wangkefeng.wang@...wei.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v6] mm: vmscan: try to reclaim swapcache pages if no
swap space
On Fri, 15 Sep 2023 16:34:17 +0800 Liu Shixin <liushixin2@...wei.com> wrote:
> When the swap space of the swap devices is exhausted, only file pages can
> be reclaimed. But there may still be some swapcache pages on the anon lru
> list. This can lead to a premature out-of-memory.
>
> The problem can be reproduced with the following steps:
>
> First, set up a 9MB disk swap space, then create a cgroup with a 10MB
> memory limit, then run a program that allocates about 15MB of memory.
>
> The problem occurs only occasionally and may take about 100 attempts to
> reproduce [1].
>
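
For reference, a reproducer along these lines might look roughly like the
sketch below; the swap file path, cgroup v2 layout and allocation size are
illustrative assumptions, not details taken from the report.

/*
 * Hypothetical reproducer sketch, not taken from the report.  Assumed
 * setup done outside this program, e.g.:
 *
 *   dd if=/dev/zero of=/swapfile bs=1M count=9
 *   mkswap /swapfile && swapon /swapfile
 *   mkdir /sys/fs/cgroup/test
 *   echo 10M > /sys/fs/cgroup/test/memory.max
 *   echo $$ > /sys/fs/cgroup/test/cgroup.procs
 */
#include <stdlib.h>
#include <string.h>

#define ALLOC_SIZE (15UL * 1024 * 1024)	/* ~15MB, above the 10MB limit */

int main(void)
{
	char *buf = malloc(ALLOC_SIZE);

	if (!buf)
		return 1;
	/* Touch every byte so the memory is actually charged to the cgroup. */
	memset(buf, 0x5a, ALLOC_SIZE);
	free(buf);
	return 0;
}
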
> Fix it by checking the number of swapcache pages in can_reclaim_anon_pages().
> If the number is not zero, return true and set swapcache_only to 1. When
> scanning the anon lru list in swapcache_only mode, non-swapcache pages are
> skipped during isolation in order to improve reclaim efficiency.
>
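
A standalone sketch of the decision being described might look like the
following; this is a simplified model with my own naming, not the actual
patch or the real can_reclaim_anon_pages() signature.

/*
 * Standalone model of the described decision, not the actual kernel code.
 * The names and the way swapcache_only is carried are illustrative only.
 */
#include <stdbool.h>

struct scan_control_model {
	bool swapcache_only;
};

bool can_reclaim_anon_pages_model(unsigned long nr_free_swap,
				  unsigned long nr_swapcache,
				  struct scan_control_model *sc)
{
	/* Free swap available: anon pages can still be swapped out. */
	if (nr_free_swap > 0)
		return true;

	/* No free swap, but swapcache pages can be reclaimed directly. */
	if (nr_swapcache > 0) {
		if (sc)
			sc->swapcache_only = true;
		return true;
	}

	/* Nothing on the anon lru is reclaimable. */
	return false;
}
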
> However, in swapcache_only mode, the scan count is still increased when
> non-swapcache pages are scanned. This is because there are a large number
> of non-swapcache pages and only a few swapcache pages in swapcache_only
> mode; if the skipped non-swapcache pages were not counted as scanned, the
> scan of pages in isolate_lru_folios() could run for a very long time and
> eventually lead to a hung task, just as Sachin reported [2].
>
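
To make that failure mode concrete, here is a toy model of an isolation
loop (not the real isolate_lru_folios()): if skipped non-swapcache entries
were not charged against nr_to_scan, the loop would be bounded only by the
LRU length rather than by the scan budget.

/*
 * Toy model of the isolation loop, not the real isolate_lru_folios().
 * 'count_skipped' toggles whether skipped non-swapcache entries are
 * charged against the scan budget, which is the point at issue above.
 */
#include <stdbool.h>
#include <stddef.h>

struct folio_model {
	bool in_swapcache;
	struct folio_model *next;
};

unsigned long isolate_model(struct folio_model *lru, unsigned long nr_to_scan,
			    bool swapcache_only, bool count_skipped)
{
	unsigned long scanned = 0, isolated = 0;
	struct folio_model *f;

	for (f = lru; f && scanned < nr_to_scan; f = f->next) {
		if (swapcache_only && !f->in_swapcache) {
			/*
			 * If skipped pages are not counted as scanned, a long
			 * run of non-swapcache pages keeps this loop going far
			 * beyond nr_to_scan, bounded only by the list length.
			 */
			if (count_skipped)
				scanned++;
			continue;
		}
		scanned++;
		isolated++;
	}
	return isolated;
}
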
> Besides, since there are enough rounds of memory reclaim before OOM, there
> is no need to isolate too many swapcache pages in a single pass.
>
mhocko earlier suspected this might impact global reclaim. Have you
looked into that further?