Message-ID: <owsicyzohs54ozkwolv55mf65j4e647azcipy7qi3ydvuha6fu@uevqtyfwsifm>
Date: Tue, 6 Jan 2026 09:45:30 -0800
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Jiayuan Chen <jiayuan.chen@...ux.dev>
Cc: linux-mm@...ck.org, Jiayuan Chen <jiayuan.chen@...pee.com>,
Andrew Morton <akpm@...ux-foundation.org>, Johannes Weiner <hannes@...xchg.org>,
David Hildenbrand <david@...nel.org>, Michal Hocko <mhocko@...nel.org>,
Qi Zheng <zhengqi.arch@...edance.com>, Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Axel Rasmussen <axelrasmussen@...gle.com>, Yuanchu Xie <yuanchu@...gle.com>, Wei Xu <weixugc@...gle.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] mm/vmscan: mitigate spurious kswapd_failures reset
from direct reclaim
On Tue, Jan 06, 2026 at 05:25:42AM +0000, Jiayuan Chen wrote:
> On January 5, 2026 at 12:51, "Shakeel Butt" <shakeel.butt@...ux.dev> wrote:
>
> > I think the simplest solution for you is to enable swap to have more
> > reclaimable memory on the system. Hopefully you will then have the working
> > set of the workloads fully in memory on each node.
> >
> > You could also try to change the application/workload to be more NUMA aware
> > and balance their anon memory across the given nodes, but I think that would
> > be much more involved and error prone.
>
> Enabling swap is one solution, but for historical reasons we haven't
> enabled it - our disk performance is relatively poor. zram is also an
> option, but migrating to it would take significant time.
Besides zram, you can try zswap with memory.zswap.writeback=0 to avoid
hitting disk for swap. I would suggest trying swap (zswap, or swap on zram)
on a couple of impacted machines to see whether the issue you are seeing is
resolved.
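
For reference, a rough sketch of both setups (untested here; the device
names, sizes, swap file path and cgroup path below are just placeholders,
adjust them for your machines):

  # zswap in front of an existing swap device. Note zswap still needs a
  # backing swap device configured; with memory.zswap.writeback set to 0
  # the cgroup's pages just won't be written back to that device.
  echo 1 > /sys/module/zswap/parameters/enabled
  swapon /swapfile
  echo 0 > /sys/fs/cgroup/<your-cgroup>/memory.zswap.writeback

  # swap on zram (no disk involved at all)
  modprobe zram
  echo 8G > /sys/block/zram0/disksize
  mkswap /dev/zram0
  swapon -p 100 /dev/zram0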