Message-ID: <ZtB5Vn69L27oodEq@tiehlicka>
Date: Thu, 29 Aug 2024 15:36:22 +0200
From: Michal Hocko <mhocko@...e.com>
To: Zhongkun He <hezhongkun.hzk@...edance.com>
Cc: akpm@...ux-foundation.org, hannes@...xchg.org, roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev, muchun.song@...ux.dev,
lizefan.x@...edance.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org
Subject: Re: [External] Re: [RFC PATCH 0/2] Add disable_unmap_file arg to
memory.reclaim
On Thu 29-08-24 21:15:50, Zhongkun He wrote:
> On Thu, Aug 29, 2024 at 7:51 PM Michal Hocko <mhocko@...e.com> wrote:
[...]
> > Is this some artificial workload or something real world?
> >
>
> This is an artificial workload, used to show the details of this case
> more easily. But we have encountered this problem on our servers.
This is always good to mention in the changelog. If you can observe this
in real workloads, it is good to get numbers from those, because
artificial workloads tend to overshoot the underlying problem and we can
potentially miss its practical contributors.
Seeing this, my main question is whether we should focus on the
swappiness behavior rather than adding a very strange and very targeted
reclaim mode. After all, we already have protection for mapped memory
and executables in place. So in the end this is more about the balance
between the anon and file LRUs.
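
For illustration, on kernels that support the swappiness= argument to
memory.reclaim, the proactive reclaim interface can already bias that
balance (a minimal sketch; /sys/fs/cgroup/mygroup is a hypothetical
cgroup path):

    # Bias proactive reclaim toward the anon LRU (swappiness ranges
    # from 0 to 200; 200 maximally favors anon), sparing hot file pages:
    echo "512M swappiness=200" > /sys/fs/cgroup/mygroup/memory.reclaim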
> If the performance of the disk is poor, as with an HDD, the situation
> becomes even worse.
Doesn't that impact swap-in/out as well? Or do you happen to have
faster storage for the swap?
> The task's delay becomes more serious because reading data is slower.
> Hot pages thrash repeatedly between memory and disk.
Don't the refault stats and the IO-cost aspect of reclaim already deal
with this situation when balancing the LRUs? Why doesn't that work in
your case? Have you tried to investigate?
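
For reference, the inputs to that balancing are visible per cgroup
(again a sketch, with the same hypothetical cgroup path):

    # Refault counters that feed the LRU cost/balancing model:
    grep -E 'workingset_refault_(anon|file)' \
        /sys/fs/cgroup/mygroup/memory.stat
    # PSI shows whether the refaults turn into real stalls:
    cat /sys/fs/cgroup/mygroup/memory.pressure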
--
Michal Hocko
SUSE Labs