Message-ID: <YmBUkNMVDQobgK4M@cmpxchg.org>
Date: Wed, 20 Apr 2022 14:44:32 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Nico Pache <npache@...hat.com>
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, aquini@...hat.com,
shakeelb@...gle.com, llong@...hat.com, mhocko@...e.com,
hakavlad@...ox.lv
Subject: Re: [PATCH v3] vm_swappiness=0 should still try to avoid swapping anon memory

Hi Nico,

On Wed, Apr 20, 2022 at 01:34:58PM -0400, Nico Pache wrote:
> On 4/20/22 10:01, Johannes Weiner wrote:
> >> My swappiness=0 solution was a minimal approach to regaining the 'avoid swapping
> >> ANON' behavior that was previously there, but as Shakeel pointed out, there may
> >> be something larger at play.
> >
> > So with my patch and swappiness=0 you get excessive swapping on v1 but
> > not on v2? And the patch to avoid DEACTIVATE_ANON fixes it?
>
> Correct. I haven't tested the DEACTIVATE_ANON patch since the last time I
> was working on this, but it did cure it then. I can build a new kernel with
> it and verify again.
>
> The larger issue is that our workload has regressed in performance.
>
> With V2 and swappiness=10 we are still seeing some swap, but very little
> tearing down of THPs over time. With swappiness=0 it did swap some, but we
> are not losing GBs of THPs (with your patch, swappiness=0 shows no swap or
> THP issues on V2).
>
> With V1 and swappiness=(0|10) (with and without your patch), it swaps a ton and
> ultimately leads to a significant amount of THP splitting. So the longer the
> system/workload runs, the less likely we are to get THPs backing the guest and
> the performance gain from THPs is lost.

I hate to ask, but is it possible this is a configuration issue?

One significant difference between V1 and V2 is that V1 has per-cgroup
swappiness, which is inherited when the cgroup is created. So if you
set sysctl vm.swappiness=0 after cgroups have been created, it will
not update them. V2 cgroups do use vm.swappiness:

static inline int mem_cgroup_swappiness(struct mem_cgroup *memcg)
{
        /* Cgroup2 doesn't have per-cgroup swappiness */
        if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
                return vm_swappiness;

        /* root ? */
        if (mem_cgroup_disabled() || mem_cgroup_is_root(memcg))
                return vm_swappiness;

        return memcg->swappiness;
}
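
For context, the V1 inheritance happens at cgroup creation. Here's a trimmed
sketch of mem_cgroup_css_alloc() in mm/memcontrol.c (allocation and error
handling elided; the exact code varies by kernel version):

static struct cgroup_subsys_state * __ref
mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
{
        struct mem_cgroup *parent = mem_cgroup_from_css(parent_css);
        struct mem_cgroup *memcg;

        /* ... allocation and setup elided ... */

        if (parent) {
                /* one-time copy: later sysctl writes don't update this */
                memcg->swappiness = mem_cgroup_swappiness(parent);
                /* ... */
        }
        /* ... */
}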
Is it possible the job cgroups on V1 have swappiness=60?
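
One quick way to check is to read memory.swappiness for the job cgroups.
A minimal sketch (the v1 mount point and the "job" cgroup name below are
placeholders for your setup):

/* Reads and prints a v1 cgroup's memory.swappiness value. */
#include <stdio.h>

int main(void)
{
        /* hypothetical path; adjust to your mount point and cgroup */
        const char *path = "/sys/fs/cgroup/memory/job/memory.swappiness";
        FILE *f = fopen(path, "r");
        int swappiness;

        if (!f) {
                perror("fopen");
                return 1;
        }
        if (fscanf(f, "%d", &swappiness) == 1)
                printf("%s = %d\n", path, swappiness);
        fclose(f);
        return 0;
}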
> So your patch does help return the old swappiness=0 behavior, but only for V2.

Thanks for verifying. I'll prepare a proper patch.