Message-ID: <001bf379-9026-fd7a-3fff-c1b2cea35348@redhat.com>
Date: Tue, 19 Apr 2022 15:37:51 -0400
From: Nico Pache <npache@...hat.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: linux-mm@...ck.org, akpm@...ux-foundation.org,
linux-kernel@...r.kernel.org, aquini@...hat.com,
shakeelb@...gle.com, llong@...hat.com, mhocko@...e.com,
hakavlad@...ox.lv
Subject: Re: [PATCH v3] vm_swappiness=0 should still try to avoid swapping
anon memory
Hi Johannes,
On 4/19/22 14:46, Johannes Weiner wrote:
> Hi Nico,
>
> On Tue, Apr 19, 2022 at 02:11:53PM -0400, Nico Pache wrote:
>> I think it is important to note that the issue we are seeing has greatly
>> improved since the initial posting. However, we have noticed that the issue is
>> still present (and significantly worse) when cgroup v1 is in use.
>>
>> We were initially testing with cgroup v1 and later found that the issue was not
>> as bad with cgroup v2 (but was still a noticeable issue). This is also resulting
>> in the splitting of THPs in the host kernel.
>
> When swappiness is 0, cgroup limit reclaim has a fixed SCAN_FILE
> branch, so it shouldn't ever look at anon. I'm assuming you're getting
> global reclaim mixed in. Indeed, I think we can try harder not to swap
> for global reclaim if the user asks for that.
We aren't actually utilizing the cgroup mechanism; however, switching between
the two has a noticeable effect on the global reclaim of the system. This is not
a writeback case either: the reproducer simply reads. So I think we can rule
out the v2 writeback controller being involved. My initial patch targeted
swappiness=0, but this also occurs when swappiness > 0.
>
> Can you try the below patch?
Of course, thanks for that :) I'll let you know how it goes!
Cheers,
-- Nico