Message-ID: <CAJD7tka-=ppaheVxn2-f6u0egBp8kOHYAK0bBudC62SKkZPk5w@mail.gmail.com>
Date: Thu, 15 Jun 2023 18:39:28 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Zhongkun He <hezhongkun.hzk@...edance.com>
Cc: Yu Zhao <yuzhao@...gle.com>, minchan@...nel.org,
senozhatsky@...omium.org, mhocko@...e.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Andrea Arcangeli <aarcange@...hat.com>,
David Hildenbrand <david@...hat.com>,
Fabian Deutsch <fdeutsch@...hat.com>
Subject: Re: [RFC PATCH 1/3] zram: charge the compressed RAM to the page's memcgroup
On Thu, Jun 15, 2023 at 1:57 AM Fabian Deutsch <fdeutsch@...hat.com> wrote:
>
>
> On Thu, Jun 15, 2023 at 6:59 AM Yu Zhao <yuzhao@...gle.com> wrote:
>>
>> On Wed, Jun 14, 2023 at 9:48 PM Zhongkun He
>> <hezhongkun.hzk@...edance.com> wrote:
>> >
>> > The compressed RAM is currently charged to the kernel, not to
>> > any memory cgroup, which does not satisfy our usage scenario.
>> > If the memory of a task is limited by a memcg, it will
>> > swap out memory to the zram swap device when memory
>> > is insufficient. In that case, the memory limit will have
>> > no effect.
>> >
>> > So, it makes sense to charge the compressed RAM to
>> > the page's memory cgroup.
>
>
> While looking at this in the past weeks, I believe that there are two distinct problems:
> 1. Direct zram usage by a process within a cg, i.e. a process writing to a zram device
> 2. Indirect zram usage by a process within a cg via swap (described above)
>
> Both of them probably require different solutions.
> In order to fix #1, the memory used by a zram device should be accounted towards a cgroup. IMHO this is something that should be fixed.
>
> Yu Zhao and Yosry are probably much more familiar with the solution to #2.
> WRT per-cgroup-swapfile: to me this is addressing #2, but I agree with Yu Zhao that there are probably better solutions to this.
Thanks Fabian for tagging me.
I am not familiar with #1, so I will speak to #2. Zhongkun, there are
a few parts that I do not understand -- hopefully you can help me out
here:
(1) If I understand correctly, this patch sets the active memcg to try
to charge any pages allocated for a zspage to the current memcg, yet
that zspage will contain multiple compressed object slots, not just
the ones used by this memcg. Aren't we overcharging the memcg?
Basically, the first memcg that happens to allocate the zspage pays
for all the objects in this zspage, even after it stops using the
zspage completely? See the sketch below for the pattern I am reading
out of the patch.
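Here is that pattern as a minimal sketch of my reading, not the
patch's actual code -- zram_charge_alloc() and the GFP flags are
hypothetical names I made up for illustration:

	#include <linux/memcontrol.h>
	#include <linux/sched/mm.h>
	#include <linux/zsmalloc.h>

	/*
	 * Hypothetical helper (not from the patch): a compress-side
	 * allocation charged to the memcg of the page being swapped out.
	 */
	static unsigned long zram_charge_alloc(struct zs_pool *pool,
					       size_t size,
					       struct mem_cgroup *memcg)
	{
		struct mem_cgroup *old_memcg;
		unsigned long handle;

		/* Route accounted allocations below to @memcg. */
		old_memcg = set_active_memcg(memcg);
		/*
		 * If zs_malloc() needs a new zspage here, @memcg is
		 * charged for the whole zspage, although most of its
		 * object slots may later hold other memcgs' data.
		 */
		handle = zs_malloc(pool, size,
				   GFP_KERNEL | __GFP_ACCOUNT);
		set_active_memcg(old_memcg);
		return handle;
	}

If that reading is right, the charge is per-zspage, not per-object,
which is where the overcharging comes from.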
(2) Patch 3 seems to be charging the compressed slots to the memcgs,
yet this patch is trying to charge the entire zspage. Aren't we double
charging the zspage? I am guessing this isn't happening because (as
Michal pointed out) we are not using __GFP_ACCOUNT here anyway, so
this patch may be a NOP, and the actual charging comes from patch 3
only.
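To illustrate the __GFP_ACCOUNT point with a trivial, made-up example:

	struct page *page;

	/* Charged to the active memcg (falling back to current's): */
	page = alloc_page(GFP_KERNEL | __GFP_ACCOUNT);

	/*
	 * Not charged at all: set_active_memcg() on its own does
	 * nothing for allocations that lack __GFP_ACCOUNT.
	 */
	page = alloc_page(GFP_KERNEL);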
(3) Zswap recently implemented per-memcg charging of compressed
objects in a much simpler way. If your main interest is #2 (which is
what I understand from the commit log), it seems like zswap might be
providing this already? Why can't you use zswap? Is it the fact that
zswap requires a backing swapfile?
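For context, zswap does its per-memcg charging at store time, charging
only the compressed object's size to the page's obj_cgroup. Roughly,
from memory (simplified; see mm/zswap.c for the real thing):

	struct obj_cgroup *objcg;

	objcg = get_obj_cgroup_from_page(page);
	if (objcg && !obj_cgroup_may_zswap(objcg))
		goto reject;	/* memory.zswap.max exceeded */

	/* ... compress the page into a zswap_entry ... */

	/* Charge exactly the compressed size to the page's memcg. */
	obj_cgroup_charge_zswap(objcg, entry->length);

Because the charge is per compressed object rather than per backing
pool page, memcgs sharing a pool page does not lead to the
overcharging described in (1).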
Thanks!
>
> Lastly, while this patchset will possibly not address the swap issue (#2) completely, does it satisfy the needs of #1?
>
> - fabian
>
>>
>> We used to do this a long time ago, but we had per-memcg swapfiles [1]
>> to prevent compressed pages from different memcgs from sharing the
>> same zspage.
>>
>> Does this patchset alone suffer from the same problem, i.e., memcgs
>> sharing zspages?
>>
>> [1] https://lwn.net/Articles/592923/
>>