Message-ID: <7d4e59b7-ddaf-45b9-909c-9ecb8ff5a34d@bytedance.com>
Date: Mon, 25 Dec 2023 22:32:44 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: Chris Li <chriscli@...gle.com>
Cc: Yosry Ahmed <yosryahmed@...gle.com>, Nhat Pham <nphamcs@...il.com>,
Seth Jennings <sjenning@...hat.com>, Vitaly Wool <vitaly.wool@...sulko.com>,
Dan Streetman <ddstreet@...e.org>, Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v3 6/6] mm/zswap: directly use percpu mutex and buffer in
load/store
On 2023/12/23 01:37, Chris Li wrote:
> Hi Chengming,
>
> The patch looks good to me.
>
> Acked-by: Chris Li <chrisl@...nel.org> (Google)
>
Thanks.
[...]
>>
>> I think there are two viewpoints here, and both work OK for me.
>>
>> The first is from ownership or lifetime: these percpu mutex and dstmem
>> are shared between all pools, so no single pool owns the mutex and
>> dstmem, nor does the percpu acomp_ctx in each pool.
>>
>> The second is from usage: these percpu mutex and dstmem are used by
>> the percpu acomp_ctx in each pool, so it's convenient to put pointers
>> to them in the percpu acomp_ctx.
>>
>> Actually I think it's simpler to let the percpu acomp_ctx have its own
>> mutex and dstmem, which are in fact the necessary parts when it uses
>> the acomp interfaces.
>
> Agree, that is why I prefer to keep the struct together. I am fine
> with what Yosry suggested and the anonymous struct; I just consider
> it not critically necessary.
>
Agree, I have no strong opinion about it, so I will just leave it as it is.
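
For reference, a minimal sketch of the layout I have in mind (field
names follow the current mm/zswap.c, but treat this as illustrative,
not the exact patch; the mutex could equally stay a pointer):

	/* One per-CPU context per pool; the mutex and dstmem now live
	 * with the ctx that uses them, instead of in separate global
	 * per-CPU variables shared by all pools. */
	struct crypto_acomp_ctx {
		struct crypto_acomp *acomp;
		struct acomp_req *req;
		struct crypto_wait wait;
		u8 *dstmem;		/* per-CPU decompress buffer */
		struct mutex mutex;	/* serializes users of this ctx */
	};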
>>
>> This way, we could delete the percpu mutex and dstmem along with their hotplug callbacks,
>
> That is the real value of this patch. Thanks for doing that.
>
>> so that they are no longer shared between all pools. Maybe we will have
>> many pools at the same time in the future, such as different compression
>> algorithms or different zpools for different memcg workloads. Who knows? :-)
>
> As long as zswap is not re-entrant, i.e. we never have a nested
> page fault that causes a zswap_load within another zswap_load, I
> think we are fine having more than one pool share the buffer. In
> fact, if we trigger a nested zswap_load, I expect the kernel to
> deadlock on the nested mutex acquire because the mutex is already
> taken. We would notice a kernel hang rather than silent memory corruption.
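
Right. For illustration, the load side today roughly does the
following (a simplified sketch, not the exact zswap_load() code), so
a nested load on the same CPU would block on the second mutex_lock()
and hang visibly instead of corrupting dstmem:

	struct crypto_acomp_ctx *acomp_ctx;

	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
	mutex_lock(acomp_ctx->mutex);	/* non-recursive: a nested
					 * zswap_load on this CPU would
					 * deadlock right here */
	/* ... decompress the entry into acomp_ctx->dstmem ... */
	mutex_unlock(acomp_ctx->mutex);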
>
>>
>> So how about this patch below? Just RFC.
>>
>> Subject: [PATCH] mm/zswap: make each acomp_ctx has its own mutex and dstmem
>
> Thank you for doing that; you can consider submitting it as a real
> patch instead of an RFC. I see real value in this patch removing
> some per-CPU fields.
Ok, will update.
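
For context, the global per-CPU state that goes away with this change
is roughly the following (names as I recall them from mm/zswap.c),
together with its CPU hotplug prepare/dead callbacks, since each
acomp_ctx then allocates and owns its own copies:

	static DEFINE_PER_CPU(u8 *, zswap_dstmem);
	static DEFINE_PER_CPU(struct mutex *, zswap_mutex);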
Thanks!