Message-ID: <be7fbc94-cd3b-41b3-ac20-5c46aad9aa84@konsulko.se>
Date: Tue, 16 Sep 2025 13:16:06 +0200
From: Vitaly Wool <vitaly.wool@...sulko.se>
To: Yosry Ahmed <yosry.ahmed@...ux.dev>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Vlastimil Babka <vbabka@...e.cz>, hannes@...xchg.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Christoph Hellwig <hch@...radead.org>
Subject: Re: [PATCH 0/3] mm: remove zpool
On 9/15/25 21:37, Yosry Ahmed wrote:
> On Sat, Sep 13, 2025 at 03:55:16PM +0200, Vitaly Wool wrote:
>>
>>
>>> On Sep 9, 2025, at 10:12 PM, Yosry Ahmed <yosry.ahmed@...ux.dev> wrote:
>>>
>>> On Mon, Sep 08, 2025 at 09:18:01PM +0900, Sergey Senozhatsky wrote:
>>>> On (25/09/06 14:25), Sergey Senozhatsky wrote:
>>>>> On (25/09/05 19:57), Yosry Ahmed wrote:
>>>>>> I think Android uses zram+zsmalloc with 16K pages. Perhaps Sergey could
>>>>>> confirm.
>>>>>
>>>>> I'm not working on Android directly.
>>>>>
>>>>> I can confirm that Android uses zram+zsmalloc. As for 16K pages, there
>>>>> was a way to toggle 16K pages on Android (via system settings); I don't
>>>>> know whether that is the default now.
>>>>
>>>> While I don't know what zsmalloc struggles Vitaly is referring to in
>>>> particular, off the top of my head, zsmalloc does memcpy()s for objects
>>>> that span multiple pages: it kmap()s both physical pages and memcpy()s
>>>> chunks of the object into a provided buffer. With 16K pages we can have
>>>> rather large compressed objects, so those memcpy()s are likely more
>>>> visible. Attacking this would be a good idea, I guess.
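>>>>
>>>> Roughly this pattern, as a simplified sketch (kmap_local_page() and
>>>> kunmap_local() are the real kernel interfaces; the helper itself and
>>>> its signature are illustrative, and it assumes the object spans
>>>> exactly two pages):
>>>>
>>>> #include <linux/highmem.h>
>>>> #include <linux/string.h>
>>>>
>>>> /*
>>>>  * Copy out an object that starts at @off in @first and spills over
>>>>  * into @second: map each page in turn and memcpy() the two halves
>>>>  * into a linear buffer.
>>>>  */
>>>> static void copy_obj_to_buf(struct page *first, struct page *second,
>>>> 			    unsigned int off, unsigned int size, void *buf)
>>>> {
>>>> 	unsigned int first_part = PAGE_SIZE - off;
>>>> 	void *src;
>>>>
>>>> 	src = kmap_local_page(first);
>>>> 	memcpy(buf, src + off, first_part);
>>>> 	kunmap_local(src);
>>>>
>>>> 	src = kmap_local_page(second);
>>>> 	memcpy(buf + first_part, src, size - first_part);
>>>> 	kunmap_local(src);
>>>> }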
>>>
>>> Yeah, I personally think attacking whatever problems zsmalloc has with
>>> 16K pages is the way to go.
>>
>> Well, there is a way out for 16+K pages, that being:
>> * restricting zsmalloc so that objects do not span two pages
>> * reworking size_class-based allocation to use uneven steps
>> * as a result of the above, organising a binary search for the right
>>   size class (see the sketch below)
>>
>> This will effectively turn zsmalloc into zblock, with some extra cruft
>> that makes it far less comprehensible.
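>>
>> For the binary search, a minimal sketch (the class sizes and helper
>> name are hypothetical; a real table would be tuned, and the caller
>> must ensure the object fits the largest class):
>>
>> #include <linux/kernel.h>	/* ARRAY_SIZE() */
>>
>> static const unsigned int class_sizes[] = {
>> 	32, 48, 80, 128, 224, 368, 608, 1008, 1664, 2736, 4096,
>> };
>>
>> /* Return the index of the smallest class that fits @size. */
>> static unsigned int size_to_class(unsigned int size)
>> {
>> 	unsigned int lo = 0, hi = ARRAY_SIZE(class_sizes) - 1;
>>
>> 	while (lo < hi) {
>> 		unsigned int mid = lo + (hi - lo) / 2;
>>
>> 		if (class_sizes[mid] < size)
>> 			lo = mid + 1;
>> 		else
>> 			hi = mid;
>> 	}
>> 	return lo;
>> }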
>
> I think the way to go would be exactly this: identifying problems with
> 16K pages on zsmalloc and addressing them one by one in a data-driven
> way.
>
> I don't believe there will be opposition to this, or even to adding more
> tunables / config options to alter zsmalloc's behavior based on the
> environment. If there's indeed extra cruft, we can either clean it up or
> hide it behind config options/tunables so that it's only enabled when
> needed.
>
>>
>> Another option would be to let zsmalloc do its job on 4K pages and use
>> zblock for bigger pages. But that is not possible at the moment because
>> the zpool API has been removed. That's why I NACK'ed the zpool removal,
>> at least until we have a replacement for it ready.
>
> I think having a separate allocator that's better for each page size is
> not a good option tbh.
I don't think anyone has been talking about a separate allocator for
each page size. The idea was that zsmalloc (as a well-tested allocator
that performs well on 4K pages) stays the default option for 4K pages,
and zblock becomes the default for other page sizes.
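
To illustrate, a hypothetical sketch of picking the default from the
zswap side, assuming the zpool API (zpool_create_pool() was its real
entry point) were still in place and zblock were registered as a zpool
driver:

#include <linux/sizes.h>
#include <linux/zpool.h>

/* Pick the backend by page size: zsmalloc on 4K, zblock otherwise. */
static struct zpool *pick_backend_pool(gfp_t gfp)
{
	const char *type = PAGE_SIZE == SZ_4K ? "zsmalloc" : "zblock";

	return zpool_create_pool(type, "zswap", gfp);
}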