Message-ID: <5f785814-0be0-407f-87a0-4bfb4041fa2d@gmail.com>
Date: Thu, 15 Aug 2024 11:50:37 +0800
From: Alex Shi <seakeel@...il.com>
To: Sergey Senozhatsky <senozhatsky@...omium.org>
Cc: Matthew Wilcox <willy@...radead.org>, Yosry Ahmed
 <yosryahmed@...gle.com>, Andrew Morton <akpm@...ux-foundation.org>,
 alexs@...nel.org, Vitaly Wool <vitaly.wool@...sulko.com>,
 Miaohe Lin <linmiaohe@...wei.com>, linux-kernel@...r.kernel.org,
 linux-mm@...ck.org, minchan@...nel.org, david@...hat.com,
 42.hyeyoo@...il.com, nphamcs@...il.com
Subject: Re: [PATCH v5 00/21] mm/zsmalloc: add zpdesc memory descriptor for
 zswap.zpool



On 8/15/24 11:13 AM, Sergey Senozhatsky wrote:
> On (24/08/09 10:32), Alex Shi wrote:
> [..]
>>>> and we "chain" zpdesc-s to form a zspage, and make each of them point to
>>>> a corresponding struct page (memdesc -> *page), then it'll resemble current
>>>> zsmalloc and should work for everyone? I also assume for zpdesc-s zsmalloc
>>>> will need to maintain a dedicated kmem_cache?
>>> Right, we could do that.  Each memdesc has to be a multiple of 16 bytes,
>>> so we'd be doing something like allocating 32 bytes for each page.
>>> Is there really 32 bytes of information that we want to store for
>>> each page?  Or could we store all of the information in (a somewhat
>>> larger) zspage?  Assuming we allocate 3 pages per zspage, if we allocate
>>> an extra 64 bytes in the zspage, we've saved 32 bytes per zspage.
>>
>> Thanks for the suggestions! Yes, it's a good direction we could try after this
>> patchset.
> 
> Alex, may I ask what exactly you will "try"?

Hi Sergey,

Thanks for the question. As a quick, amateur thought, the final result may look like the
following; please correct me if I am wrong.

1, there is a memdesc for each memory page.

2, we kmem_alloc a zpdesc struct specifically for our needs, with fields like zpdesc.next
   and zpdesc.zspage/first_obj_offset, which we currently use in zsmalloc (a rough sketch
   follows after this list).

3, there is a gap between memdesc and zpdesc, for fields like .flags, _refcount, .mops etc.;
   it is still unclear how to handle them well.
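
A very rough sketch of what I imagine for the 1st and 2nd steps; every name, field and
helper below is just my guess for illustration, not actual code from the patchset:

#include <linux/init.h>
#include <linux/slab.h>
#include <linux/mm_types.h>

struct zspage;

/*
 * Hypothetical slimmed-down zpdesc: allocated from a dedicated kmem_cache,
 * chained via ->next to form a zspage, and pointing back to its struct page
 * (memdesc -> *page).
 */
struct zpdesc {
        struct zpdesc *next;            /* next page in the zspage chain */
        struct page *page;              /* back pointer to the struct page */
        struct zspage *zspage;
        unsigned int first_obj_offset;
};

static struct kmem_cache *zpdesc_cachep;

static int __init zpdesc_cache_init(void)
{
        zpdesc_cachep = kmem_cache_create("zpdesc", sizeof(struct zpdesc),
                                          0, SLAB_PANIC, NULL);
        return 0;
}

static struct zpdesc *zpdesc_alloc(gfp_t gfp)
{
        return kmem_cache_zalloc(zpdesc_cachep, gfp);
}

static void zpdesc_free(struct zpdesc *zpdesc)
{
        kmem_cache_free(zpdesc_cachep, zpdesc);
}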

During the 2nd and 3rd steps, we may have a chance to move some members from zpdesc to
zspage, but that is also unclear.
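
For that direction, here is a sketch of what a somewhat larger zspage could look like,
following Matthew's idea of keeping the per-page information inside the zspage instead of
a per-page memdesc; again, the layout and names are only my assumptions:

/*
 * Hypothetical: fold the per-page fields into a small array in the zspage,
 * so no separate 32-byte descriptor is allocated for each page.
 */
#define ZS_MAX_PAGES_PER_ZSPAGE 4       /* assumed upper bound for the sketch */

struct zspage {
        unsigned int inuse;
        unsigned int freeobj;
        unsigned int nr_pages;
        struct {
                struct page *page;              /* memdesc -> *page */
                unsigned int first_obj_offset;
        } pages[ZS_MAX_PAGES_PER_ZSPAGE];
};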

Thanks
Alex 

