Message-ID: <87r0cw5l3w.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Mon, 17 Jun 2024 11:00:51 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Barry Song <21cnbao@...il.com>, akpm@...ux-foundation.org,
  chrisl@...nel.org
Cc: baohua@...nel.org,  kaleshsingh@...gle.com,  kasong@...cent.com,
  linux-kernel@...r.kernel.org,  linux-mm@...ck.org,  ryan.roberts@....com
Subject: Re: [PATCH v2 0/2] mm: swap: mTHP swap allocator base on swap
 cluster order


Barry Song <21cnbao@...il.com> writes:

> On Sat, Jun 15, 2024 at 2:59 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
>>
>> On Fri, 14 Jun 2024 19:51:11 -0700 Chris Li <chrisl@...nel.org> wrote:
>>
>> > > I'm having trouble understanding the overall impact of this on users.
>> > > We fail the mTHP swap allocation and fall back, but things continue to
>> > > operate OK?
>> >
>> > Continue to operate OK in the sense that the mTHP has to split into
>> > 4K pages before the swap out, i.e. the fallback.  Swap out and swap
>> > in continue to work, but as 4K pages rather than as the mTHP.  Due to
>> > the fallback, the mTHP-based zsmalloc compression with a 64K buffer
>> > will not happen.  That is the effect of the fallback.  But since mTHP
>> > swap out and swap in are relatively new, this is not really a
>> > regression.
>>
>> Sure, but it's pretty bad to merge a new feature only to have it become
>> ineffective after a few hours' use.
>>
>> > >
>> > > > There is some test number in the V1 thread of this series:
>> > > > https://lore.kernel.org/r/20240524-swap-allocator-v1-0-47861b423b26@kernel.org
>> > >
>> > > Well, please let's get the latest numbers into the latest patchset.
>> > > Along with a higher-level (and quantitative) description of the user impact.
>> >
>> > I will need Barry's help to collect the numbers. I don't have the
>> > setup to reproduce his test results.
>> > Maybe a follow-up commit message amendment with the test numbers when I get them?
>
> Although the issue may seem complex at a systemic level, even a small program can
> demonstrate the problem and highlight how Chris's patch has improved the
> situation.
>
> To demonstrate this, I designed a basic test program that allocates up to
> two memory blocks:
>
>  *   A memory block of up to 60MB, recommended for HUGEPAGE usage
>  *   A memory block of up to 1MB, recommended for NOHUGEPAGE usage
>
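
A minimal sketch of what such a test program could look like (illustrative
only, not the actual program; it assumes the two blocks are hinted with
madvise(MADV_HUGEPAGE)/madvise(MADV_NOHUGEPAGE) and that the process runs
inside a memory cgroup small enough to force swap-out to zRAM):

  /* illustrative sketch, not the actual test program */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>

  static void *alloc_region(size_t size, int advice)
  {
          void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          if (p == MAP_FAILED || madvise(p, size, advice)) {
                  perror("alloc_region");
                  exit(1);
          }
          return p;
  }

  int main(void)
  {
          size_t sz1 = 60UL << 20;        /* mem1: up to 60MB, large folios */
          size_t sz2 = 1UL << 20;         /* mem2: up to 1MB, small folios  */
          char *mem1 = alloc_region(sz1, MADV_HUGEPAGE);
          char *mem2 = alloc_region(sz2, MADV_NOHUGEPAGE); /* drop for case 1 */

          /* Touch both regions repeatedly so they keep being swapped out to
           * zRAM and faulted back in, exercising swap slot alloc and free. */
          for (int pass = 0; pass < 100; pass++) {
                  memset(mem1, pass, sz1);
                  memset(mem2, pass, sz2);
          }
          return 0;
  }

Running it once with the mem2 allocation dropped and once with it kept
corresponds to the two cases listed below.
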
> In the system configuration, I enabled 64KB mTHP and a 64MB zRAM swap device,
> providing more than enough space for both the 60MB and 1MB allocations even in
> the worst case.  This setup allows us to assess two effects:
>
> 1.  When we don't enable mem2 (small folios), we consistently allocate and free
>     swap slots aligned to 64KB.  Is there a risk of failing to obtain swap
>     slots even though the zRAM has sufficient free space?
> 2.  When we enable mem2 (small folios), the presence of small folios may lead
>     to cluster fragmentation, potentially hurting the swapout of large folios.
>
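
As a rough sketch of the setup described above (the sysfs paths, the
"always" policy and the stats file names are assumptions, not the exact
setup used; the per-size swpout counters only exist on kernels that provide
mTHP swap statistics):

  # enable 64KB mTHP (assumes a 4KB base page size)
  echo always > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled

  # set up a 64MB zRAM swap device
  modprobe zram
  echo 64M > /sys/block/zram0/disksize
  mkswap /dev/zram0
  swapon /dev/zram0

  # after a run, check whether 64KB swap-out fell back to 4KB pages
  grep . /sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/*swpout*

With mem2 disabled, a fallback counter that stays at zero would suggest the
64KB swap slot allocation is not failing; with mem2 enabled, a growing
counter would point at cluster fragmentation caused by the small folios.
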

IIUC, the test results are based on the not-yet-merged patchset [1] (mm:
support large folios swap-in)?

[1] https://lore.kernel.org/linux-mm/20240304081348.197341-1-21cnbao@gmail.com/

If so, do we see any visible effect without that patchset?  If not, should we
wait for patchset [1] (or something similar) to be merged first?

--
Best Regards,
Huang, Ying

