Date: Fri, 05 Jan 2024 17:56:08 -0500
From: Zi Yan <ziy@...dia.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
 "\"Huang, Ying\"" <ying.huang@...el.com>,
 Andrew Morton <akpm@...ux-foundation.org>,
 "\"Matthew Wilcox (Oracle)\"" <willy@...radead.org>,
 David Hildenbrand <david@...hat.com>,
 "\"Yin, Fengwei\"" <fengwei.yin@...el.com>, Yu Zhao <yuzhao@...gle.com>,
 Vlastimil Babka <vbabka@...e.cz>,
 "\"Kirill A . Shutemov\"" <kirill.shutemov@...ux.intel.com>,
 Johannes Weiner <hannes@...xchg.org>,
 Baolin Wang <baolin.wang@...ux.alibaba.com>,
 Kemeng Shi <shikemeng@...weicloud.com>,
 Mel Gorman <mgorman@...hsingularity.net>,
 Rohan Puri <rohan.puri15@...il.com>, Mcgrof Chamberlain <mcgrof@...nel.org>,
 Adam Manzanares <a.manzanares@...sung.com>,
 "\"Vishal Moola (Oracle)\"" <vishal.moola@...il.com>
Subject: Re: [PATCH v1 0/4] Enable >0 order folio memory compaction

On 3 Jan 2024, at 10:51, Zi Yan wrote:

> On 3 Jan 2024, at 4:12, Ryan Roberts wrote:
>
>> On 02/01/2024 20:50, Zi Yan wrote:
>>> On 21 Nov 2023, at 12:11, Ryan Roberts wrote:
>>>
>>>> On 21/11/2023 16:45, Zi Yan wrote:
>>>>> On 21 Nov 2023, at 10:46, Ryan Roberts wrote:
>>>>>
>>>>>>>
>>>>>>> vm-scalability results
>>>>>>> ===
>>>>>>>
>>>>>>> =========================================================================================
>>>>>>> compiler/kconfig/rootfs/runtime/tbox_group/test/testcase:
>>>>>>>   gcc-13/defconfig/debian/300s/qemu-vm/mmap-xread-seq-mt/vm-scalability
>>>>>>>
>>>>>>> commit:
>>>>>>>   6.6.0-rc4-mm-everything-2023-10-21-02-40+
>>>>>>>   6.6.0-rc4-split-folio-in-compaction+
>>>>>>>   6.6.0-rc4-folio-migration-in-compaction+
>>>>>>>   6.6.0-rc4-folio-migration-free-page-split+
>>>>>>>   6.6.0-rc4-folio-migration-free-page-split-sort-src+
>>>>>>>
>>>>>>> 6.6.0-rc4-mm-eve 6.6.0-rc4-split-folio-in-co 6.6.0-rc4-folio-migration-i 6.6.0-rc4-folio-migration-f 6.6.0-rc4-folio-migration-f
>>>>>>> ---------------- --------------------------- --------------------------- --------------------------- ---------------------------
>>>>>>>          %stddev     %change         %stddev     %change         %stddev     %change         %stddev     %change         %stddev
>>>>>>>              \          |                \          |                \          |                \          |                \
>>>>>>>   12896955            +2.7%   13249322            -4.0%   12385175 ±  5%      +1.1%   13033951            -0.4%   12845698        vm-scalability.throughput
>>>>>>
>>>>>> Hi Zi,
>>>>>>
>>>>>> Are you able to add any commentary to these results? I'm struggling to
>>>>>> interpret them: is a positive or negative change better (are they times or
>>>>>> rates)? What are the stddev values? The title suggests percent but the values
>>>>>> are huge - I'm trying to understand what the error bars look like - are the
>>>>>> swings real or noise?
>>>>>
>>>>> The metric is vm-scalability.throughput, so larger is better. Some %stddev
>>>>> values are not shown because they are too small. For 6.6.0-rc4-folio-migration-in-compaction+,
>>>>> %stddev is greater than %change, so the change might be noise.
>>>>
>>>> Ahh got it - thanks!
>>>>
>>>>>
>>>>> Also, I talked to DavidH about this in the last THP Cabal meeting. He suggested
>>>>> that vm-scalability tends to be noisy, like what I have here, and that I should
>>>>> run more iterations and on bare metal. I am currently rerunning the tests on
>>>>> bare metal and with more iterations on the existing VM, and will report the
>>>>> results later. Please note that the runs really take some time.
>>>>
>>>> Ahh ok, I'll wait for the bare metal numbers and will disregard these for now.
>>>> Thanks!
>>>
>>> It seems that the unexpectedly big mmap-pread-seq-mt perf drop came from a mistake I
>>> made in patch 1. After fixing that, mmap-pread-seq-mt perf only drops by 0.5%. The new
>>> results on top of 6.7.0-rc1-mm-everything-2023-11-15-00-17 are at the end of the email.
>>
>> Good news! I don't see the results for mmap-pread-seq-mt below - perhaps you
>> forgot to include them?
>
> The stats below only show significant changes, and the mmap-pread-seq-mt delta is less
> than 5%, so it is not shown.
>
>>
>>>
>>> I am preparing v2 and will send it out soon.
>>>
>>> =========================================================================================
>>> compiler/kconfig/rootfs/runtime/tbox_group/test/testcase:
>>>   gcc-13/defconfig/debian/300s/qemu-vm/mmap-xread-seq-mt/vm-scalability
>>>
>>> commit:
>>>   6.7.0-rc1-mm-everything-2023-11-15-00-17+
>>>   6.7.0-rc1-split-folio-in-compaction+
>>>   6.7.0-rc1-folio-migration-in-compaction+
>>>   6.7.0-rc1-folio-migration-free-page-split+
>>>   6.7.0-rc1-folio-migration-free-page-split-sort-src+
>>>
>>> 6.7.0-rc1-mm-eve 6.7.0-rc1-split-folio-in-co 6.7.0-rc1-folio-migration-i 6.7.0-rc1-folio-migration-f 6.7.0-rc1-folio-migration-f
>>> ---------------- --------------------------- --------------------------- --------------------------- ---------------------------
>>>          %stddev     %change         %stddev     %change         %stddev     %change         %stddev     %change         %stddev
>>>              \          |                \          |                \          |                \          |                \
>>>   13041962           +16.1%   15142976            +5.0%   13690666 ±  6%      +6.7%   13920441            +5.5%   13762582        vm-scalability.throughput
>>
>> I'm still not sure I'm interpreting this correctly; is %change always relative
>> to 6.7.0-rc1-mm-everything-2023-11-15-00-17 or is it relative to the previous
>> commit?
>
> The former, always relative to 6.7.0-rc1-mm-everything-2023-11-15-00-17.
>
>>
>> If the former, then it looks like splitting the folios is actually faster than
>> migrating them whole?
>
> Yes, I will look into it when I am preparing the next version.
>

The reason seems to be that compaction ends early when folios are migrated as a whole.
It happens when an order-0 folio is being migrated and there is no order-0 free page:
migrate_pages() returns -ENOMEM, which makes compact_zone() stop compaction (higher
order folios would be split instead). Enabling the free page split optimization should
fix this, but the perf numbers do not say so. Let me dig more.
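
To make the failure mode concrete, below is a toy userspace model of the
allocation step described above. It is only a sketch: free pages are reduced
to per-order counters, alloc_dst()/alloc_dst_with_split() are made-up
stand-ins for the kernel's compaction_alloc(), and the free page split
optimization is modeled as breaking a larger free block into its lower-order
buddies.

#include <stdio.h>

#define MAX_ORDER 4

/* Toy model: number of free blocks available at each order. */
static int free_pages[MAX_ORDER + 1];

/* Take a free page of exactly 'order'; -1 stands in for -ENOMEM. */
static int alloc_dst(int order)
{
	if (free_pages[order] > 0) {
		free_pages[order]--;
		return 0;
	}
	return -1;
}

/* Free page split: take the smallest larger free block and split it
 * down, leaving one buddy behind at each intermediate order. */
static int alloc_dst_with_split(int order)
{
	for (int o = order; o <= MAX_ORDER; o++) {
		if (free_pages[o] == 0)
			continue;
		free_pages[o]--;
		for (int b = o - 1; b >= order; b--)
			free_pages[b]++;	/* leftover buddy */
		return 0;
	}
	return -1;
}

int main(void)
{
	/* One order-2 free block collected by the free scanner, but no
	 * order-0 free pages. */
	free_pages[2] = 1;

	/* Without split support, migrating an order-0 folio fails; this
	 * corresponds to the path where migrate_pages() returns -ENOMEM
	 * and compact_zone() stops compaction early. */
	if (alloc_dst(0) < 0)
		printf("no order-0 free page: compaction would stop early\n");

	/* With the split optimization, the order-2 block is broken down
	 * and the order-0 migration can proceed. */
	if (alloc_dst_with_split(0) == 0)
		printf("order-2 block split: order-0 migration proceeds\n");

	return 0;
}

The open question above is that with this split fallback enabled, the
-ENOMEM early exit should no longer trigger, yet the perf numbers do not
reflect that.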


--
Best Regards,
Yan, Zi
