Message-ID: <7ED1378A-AC39-48A2-8A2A-E06C7858DCE1@nvidia.com>
Date: Tue, 21 Nov 2023 11:45:22 -0500
From: Zi Yan <ziy@...dia.com>
To: Ryan Roberts <ryan.roberts@....com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
"\"Huang, Ying\"" <ying.huang@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"\"Matthew Wilcox (Oracle)\"" <willy@...radead.org>,
David Hildenbrand <david@...hat.com>,
"\"Yin, Fengwei\"" <fengwei.yin@...el.com>,
Yu Zhao <yuzhao@...gle.com>, Vlastimil Babka <vbabka@...e.cz>,
"\"Kirill A . Shutemov\"" <kirill.shutemov@...ux.intel.com>,
Johannes Weiner <hannes@...xchg.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Kemeng Shi <shikemeng@...weicloud.com>,
Mel Gorman <mgorman@...hsingularity.net>,
Rohan Puri <rohan.puri15@...il.com>,
Mcgrof Chamberlain <mcgrof@...nel.org>,
Adam Manzanares <a.manzanares@...sung.com>,
"\"Vishal Moola (Oracle)\"" <vishal.moola@...il.com>
Subject: Re: [PATCH v1 0/4] Enable >0 order folio memory compaction
On 21 Nov 2023, at 10:46, Ryan Roberts wrote:
>>
>> vm-scalability results
>> ===
>>
>> =========================================================================================
>> compiler/kconfig/rootfs/runtime/tbox_group/test/testcase:
>> gcc-13/defconfig/debian/300s/qemu-vm/mmap-xread-seq-mt/vm-scalability
>>
>> commit:
>> 6.6.0-rc4-mm-everything-2023-10-21-02-40+
>> 6.6.0-rc4-split-folio-in-compaction+
>> 6.6.0-rc4-folio-migration-in-compaction+
>> 6.6.0-rc4-folio-migration-free-page-split+
>> 6.6.0-rc4-folio-migration-free-page-split-sort-src+
>>
>> 6.6.0-rc4-mm-eve 6.6.0-rc4-split-folio-in-co 6.6.0-rc4-folio-migration-i 6.6.0-rc4-folio-migration-f 6.6.0-rc4-folio-migration-f
>> ---------------- --------------------------- --------------------------- --------------------------- ---------------------------
>> %stddev %change %stddev %change %stddev %change %stddev %change %stddev
>> \ | \ | \ | \ | \
>> 12896955 +2.7% 13249322 -4.0% 12385175 ± 5% +1.1% 13033951 -0.4% 12845698 vm-scalability.throughput
>
> Hi Zi,
>
> Are you able to add any commentary to these results? I'm struggling to
> interpret them. Is a positive or negative change better (are they times or
> rates)? What are the stddev values? The title suggests percent but the values
> are huge - I'm trying to understand what the error bars look like - are the
> swings real or noise?
The metric is vm-scalability.throughput, so higher is better. Some %stddev
values are not shown because they are too small. For
6.6.0-rc4-folio-migration-in-compaction+, %stddev is greater than %change,
so that change might just be noise.
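To make that noise check concrete, here is a minimal sketch (plain Python
with made-up per-iteration numbers, not output from the lkp tooling) of the
comparison I am describing: when the relative standard deviation of the runs
is at least as large as the reported %change, the change is within
run-to-run noise:

    # Illustrative noise check; the sample lists are hypothetical, not lkp data.
    import statistics

    def percent_change(base_mean, new_mean):
        # Relative change of the patched mean versus the baseline mean.
        return (new_mean - base_mean) / base_mean * 100

    def percent_stddev(samples):
        # Coefficient of variation across iterations, in percent.
        return statistics.stdev(samples) / statistics.mean(samples) * 100

    # Hypothetical per-iteration throughput samples for base and patched kernels.
    base = [12900000, 12410000, 13300000]
    patched = [12390000, 13050000, 11700000]

    change = percent_change(statistics.mean(base), statistics.mean(patched))
    noise = percent_stddev(patched)
    print(f"%change={change:+.1f}%, %stddev={noise:.1f}%")
    if abs(change) <= noise:
        print("change is within run-to-run noise")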
Also, I talked to DavidH at the last THP Cabal meeting about this. He suggested
that there is a lot of noise in vm-scalability results like the ones here, and
that I should run more iterations and on bare metal. I am currently rerunning
the tests on bare metal and with more iterations on the existing VM, and will
report the results later. Please note that these runs take quite some time.
In addition, I will look for other fragmentation-related benchmarks, so that we
can see the impact on memory fragmentation.
--
Best Regards,
Yan, Zi