Message-ID: <SJ0PR11MB5678DE427870D9769D87ECC7C9952@SJ0PR11MB5678.namprd11.prod.outlook.com>
Date: Wed, 28 Aug 2024 19:24:15 +0000
From: "Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com>
To: Nhat Pham <nphamcs@...il.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, "hannes@...xchg.org"
<hannes@...xchg.org>, "yosryahmed@...gle.com" <yosryahmed@...gle.com>,
"ryan.roberts@....com" <ryan.roberts@....com>, "Huang, Ying"
<ying.huang@...el.com>, "21cnbao@...il.com" <21cnbao@...il.com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>, "Zou, Nanhai"
<nanhai.zou@...el.com>, "Feghali, Wajdi K" <wajdi.k.feghali@...el.com>,
"Gopal, Vinodh" <vinodh.gopal@...el.com>, "Sridhar, Kanchana P"
<kanchana.p.sridhar@...el.com>
Subject: RE: [PATCH v5 0/3] mm: ZSWAP swap-out of mTHP folios
Hi Nhat,
> -----Original Message-----
> From: Nhat Pham <nphamcs@...il.com>
> Sent: Wednesday, August 28, 2024 8:55 AM
> To: Sridhar, Kanchana P <kanchana.p.sridhar@...el.com>
> Cc: linux-kernel@...r.kernel.org; linux-mm@...ck.org;
> hannes@...xchg.org; yosryahmed@...gle.com; ryan.roberts@....com;
> Huang, Ying <ying.huang@...el.com>; 21cnbao@...il.com; akpm@...ux-
> foundation.org; Zou, Nanhai <nanhai.zou@...el.com>; Feghali, Wajdi K
> <wajdi.k.feghali@...el.com>; Gopal, Vinodh <vinodh.gopal@...el.com>
> Subject: Re: [PATCH v5 0/3] mm: ZSWAP swap-out of mTHP folios
>
> On Wed, Aug 28, 2024 at 2:35 AM Kanchana P Sridhar
> <kanchana.p.sridhar@...el.com> wrote:
> >
> > Hi All,
> >
> > This patch-series enables zswap_store() to accept and store mTHP
> > folios. The most significant contribution in this series is from the
> > earlier RFC submitted by Ryan Roberts [1]. Ryan's original RFC has been
> > migrated to v6.11-rc3 in patch 2/3 of this series.
> >
> > [1]: [RFC PATCH v1] mm: zswap: Store large folios without splitting
> >      https://lore.kernel.org/linux-mm/20231019110543.3284654-1-ryan.roberts@....com/T/#u
> >
> > Additionally, there is an attempt to modularize some of the functionality
> > in zswap_store(), to make it more amenable to supporting any-order
> > mTHPs. For instance, the function zswap_store_entry() stores a zswap_entry
> > in the xarray. Likewise, zswap_delete_stored_offsets() can be used to
> > delete all offsets corresponding to a higher-order folio stored in zswap.
> >
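[ To illustrate the modularization, here is a minimal sketch of the
  per-subpage store loop. zswap_store_entry() and
  zswap_delete_stored_offsets() are from this series, but their signatures
  here are illustrative; zswap_compress_subpage() is a hypothetical
  placeholder, and the compression and error-unwind details are elided. ]

    /* Minimal sketch; not the actual patch code. */
    static bool zswap_store_folio_sketch(struct folio *folio)
    {
            long nr_pages = folio_nr_pages(folio);
            long i;

            for (i = 0; i < nr_pages; i++) {
                    struct zswap_entry *entry;

                    /* Allocate and compress subpage i (elided). */
                    entry = zswap_compress_subpage(folio, i); /* hypothetical */
                    if (!entry)
                            goto fail;

                    /* Store the entry in the xarray at its swap offset. */
                    if (!zswap_store_entry(entry)) /* signature illustrative */
                            goto fail;
            }
            return true;

    fail:
            /* Delete any offsets already stored for this folio. */
            zswap_delete_stored_offsets(folio); /* signature illustrative */
            return false;
    }
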
> > For accounting purposes, the patch-series adds per-order mTHP sysfs
> > "zswpout" counters that get incremented upon successful zswap_store of
> > an mTHP folio:
> >
> > /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/zswpout
> >
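[ For reference, a one-line sketch of how the counter can be bumped after
  the whole folio is stored successfully. count_mthp_stat() exists in
  mainline; the MTHP_STAT_ZSWPOUT item and the exact call site are
  assumptions based on patch 3 of this series. ]

    /* Sketch: count a successful mTHP zswap store, per-order. */
    count_mthp_stat(folio_order(folio), MTHP_STAT_ZSWPOUT);
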
> > This patch-series is a precursor to ZSWAP compress batching of mTHP
> > swap-out and decompress batching of swap-ins based on swapin_readahead(),
> > using Intel IAA hardware acceleration, which we would like to submit in
> > subsequent RFC patch-series, with performance improvement data.
> >
> > Thanks to Ying Huang for pre-posting review feedback and suggestions!
> >
> > Changes since v4:
> > =================
> > 1) Published before/after data with zstd, as suggested by Nhat (Thanks
> > Nhat for the data reviews!).
> > 2) Rebased to mm-unstable from 8/27/2024,
> > commit b659edec079c90012cf8d05624e312d1062b8b87.
> > 3) Incorporated the change in memcontrol.h that defines obj_cgroup_get()
> >    if CONFIG_MEMCG is not defined, to resolve build errors reported by the
> >    kernel test robot; per Nhat's and Michal's suggestion, this is folded in
> >    rather than sent as a separate build-fix patch (thanks both!). A sketch
> >    of the stub follows this list.
> > 4) Deleted all same-filled folio processing in zswap_store() of mTHP, as
> > suggested by Yosry (Thanks Yosry!).
> > 5) Squashed the commits that define the new mTHP zswpout stat counters
> >    and that invoke count_mthp_stat() after successful zswap_store() calls
> >    into a single commit. Thanks Yosry for this suggestion!
> >
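[ As referenced in item 3 above, a sketch of the shape of the memcontrol.h
  change; the actual stub in the patch may differ. ]

    #ifndef CONFIG_MEMCG
    /* No-op stub so zswap callers build when memcg is disabled. */
    static inline void obj_cgroup_get(struct obj_cgroup *objcg)
    {
    }
    #endif
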
> > Changes since v3:
> > =================
> > 1) Rebased to mm-unstable commit 8c0b4f7b65fd1ca7af01267f491e815a40d77444.
> >    Thanks to Barry for suggesting aligning with Ryan Roberts' latest
> >    changes to count_mthp_stat() so that it is always defined, even when THP
> >    is disabled. Barry, I have also made one other change in page_io.c,
> >    where count_mthp_stat() is called by count_swpout_vm_event(). I would
> >    appreciate it if you could review this. Thanks!
> >    Hopefully this should resolve the kernel robot build errors.
> >
> > Changes since v2:
> > =================
> > 1) Gathered usemem data using SSD as the backing swap device for zswap,
> >    as suggested by Ying Huang. Ying, I would appreciate it if you could
> >    review the latest data. Thanks!
> > 2) Generated the base commit info in the patches to attempt to address
> > the kernel test robot build errors.
> > 3) No code changes to the individual patches themselves.
> >
> > Changes since RFC v1:
> > =====================
> >
> > 1) Use sysfs for zswpout mTHP stats, as per Barry Song's suggestion.
> > Thanks Barry!
> > 2) Addressed some of the code review comments that Nhat Pham provided in
> >    Ryan's initial RFC [1]:
> >    - Added a comment about the cgroup zswap limit checks occurring once per
> >      folio at the beginning of zswap_store().
> >      Nhat, Ryan, please do let me know if the comments convey the summary
> >      from the RFC discussion. Thanks!
> >    - Posted data on running the cgroup suite's zswap kselftest.
> > 3) Rebased to v6.11-rc3.
> > 4) Gathered performance data with usemem and the rebased patch-series.
> >
> > Performance Testing:
> > ====================
> > Testing of this patch-series was done with the v6.11-rc3 mainline, without
> > and with this patch-series, on an Intel Sapphire Rapids server with two
> > sockets, 56 cores per socket, and 4 IAA devices per socket.
> >
> > The system has 503 GiB RAM, with 176GiB ZRAM (35% of available RAM) as the
> > backing swap device for ZSWAP. zstd is configured as the ZRAM compressor.
> > Core frequency was fixed at 2500 MHz.
> >
> > The vm-scalability "usemem" test was run in a cgroup whose memory.high
> > was fixed at 40G. There is no swap limit set for the cgroup. Following a
>
> I thought it was 60G. Why are we reducing it to 40G here? Just curious :)
That's correct, Nhat. This goes back to the original 40G memory.high setup
that Ryan reported using in [2].

[2] https://lore.kernel.org/linux-mm/20240408183946.2991168-1-ryan.roberts@arm.com/

Since I am back to using the 176GiB ZRAM as the backing swap device for ZSWAP,
I could use the more stringent 40G limit.

I had to increase the memory limit for the v4 experiments with 4G SSD swap for
the experiment to remain viable and still generate swap-out activity, as follows:

  64K mTHP experiments: cgroup memory.high fixed at 60G
  2M THP experiments:   cgroup memory.high fixed at 55G
>
> > similar methodology as in Ryan Roberts' "Swap-out mTHP without splitting"
> > series [2], 70 usemem processes were run, each allocating and writing 1G of
> > memory:
> >
> > usemem --init-time -w -O -n 70 1g
> >
> > The vm/sysfs mTHP stats included with the performance data provide details
> > on the swapout activity to ZSWAP/swap.
> >
> > Other kernel configuration parameters:
> >
> > ZSWAP Compressors : zstd, deflate-iaa
> > ZSWAP Allocator : zsmalloc
> > SWAP page-cluster : 2
> >
> > In the experiments where "deflate-iaa" is used as the ZSWAP compressor,
> > IAA "compression verification" is enabled. Each IAA compression is thus
> > decompressed internally by the "iaa_crypto" driver, the CRCs returned by
> > the hardware are compared, and errors are reported in case of mismatches.
> > "deflate-iaa" therefore helps ensure better data integrity as compared to
> > the software compressors.
> >
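[ Conceptually, the verification flow looks like the sketch below. This is
  not the iaa_crypto code; hw_compress()/hw_decompress() are hypothetical
  stand-ins for the driver's descriptor-submission paths. ]

    static int compress_and_verify(const u8 *src, size_t src_len,
                                   u8 *dst, size_t *dst_len, u8 *scratch)
    {
            size_t verify_len = src_len;
            u32 crc_comp, crc_decomp;
            int ret;

            ret = hw_compress(src, src_len, dst, dst_len, &crc_comp);
            if (ret)
                    return ret;

            /* Decompress the result internally and compare the CRCs. */
            ret = hw_decompress(dst, *dst_len, scratch, &verify_len,
                                &crc_decomp);
            if (ret)
                    return ret;

            if (verify_len != src_len || crc_comp != crc_decomp)
                    return -EIO; /* report a data-integrity mismatch */

            return 0;
    }
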
> > Throughput is derived by averaging the 70 individual processes' throughputs
> > reported by usemem. sys time is measured with perf. All data points are
> > averaged across 3 runs.
> >
> > 64KB mTHP (cgroup memory.high set to 40G):
> > ==========================================
> >
> > -----------------------------------------------------------------------------
> >                       v6.11-rc3 mainline       zswap-mTHP          Change wrt
> >                           Baseline                                  Baseline
> > -----------------------------------------------------------------------------
> > ZSWAP compressor         zstd  deflate-      zstd  deflate-    zstd  deflate-
> >                                     iaa                 iaa               iaa
> > -----------------------------------------------------------------------------
> > Throughput (KB/s)     161,496   156,343   140,363   151,938    -13%      -3%
> > sys time (sec)         771.68    802.08    954.85    735.47    -24%       8%
> > memcg_high            111,223   110,889   138,651   133,884
> > memcg_swap_high             0         0         0         0
> > memcg_swap_fail             0         0         0         0
> > pswpin                     16        16         0         0
> > pswpout             7,471,472 7,527,963         0         0
> > zswpin                    635       605       624       639
> > zswpout                 1,509     1,478 9,453,761 9,385,910
> > thp_swpout                  0         0         0         0
> > thp_swpout_fallback         0         0         0         0
> > pgmajfault              3,616     3,430     4,633     3,611
> > ZSWPOUT-64kB              n/a       n/a   590,768   586,521
> > SWPOUT-64kB           466,967   470,498         0         0
> > -----------------------------------------------------------------------------
> >
> > 2MB PMD-THP/2048K mTHP (cgroup memory.high set to 40G):
> > =======================================================
> >
> > -----------------------------------------------------------------------------
> >                       v6.11-rc3 mainline       zswap-mTHP          Change wrt
> >                           Baseline                                  Baseline
> > -----------------------------------------------------------------------------
> > ZSWAP compressor         zstd  deflate-      zstd  deflate-    zstd  deflate-
> >                                     iaa                 iaa               iaa
> > -----------------------------------------------------------------------------
> > Throughput (KB/s)     192,164   194,643   165,005   174,536    -14%     -10%
> > sys time (sec)         823.55    830.42    801.72    676.65      3%      19%
> > memcg_high             16,054    15,936    14,951    16,096
> > memcg_swap_high             0         0         0         0
> > memcg_swap_fail             0         0         0         0
> > pswpin                      0         0         0         0
> > pswpout             8,629,248 8,628,907         0         0
> > zswpin                    560       645     5,333       781
> > zswpout                 1,416     1,503 8,546,895 9,355,760
> > thp_swpout             16,854    16,853         0         0
> > thp_swpout_fallback         0         0         0         0
> > pgmajfault              3,341     3,574     8,139     3,582
> > ZSWPOUT-2048kB            n/a       n/a    16,684    18,270
> > SWPOUT-2048kB          16,854    16,853         0         0
> > -----------------------------------------------------------------------------
>
> OK these numbers are much more positive now. Some observations:
>
> 1. The pswpout and zswpout cells are much more sane now. I still think
> we have issues with the way zswap cgroup charging interacts with our
> reclaim dynamics, but my theory is that these issues only manifest in
> more extreme conditions - high concurrency + fast reclaim path ==
> memory.high limit constantly violated, leading to the vicious cycle of
> overreclaim? zstd has a much better compression ratio than lz4, so
> that probably lowers the violation amount per iteration, which
> compounds over time and drastically reduces the overreclaiming issue.
> We probably should still investigate and fix it though.
I agree with this analysis and summary!
>
> 2. That said, there are still regressions with respect to the mTHP
> case. But it is outperforming in big THP now! This is strange.
Yes, although it is possible that the kernel optimizations for PMD-size THP
are helping in this case.
>
> 3. I also noticed that your pswpin and zswpin rows are all 0 or really
> small. Is this why we are not seeing much gains with zswap? I mean, if
> you are not going to use these pages, offloading them to swap is
> better by definition... I wonder if lowering the memory limit even
> further would show positive numbers? Or
Great observation. I suppose this is in part due to the nature of the workload,
which (as in my latest reply to Yosry's comments on v4) accesses each 8-byte
chunk to write to it once, and that's it. Also, given that the zswap zpool
size is 0 after the workload exits in the 64K mTHP case, combined with the
very few swapins, it appears that the swapped-out folios were mostly part of
the working set: not faulted back in (hence "cold" memory), but ultimately
released when the workload exited.
In the case of 2M THP, however, the kernel seems to have reclaimed truly cold
memory, since the zswap zpool size is 3,134,619,648 bytes (3.1G) after the
workload exits.
>
> >
> > In the "Before" scenario, when zswap does not store mTHP, only allocations
> > count towards the cgroup memory limit. However, in the "After" scenario,
> > with the introduction of zswap_store() mTHP, both, allocations as well as
> > the zswap compressed pool usage from all 70 processes are counted
> towards
> > the memory limit. As a result, we see higher swapout activity in the
> > "After" data. Hence, more time is spent doing reclaim as the zswap cgroup
> > charge leads to more frequent memory.high breaches.
> >
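[ For context, the charge that makes zswap usage count towards memory.high
  is the existing per-entry charge in mm/zswap.c; schematically (surrounding
  context elided): ]

    /* Each successfully compressed subpage is charged to the memcg,
     * so the compressed pool usage itself counts towards memory.high. */
    if (objcg)
            obj_cgroup_charge_zswap(objcg, entry->length);
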
> > This causes degradation in throughput and sys time with zswap mTHP, more so
> > in the case of zstd than with deflate-iaa. Compress latency could play a part
> > in this: when there is more swapout activity happening, a slower compressor
> > would cause allocations to stall for any/all of the 70 processes.
> >
> > In my opinion, even though the test setup does not provide an accurate
> > way for a direct before/after comparison (because zswap usage is charged
> > to the cgroup, hence counted towards memory.high), it still seems
> > reasonable for zswap_store() to support (m)THP, so that further performance
> > improvements can be implemented.
>
> Can we add a knob/config to enable/disable this? Just in case we are
> regressing software compressor users for the sake of hardware
> compressor users. Especially when the former are the majority of the
> users, and the latter requires more investment :)
Sure. I am thinking it would be better to add a config option, say,
CONFIG_THP_ZSWAP_STORE, that is OFF by default. If this sounds OK to you,
I will submit a v6 with this change.
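For instance (a minimal sketch, assuming the hypothetical
CONFIG_THP_ZSWAP_STORE option discussed above), zswap_store() could bail out
early on large folios when the option is off:

    /* Sketch: gate mTHP stores behind the (hypothetical) config option. */
    if (folio_test_large(folio) && !IS_ENABLED(CONFIG_THP_ZSWAP_STORE))
            return false; /* fall back to writing the folio to swap */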
>
> >
> > One of the ideas that has shown promise in our experiments is to improve
> > ZSWAP mTHP store performance using batching. With IAA compress/decompress
> > batching used in ZSWAP, we are able to demonstrate significant performance
> > improvements and memory savings with IAA in scalability experiments, as
> > compared to software compressors. We hope to submit this work in
> > subsequent RFCs.
> >
> > I would greatly appreciate your code review comments and suggestions!
> >
> > Thanks,
> > Kanchana
>
> Thanks for the hard work, Kanchana!
Thanks Nhat :) I really appreciate your reviews, comments and analysis!
Thanks,
Kanchana
>
> >
> > [2] https://lore.kernel.org/linux-mm/20240408183946.2991168-1-ryan.roberts@....com/
> >
> >
> > Kanchana P Sridhar (3):
> >   mm: Define obj_cgroup_get() if CONFIG_MEMCG is not defined.
> >   mm: zswap: zswap_store() extended to handle mTHP folios.
> >   mm: swap: Count successful mTHP ZSWAP stores in sysfs mTHP zswpout
> >     stats.
> >
> >  include/linux/huge_mm.h    |   1 +
> >  include/linux/memcontrol.h |   4 +
> >  mm/huge_memory.c           |   3 +
> >  mm/page_io.c               |   3 +-
> >  mm/zswap.c                 | 231 +++++++++++++++++++++++++++----------
> >  5 files changed, 180 insertions(+), 62 deletions(-)
> >
> >
> > base-commit: b659edec079c90012cf8d05624e312d1062b8b87
> > --
> > 2.27.0
> >