Message-ID: <87msk2vgd4.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Fri, 20 Sep 2024 17:12:07 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: "Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com>
Cc: Nhat Pham <nphamcs@...il.com>,  Yosry Ahmed <yosryahmed@...gle.com>,
  "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
  "linux-mm@...ck.org" <linux-mm@...ck.org>,  "hannes@...xchg.org"
 <hannes@...xchg.org>,  "chengming.zhou@...ux.dev"
 <chengming.zhou@...ux.dev>,  "usamaarif642@...il.com"
 <usamaarif642@...il.com>,  "ryan.roberts@....com" <ryan.roberts@....com>,
  "21cnbao@...il.com" <21cnbao@...il.com>,  "akpm@...ux-foundation.org"
 <akpm@...ux-foundation.org>,  "Zou, Nanhai" <nanhai.zou@...el.com>,
  "Feghali, Wajdi K" <wajdi.k.feghali@...el.com>,  "Gopal, Vinodh"
 <vinodh.gopal@...el.com>
Subject: Re: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios

"Sridhar, Kanchana P" <kanchana.p.sridhar@...el.com> writes:

> Hi Nhat,
>
>> -----Original Message-----
>> From: Nhat Pham <nphamcs@...il.com>
>> Sent: Thursday, August 29, 2024 4:46 PM
>> To: Yosry Ahmed <yosryahmed@...gle.com>
>> Cc: Sridhar, Kanchana P <kanchana.p.sridhar@...el.com>; linux-
>> kernel@...r.kernel.org; linux-mm@...ck.org; hannes@...xchg.org;
>> chengming.zhou@...ux.dev; usamaarif642@...il.com;
>> ryan.roberts@....com; Huang, Ying <ying.huang@...el.com>;
>> 21cnbao@...il.com; akpm@...ux-foundation.org; Zou, Nanhai
>> <nanhai.zou@...el.com>; Feghali, Wajdi K <wajdi.k.feghali@...el.com>;
>> Gopal, Vinodh <vinodh.gopal@...el.com>
>> Subject: Re: [PATCH v6 0/3] mm: ZSWAP swap-out of mTHP folios
>> 
>> On Thu, Aug 29, 2024 at 3:49 PM Yosry Ahmed <yosryahmed@...gle.com>
>> wrote:
>> >
>> > On Thu, Aug 29, 2024 at 2:27 PM Kanchana P Sridhar
>> >
>> > We are basically comparing zram with zswap in this case, and it's not
>> > fair because, as you mentioned, the zswap compressed data is being
>> > accounted for while the zram compressed data isn't. I am not really
>> > sure how valuable these test results are. Even if we remove the cgroup
>> > accounting from zswap, we won't see an improvement; we should expect
>> > performance similar to zram's.
>> >
>> > I think the test results that are really valuable are case 1, where
>> > zswap users are currently disabling CONFIG_THP_SWAP, and get to enable
>> > it after this series.
>> 
>> Ah, this is a good point.
>> 
>> I think the point of comparing mTHP zswap vs. mTHP (SSD) swap is more
>> of a sanity check. IOW, if mTHP swap outperforms mTHP zswap, then
>> something is wrong (otherwise why would anyone enable zswap - might as
>> well just use swap, since SSD swap with mTHP >>> zswap with mTHP >>>
>> zswap without mTHP).
>> 
>> That said, I don't think this benchmark can show it anyway. The access
>> pattern here is such that all the allocated memory is really cold,
>> so swapping to disk (or to zram, which does not account memory usage
>> towards the cgroup) is better by definition... And Kanchana does not
>> seem to have access to a setup with larger SSD swapfiles? :)
>
> As a follow-up, I created a swapfile on disk to increase the SSD swap to 179G.

Are you sure you used a swapfile instead of a swap partition?  From the
following code in scan_swap_map_slots(),

	if (order > 0) {
		/*
		 * Should not even be attempting large allocations when huge
		 * page swap is disabled.  Warn and fail the allocation.
		 */
		if (!IS_ENABLED(CONFIG_THP_SWAP) ||
		    nr_pages > SWAPFILE_CLUSTER) {
			VM_WARN_ON_ONCE(1);
			return 0;
		}

		/*
		 * Swapfile is not block device or not using clusters so unable
		 * to allocate large entries.
		 */
		if (!(si->flags & SWP_BLKDEV) || !si->cluster_info)
			return 0;
	}

large folios will be split before being swapped out when a swapfile is used.
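
If it helps to double-check which kind of swap area was active during the
run, here is a minimal userspace sketch (purely illustrative, not part of
this series) that parses /proc/swaps; its second column reports either
"partition" or "file" for each active swap area:

/*
 * Illustrative check: report whether each active swap area is a
 * partition or a file, based on the Type column of /proc/swaps.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/swaps", "r");
	char line[512];

	if (!f) {
		perror("/proc/swaps");
		return 1;
	}

	/* Skip the header: "Filename Type Size Used Priority" */
	if (!fgets(line, sizeof(line), f)) {
		fclose(f);
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		char name[256], type[32];

		if (sscanf(line, "%255s %31s", name, type) == 2)
			printf("%s: %s%s\n", name, type,
			       !strcmp(type, "file") ?
			       " (order > 0 swap allocations will fail here)" : "");
	}

	fclose(f);
	return 0;
}

(swapon --show, or just looking at /proc/swaps by hand, gives the same
information, of course.)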

--
Best Regards,
Huang, Ying

>  64KB mTHP (cgroup memory.high set to 40G, no swap limit):
>  =========================================================
>  CONFIG_THP_SWAP=Y
>  Sapphire Rapids server with 503 GiB RAM and 179G SSD swap backing device
>  for zswap.
>
>  usemem --init-time -w -O --sleep 0 -n 70 1g:
>
>  -------------------------------------------------------------------------------
>                     mm-unstable 9-17-2024           zswap-mTHP v6     Change wrt
>                                  Baseline                               Baseline
>                                  "before"                 "after"      (sleep 0)
>  -------------------------------------------------------------------------------
>  ZSWAP compressor       zstd     deflate-        zstd    deflate-  zstd deflate-
>                                       iaa                     iaa            iaa
>  -------------------------------------------------------------------------------
>  Throughput (KB/s)    93,273       88,496     143,117     134,131    53%     52%
>  sys time (sec)       316.68       349.00      917.88      877.74  -190%   -152%
>  memcg_high           73,836       83,522     126,120     133,013
>  memcg_swap_fail     261,136      324,533     494,191     578,824
>  pswpin                   16           11           0           0
>  pswpout           1,242,187    1,263,493           0           0
>  zswpin                  694          668         712         702
>  zswpout           3,991,403    4,933,901   9,289,092  10,461,948
>  thp_swpout                0            0           0           0
>  thp_swpout_               0            0           0           0
>   fallback
>  pgmajfault            3,488        3,353       3,377       3,499
>  ZSWPOUT-64kB            n/a          n/a     110,067     103,957
>  SWPOUT-64kB          77,637       78,968           0           0
>  -------------------------------------------------------------------------------
>
> We do see a 50% throughput improvement with mTHP-zswap wrt mTHP-SSD.
> The sys time increase can be attributed to the higher swapout activity
> occurring with zswap-mTHP.
>
> I hope this quantifies the benefit of mTHP-zswap wrt mTHP-SSD in a
> non-swap-constrained setup. The 4G SSD swap setup data I shared
> in my response to Yosry also indicates better throughput with mTHP-zswap
> as compared to mTHP-SSD.
>
> Please do let me know if you have any other questions/suggestions.
>
> Thanks,
> Kanchana
>
>> 
>> >
>> > If we really want to compare CONFIG_THP_SWAP on before and after, it
>> > should be with SSD because that's a more conventional setup. In this
>> > case the users that have CONFIG_THP_SWAP=y only experience the
>> > benefits of zswap with this series. You mentioned experimenting with
>> > usemem to keep the memory allocated longer so that you're able to have
>> > a fair test with the small SSD swap setup. Did that work?
>> >
>> > I am hoping Nhat or Johannes would shed some light on whether they
>> > usually have CONFIG_THP_SWAP enabled or not with zswap. I am trying to
>> > figure out if any reasonable setups enable CONFIG_THP_SWAP with zswap.
>> > Otherwise the testing results from case 1 should be sufficient.
>> >
>> > >
>> > > In my opinion, even though the test setup does not provide an accurate
>> > > way for a direct before/after comparison (because of zswap usage being
>> > > counted in the cgroup, hence towards memory.high), it still seems
>> > > reasonable for zswap_store to support (m)THP, so that further
>> > > performance improvements can be implemented.
>> >
>> > This is only referring to the results of case 2, right?
>> >
>> > Honestly, I wouldn't want to merge mTHP swapout support on its own
>> > just because it enables further performance improvements without
>> > having actual patches for them. But I don't think this captures the
>> > results accurately as it dismisses case 1 results (which I think are
>> > more reasonable).
>> >
>> > Thanks
