Message-ID: <CAGsJ_4zTpcBj_0uC9v4YOHihx-vEek+Y6rr=M1noijwbhfBw7A@mail.gmail.com>
Date: Tue, 11 Jun 2024 12:23:41 +1200
From: Barry Song <21cnbao@...il.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Chuanhua Han <chuanhuahan@...il.com>, Ryan Roberts <ryan.roberts@....com>, 
	akpm@...ux-foundation.org, linux-mm@...ck.org, chengming.zhou@...ux.dev, 
	chrisl@...nel.org, david@...hat.com, hannes@...xchg.org, kasong@...cent.com, 
	linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org, 
	mhocko@...e.com, nphamcs@...il.com, shy828301@...il.com, steven.price@....com, 
	surenb@...gle.com, wangkefeng.wang@...wei.com, willy@...radead.org, 
	xiang@...nel.org, ying.huang@...el.com, yosryahmed@...gle.com, 
	yuzhao@...gle.com, Chuanhua Han <hanchuanhua@...o.com>, 
	Barry Song <v-songbaohua@...o.com>
Subject: Re: [RFC PATCH v3 5/5] mm: support large folios swapin as a whole

On Tue, Jun 11, 2024 at 8:43 AM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
>
> On Thu, Mar 14, 2024 at 08:56:17PM GMT, Chuanhua Han wrote:
> [...]
> > >
> > > So in the common case, swap-in will pull in the same size of folio as was
> > > swapped out. Is that definitely the right policy for all folio sizes? Certainly
> > > it makes sense for "small" large folios (e.g. up to 64K IMHO). But I'm not sure
> > > it makes sense for 2M THP; as the size increases, the chance of actually needing
> > > all of the folio shrinks, so chances are we are wasting IO. There are similar
> > > arguments for CoW, where we currently copy 1 page per fault - it probably makes
> > > sense to copy the whole folio up to a certain size.
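
(For concreteness, the cap Ryan suggests could be as simple as clamping
the swap-in folio order; a hypothetical sketch with made-up names, not
code from this series:

/*
 * Hypothetical policy sketch: reuse the swapped-out folio order for
 * "small" large folios, but cap it so a 2M THP is not speculatively
 * read back in full.
 */
#define SWAPIN_ORDER_CAP	4	/* 64K with 4K base pages */

static inline unsigned int swapin_folio_order(unsigned int swapout_order)
{
	return min_t(unsigned int, swapout_order, SWAPIN_ORDER_CAP);
}
)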
> > For 2M THP, the IO overhead may not necessarily be large? :)
> > 1. If the 2M THP is stored contiguously on the swap device, the IO
> > overhead may not be very large (e.g. submitting a bio with one
> > bio_vec at a time).
> > 2. If the process really needs this 2M of data, one page fault may
> > perform much better than multiple faults.
> > 3. For swap devices like zram, using 2M THP might also improve
> > decompression efficiency.
> >
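
On point 1: since a folio is physically contiguous, a single bio_vec
can describe the whole read when the swap extent is contiguous. A
minimal sketch using the generic block-layer APIs (the helper and its
context are hypothetical, not the actual patch code):

/*
 * Sketch: read a whole folio from a contiguous swap extent with one
 * bio instead of one bio per 4K page. Hypothetical helper only.
 */
static int swap_read_folio_one_bio(struct folio *folio,
				   struct block_device *bdev,
				   sector_t first_sector)
{
	struct bio *bio;
	int ret;

	bio = bio_alloc(bdev, 1, REQ_OP_READ, GFP_KERNEL);
	if (!bio)
		return -ENOMEM;
	bio->bi_iter.bi_sector = first_sector;

	/* One bio_vec covers the entire folio. */
	if (!bio_add_folio(bio, folio, folio_size(folio), 0)) {
		bio_put(bio);
		return -EIO;
	}

	ret = submit_bio_wait(bio);
	bio_put(bio);
	return ret;
}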
>
> Sorry for the late response. Do we have any performance data backing the
> above claims, particularly for the zswap/swap-on-zram cases?

No need to say sorry. You are always welcome to give comments.

This, combined with the zram modification, not only improves the
compression ratio but also reduces CPU time significantly. You may
find some data here [1].

granularity   orig_data_size(B)   compr_data_size(B)   time(us)
4KiB-zstd     1048576000          246876055            50259962
64KiB-zstd    1048576000          199763892            18330605
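
In other words, at 64KiB granularity the compressed size drops from
about 23.5% of the original data to about 19.1%, and the reported time
drops from ~50.3s to ~18.3s, roughly 2.7x faster.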

On mobile devices, we tested swap-in performance by running 100
iterations of swapping in 100MB of data, and the results were as
follows: the time spent on swap-in dropped by about 45%.

                time consumption of swapin (ms)
lz4 4k                  45274
lz4 64k                 22942

zstd 4k                 85035
zstd 64k                46558
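
(From the table: lz4 goes from 45274 ms to 22942 ms, about a 49%
reduction, and zstd from 85035 ms to 46558 ms, about 45%.)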

[1] https://lore.kernel.org/linux-mm/20240327214816.31191-1-21cnbao@gmail.com/

Thanks
Barry
