Message-ID: <y2jpx4xcl34xxrh76jms7wojyhvjvigto4phmdek2ewbcyq32f@2owu5ndtama7>
Date: Wed, 12 Mar 2025 14:19:02 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Qun-wei Lin (林群崴) <Qun-wei.Lin@...iatek.com>
Cc: "21cnbao@...il.com" <21cnbao@...il.com>, 
	"senozhatsky@...omium.org" <senozhatsky@...omium.org>, 
	Chinwen Chang (張錦文) <chinwen.chang@...iatek.com>, 
	Andrew Yang (楊智強) <Andrew.Yang@...iatek.com>, Casper Li (李中榮) <casper.li@...iatek.com>, 
	"nphamcs@...il.com" <nphamcs@...il.com>, "chrisl@...nel.org" <chrisl@...nel.org>, 
	James Hsu (徐慶薰) <James.Hsu@...iatek.com>, 
	AngeloGioacchino Del Regno <angelogioacchino.delregno@...labora.com>, "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>, 
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, 
	"linux-mediatek@...ts.infradead.org" <linux-mediatek@...ts.infradead.org>, "ira.weiny@...el.com" <ira.weiny@...el.com>, 
	"linux-mm@...ck.org" <linux-mm@...ck.org>, "dave.jiang@...el.com" <dave.jiang@...el.com>, 
	"vishal.l.verma@...el.com" <vishal.l.verma@...el.com>, "schatzberg.dan@...il.com" <schatzberg.dan@...il.com>, 
	"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>, "ryan.roberts@....com" <ryan.roberts@....com>, 
	"minchan@...nel.org" <minchan@...nel.org>, "axboe@...nel.dk" <axboe@...nel.dk>, 
	"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>, "kasong@...cent.com" <kasong@...cent.com>, 
	"nvdimm@...ts.linux.dev" <nvdimm@...ts.linux.dev>, 
	"linux-arm-kernel@...ts.infradead.org" <linux-arm-kernel@...ts.infradead.org>, "matthias.bgg@...il.com" <matthias.bgg@...il.com>, 
	"ying.huang@...el.com" <ying.huang@...el.com>, "dan.j.williams@...el.com" <dan.j.williams@...el.com>
Subject: Re: [PATCH 0/2] Improve Zram by separating compression context from
 kswapd

On (25/03/11 14:12), Qun-wei Lin (林群崴) wrote:
> > > If compression kthread-s can run (have CPUs to be scheduled on).
> > > This looks a bit like a bottleneck.  Is there anything that
> > > guarantees forward progress?  Also, if compression kthreads
> > > constantly preempt kswapd, then it might not be worth it to
> > > have compression kthreads, I assume?
> >
> > Thanks for your critical insights, all of which are valuable.
> >
> > Qun-Wei is likely working on an Android case where the CPU is
> > relatively idle in many scenarios (though there are certainly cases
> > where all CPUs are busy), but free memory is quite limited.
> > We may soon see benefits for these types of use cases. I expect
> > Android might have the opportunity to adopt it before it's fully
> > ready upstream.
> >
> > If the workload keeps all CPUs busy, I suppose this async thread
> > won’t help, but at least we might find a way to mitigate regression.
> >
> > We likely need to collect more data on various scenarios—when
> > CPUs are relatively idle and when all CPUs are busy—and
> > determine the proper approach based on the data, which we
> > currently lack :-)

Right.  The scan/unmap side can move very fast (a rabbit) while the
compressor moves rather slowly (a tortoise).  There is some benefit
in the fact that kswapd does the compression directly, I'd presume.

Another thing to consider, perhaps, is that not every page
necessarily has to go through the compressor queue and sit there
until the woken-up compressor finally picks it up, only to discover
that the page is filled with 0xff (or any other pattern).  At least
on the zram side, such pages are not compressed at all; they are
stored as an 8-byte pattern in the zram meta table (without using
any zsmalloc memory).

> > > If we have a pagefault and need to map a page that is still in
> > > the compression queue (not compressed and stored in zram yet, e.g.
> > > due to scheduling latency + a slow compression algorithm), then what
> > > happens?
> >
> > Doesn't this happen even without the patch?  Right now we
> > have 4 steps:
> > 1. add_to_swap: The folio is added to the swapcache.
> > 2. try_to_unmap: PTEs are converted to swap entries.
> > 3. pageout: The folio is written back.
> > 4. Swapcache is cleared.
> >
> > If a swap-in occurs between 2 and 4, doesn't that mean
> > we've already encountered the case where we hit
> > the swapcache for a folio undergoing compression?
> >
> > It seems we might have an opportunity to cancel compression
> > if the request is still in the queue and compression hasn't
> > started for the folio yet?  Though that seems quite difficult
> > to do?
> 
> As Barry explained, these folios that are being compressed are in the
> swapcache. If a refault occurs during the compression process, its
> correctness is already guaranteed by the swap subsystem (similar to
> other asynchronous swap devices).

Right.  I was just thinking that there is now a wake_up between
scan/unmap and compress.  Not sure how much trouble this can cause.

> Indeed, cancelling compression for a folio that is already queued
> is a challenging task. Would this require some modifications to the
> current architecture of the swap subsystem?

Yeah, I'll leave it to the mm folks to decide :)
