Message-ID: <unyov4aypoaotj56m5scgd4qtjfn2mceb4zdmtaek42dfqaq3t@lrrqwojlmudp>
Date: Thu, 8 May 2025 14:58:14 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: Christoph Hellwig <hch@...radead.org>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Yosry Ahmed <yosry.ahmed@...ux.dev>, Vitaly Wool <vitaly.wool@...sulko.se>, linux-mm@...ck.org,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org, Nhat Pham <nphamcs@...il.com>,
Shakeel Butt <shakeel.butt@...ux.dev>, Johannes Weiner <hannes@...xchg.org>,
Minchan Kim <minchan@...nel.org>, Igor Belousov <igor.b@...dev.am>,
Herbert Xu <herbert@...dor.apana.org.au>
Subject: Re: [PATCH] mm/zblock: use vmalloc for page allocations
On (25/05/06 23:54), Christoph Hellwig wrote:
> On Wed, May 07, 2025 at 03:08:08PM +0900, Sergey Senozhatsky wrote:
> > > This sounds interesting. We might get rid of lots of memcpy()
> > > in object read/write paths, and so on. I don't know if 0-order
> > > chaining was the only option for zsmalloc, or just happened to
> > > be the first one.
> >
> > I assume we might have problems with zspage release path. vfree()
> > should break .swap_slot_free_notify, as far as I can see.
> > .swap_slot_free_notify is called under swap-cluster spin-lock,
> > so if we free the last object in the zspage we cannot immediately
> > free that zspage, because vfree() might_sleep().
>
> Note that swap_slot_free_notify really needs to go away in favor
> of just sending a discard bio. Having special block ops for a
> single user bypassing the proper block interface is not sustainable.
Oh, I didn't realize that zram was the only swap_slot_free_notify
user. zram already handles REQ_OP_DISCARD/REQ_OP_WRITE_ZEROES, so
I guess only the swap-cluster side needs some work. Are there any
blockers/complications there?
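
Regarding the quoted vfree() vs swap-cluster spin-lock point above:
below is a minimal sketch (my assumption, not something from the
zblock/zsmalloc patches) of the usual way to release a vmalloc()-ed
zspage when the last object is freed from atomic context: defer the
actual vfree() to a work item that runs in process context. All the
names here (struct zspage, its members, the helpers) are illustrative.

#include <linux/workqueue.h>
#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <linux/container_of.h>

struct zspage {
	void *vaddr;			/* vmalloc()-ed backing memory */
	struct work_struct free_work;	/* deferred release */
};

static void zspage_free_work(struct work_struct *work)
{
	struct zspage *zspage = container_of(work, struct zspage, free_work);

	/* Process context: vfree() is allowed to sleep here. */
	vfree(zspage->vaddr);
	kfree(zspage);
}

/* May be called under the swap-cluster spin-lock, so it must not sleep. */
static void zspage_release(struct zspage *zspage)
{
	INIT_WORK(&zspage->free_work, zspage_free_work);
	schedule_work(&zspage->free_work);
}

The obvious trade-off is that the memory is returned a bit later and
outside the lock's critical section, which is the kind of behavior
change that would need an audit on the allocator side.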
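
And for the discard direction: a rough sketch of what the swap side
could do instead of calling ->swap_slot_free_notify (again an
assumption, not an actual patch). blkdev_issue_discard() submits and
waits for completion, so it has to run outside the cluster lock, which
is presumably the "some work" part. The sector math assumes a
block-device-backed swap area (e.g. zram) with a direct page-to-block
mapping, and discard_freed_swap_slot() is a made-up helper name.

#include <linux/blkdev.h>
#include <linux/swap.h>

/* Hypothetical helper: discard the blocks backing a just-freed swap slot. */
static int discard_freed_swap_slot(struct swap_info_struct *si,
				   unsigned long offset)
{
	sector_t nr_sects = 1 << (PAGE_SHIFT - SECTOR_SHIFT);
	sector_t sector = (sector_t)offset << (PAGE_SHIFT - SECTOR_SHIFT);

	/* Sleeps until the discard completes; no spin-locks held here. */
	return blkdev_issue_discard(si->bdev, sector, nr_sects, GFP_NOIO);
}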