Message-ID: <aswaagdlczqq3sh2okdew2o5jtzmev5ghdz4ksvzmqkfsshbfw@aoxdptshkqvu>
Date: Tue, 13 Jan 2026 13:54:02 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: zhangdongdong <zhangdongdong925@...a.com>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Jens Axboe <axboe@...nel.dk>, Andrew Morton <akpm@...ux-foundation.org>,
Richard Chang <richardycc@...gle.com>, Minchan Kim <minchan@...nel.org>,
Brian Geffon <bgeffon@...gle.com>, David Stevens <stevensd@...gle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, linux-block@...r.kernel.org,
Minchan Kim <minchan@...gle.com>, xiongping1@...omi.com, huangjianan@...omi.com,
wanghui33@...omi.com
Subject: Re: [PATCHv2 1/7] zram: introduce compressed data writeback

Hi,

On (26/01/08 18:36), zhangdongdong wrote:
[..]
> > I don't know if solving it on zram side alone is possible. Maybe we
> > can get some help from the block layer: some sort of two-stage bio
> > submission. First stage: submit chained bio-s, second stage: iterate
> > over all submitted and completed bio-s and decompress the data. Again,
> > just thinking out loud.
> >
>
> Hi Sergey,
>
> My thinking is largely aligned with yours. I agree that relying on zram
> alone is unlikely to fully solve this problem, especially without going
> back to atomic read/write.
>
> Our current mitigation approach is to introduce a hook at the swap layer
> and move decompression there. By doing so, decompression happens in a
> fully sleepable context, which avoids the atomic-context constraints
> you outlined. This helps us sidestep the core issue rather than trying
> to force decompression back into zram completion paths.
This approach covers only the swap use-case, while zram is a
generic block device: one can mkfs on it and use it like any
other block device. So this is not a complete solution.
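
Just to make the two-stage idea from the quoted part a bit more
concrete, below is a very rough, untested sketch (not a patch):
submit all reads to the backing device as chained bio-s first, wait
for completion in process context, and only then decompress.
zram_decompress_page() and the batched calling convention are made
up purely for illustration.

	#include <linux/bio.h>
	#include <linux/blkdev.h>

	/*
	 * Illustration only: stage one submits all reads, stage two
	 * decompresses in a fully sleepable context.
	 * zram_decompress_page() is a made-up helper standing in for
	 * the real zcomp path; error handling is simplified.
	 */
	static int zram_read_back_batch(struct zram *zram,
					struct page **pages,
					sector_t *sectors, int nr)
	{
		struct bio *parent, *bio;
		int i, err;

		/* Stage one: one parent bio plus chained children */
		parent = bio_alloc(zram->bdev, 1, REQ_OP_READ, GFP_NOIO);
		parent->bi_iter.bi_sector = sectors[0];
		__bio_add_page(parent, pages[0], PAGE_SIZE, 0);

		for (i = 1; i < nr; i++) {
			bio = bio_alloc(zram->bdev, 1, REQ_OP_READ,
					GFP_NOIO);
			bio->bi_iter.bi_sector = sectors[i];
			__bio_add_page(bio, pages[i], PAGE_SIZE, 0);
			/* parent completes only after all children do */
			bio_chain(bio, parent);
			submit_bio(bio);
		}

		/* process context, so waiting here is fine */
		err = submit_bio_wait(parent);
		bio_put(parent);
		if (err)
			return err;

		/* Stage two: decompress, no atomic-context limits */
		for (i = 0; i < nr; i++) {
			err = zram_decompress_page(zram, pages[i]);
			if (err)
				break;
		}
		return err;
	}

This obviously only works on paths that are allowed to sleep, which
is the whole point of moving decompression out of the bio completion
handler.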
[..]
> If I recall correctly, this issue first became noticeable after a block
> layer change was merged; I can try to dig that up and share more details
> later.
Interesting.