Message-ID: <7bnmkuodymm33yclp6e5oir2sqnqmpwlsb5qlxqyawszb5bvlu@l63wu3ckqihc>
Date: Wed, 7 Jan 2026 13:28:11 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: zhangdongdong <zhangdongdong925@...a.com>
Cc: Sergey Senozhatsky <senozhatsky@...omium.org>,
Andrew Morton <akpm@...ux-foundation.org>, Richard Chang <richardycc@...gle.com>,
Minchan Kim <minchan@...nel.org>, Brian Geffon <bgeffon@...gle.com>,
David Stevens <stevensd@...gle.com>, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
linux-block@...r.kernel.org, Minchan Kim <minchan@...gle.com>
Subject: Re: [PATCHv2 1/7] zram: introduce compressed data writeback

On (26/01/07 11:50), zhangdongdong wrote:
> Hi Sergey,
>
> Thanks for the work on decompression-on-demand.
>
> One concern I’d like to raise is the use of a workqueue for readback
> decompression. In our measurements, deferring decompression to a worker
> introduces non-trivial scheduling overhead, and under memory pressure
> the added latency can be noticeable (tens of milliseconds in some cases).

The problem is that those bio completions happen in atomic context, and
zram requires both compression and decompression to be non-atomic. And
we can't do a sync read on the zram side, because those bio-s are
chained. So the current plan is to look at how the system hi-prio
per-CPU workqueue will handle this.
Did you try the high-priority workqueue?
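
For illustration, a minimal sketch of that deferral, assuming the read-back
bio's ->bi_private carries a per-read work item allocated by the submitter
(the zram_rb_* names below are hypothetical, not the actual patch): the
end_io callback runs in atomic context and only queues the work, while the
decompression runs later on system_highpri_wq in process context.

/*
 * Minimal sketch (not the actual zram patch): defer decompression from
 * the atomic bio completion path onto the system high-priority per-CPU
 * workqueue so the decompressor can run in process context.
 * All zram_rb_* names here are hypothetical.
 */
#include <linux/bio.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct zram_rb_work {
	struct work_struct work;
	struct bio *bio;	/* completed read-back bio */
};

static void zram_rb_decompress_fn(struct work_struct *work)
{
	struct zram_rb_work *rbw = container_of(work, struct zram_rb_work, work);

	/* Process context: a sleeping decompressor is fine here. */
	/* ... decompress the data rbw->bio just read back ... */

	bio_put(rbw->bio);
	kfree(rbw);
}

/* bio ->bi_end_io: called in atomic (IRQ/softirq) context. */
static void zram_rb_end_io(struct bio *bio)
{
	struct zram_rb_work *rbw = bio->bi_private;

	/* Can't decompress here; punt to the hi-prio per-CPU workqueue. */
	INIT_WORK(&rbw->work, zram_rb_decompress_fn);
	queue_work(system_highpri_wq, &rbw->work);
}

Since system_highpri_wq is per-CPU and created with WQ_HIGHPRI, the work
item should run at elevated priority on the completing CPU, which is what
we hope will keep the added scheduling latency small.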